Chapter V:
General Discussion and Conclusion


One explanation of why the effect of too much choice does not reliably occur might be that in the experiments where the effect could not be found, the number of options, or the difference in number between the conditions, was still not large enough. Because of the vague definition of what constitutes too much choice, it can never be ruled out that an effect would eventually be found with an even higher number of options. Yet this explanation is challenged by the fact that other scholars have found the effect with numbers of options similar to those I used, and that doubling the assortment size from 40 to 80 options in the charity study had no effect on the motivation to choose. Moreover, according to the results of the meta-analysis, there is no relationship between the difference in assortment sizes and the effect size.

As a consequence, it seems worthwhile to discuss other potential explanations and, perhaps most importantly, some theoretical perspectives that might help clarify the too-much-choice effect. Toward this goal, in the following I will link the too-much-choice effect to previous research on decision making. As I will show, the effect can be placed within the broader frameworks of information overload, decision avoidance, and adaptive decision making, on each of which there is a considerable body of research.

According to Simon (1990), behavior is shaped by the interaction between the human information-processing system on one side and the properties of the environment on the other. Starting from this general notion, in the following I will identify potential boundary conditions on the side of the decision maker, on the side of the environmental structure (including the choice set or assortment), and in the interaction between the two. Along the way, I will propose a number of hypotheses that can be explored in future research.

Complementary features as a moderator 


In Chapter I, I discussed the notion of trade-offs and negative attribute correlations as an important environmental structure for a better understanding of choice overload. Extending this idea, scholars have argued that the content of the attributes on which options differ also plays an important role in how much decisional conflict people perceive and in whether choice overload emerges.

Feature complementarity

Based on a series of experiments involving hypothetical choices among different consumer products, Chernev (2005) found that when options differed along what he called “complementary” features, an increase in assortment size from two to five options led to higher choice deferral. When features were “noncomplementary,” in contrast, choice deferral decreased with an increase in assortment size. In Chernev’s terms, features complement each other if a combination of them increases the attractiveness of the option. For example, in his experiment, participants chose among holiday resorts. In the noncomplementary condition, the resorts differed by their location (e.g. Bermuda, Bahamas, Antigua). In the complementary condition, they differed by what they offered to their guests (e.g. fantastic beaches, convenient transportation, exceptional service). In the latter case, an ideal resort would offer all features, and thus choosing any one of the available options implies that certain attractive features have to be forgone. If options differ along noncomplementary features, on the other hand, an increase in assortment size increases the probability that decision makers will find something that matches their preferences.

Chernev further conjectured that the more resorts there are with unique and complementary features, the more the deficits of the other options are highlighted and the less attractive it becomes to choose any of them. To test this hypothesis, Chernev, similar to my charity study, asked participants to give a reason for their choice. He found that in the condition with complementary features, participants were more likely to mention that they missed a certain feature, and this tendency was even stronger when choosing among five options than when choosing between two.

Attribute “alignability”


Chernev’s (2005) results match the findings of Zhang and Fitzsimons (1999), who found that people were more satisfied with the choice process when options differed on noncomplementary features (which Zhang and Fitzsimons called “alignable differences”) than when options differed on complementary features (“nonalignable differences”).

In Zhang and Fitzsimons’s series of four experiments, participants made hypothetical choices among three different types of fictitious microwave popcorn. One group of participants chose among popcorns described on complementary features such as “not likely to burn,” “easy to swallow,” or “few kernels left unpopped.” The other group chose among popcorns described on noncomplementary features such as the origin of the corn (Southwest, Midwest, Northwest) or the size of the kernels (small, medium, large). Participants who chose from the complementary set were subsequently less satisfied with the choice process: they were more frustrated and said they would be less likely to make a choice. Zhang and Fitzsimons argued that this is because the unique, complementary features are more difficult to compare, owing to the lack of a common comparison standard.

The findings of Zhang and Fitzsimons were subsequently confirmed in an experiment involving choices among varying numbers of microwave ovens (Gourville & Soman, 2005). In this study, participants were given a choice between a single oven of brand A and a varying number of ovens of brand B in a between-subjects design; the number of B ovens ranged from one to five. In one condition, the B ovens differed along alignable, or noncomplementary, attributes such as capacity and price; in the other condition, the B ovens differed along nonalignable, or complementary, features (e.g. one oven had a moisture sensor, another had programmable menus). The description of the oven from brand A was the same across conditions. Gourville and Soman found that when the B ovens were described on complementary features, the choice share of oven A increased with the number of B ovens. When the B ovens were described on noncomplementary features, the choice share of oven A decreased with the number of B ovens on offer.

Critical evaluation of complementary features as moderators


Taken together, the results of these studies suggest that trade-offs due to differences along nonalignable or complementary attributes are a necessary precondition for the effect of too much choice. Yet in their experiments, Chernev as well as Gourville and Soman explored small assortments ranging from two to five options. While these researchers claim that their findings generalize to larger assortment sizes as well, so far there is no direct empirical evidence to support this claim.

In Iyengar and Lepper’s (2000) jam and chocolate studies, an effect was found even though the options mainly differed on noncomplementary features such as flavor or type of chocolate. Moreover, there is no reason to believe that the differences in features were of a different quality in the studies that did not find an effect. Also, if the presence of complementary features were a sufficient precondition, the too-much-choice effect should be widespread, because many choice sets in the real world are characterized by options that have unique advantages.

The too-much-choice effect as a special case of information overload

In earlier chapters I repeatedly pointed out that the research on choice overload is remarkably similar to the research on information overload. In the following, I will discuss to what extent choice overload is just a special case of information overload and how much insight can be gained by looking into that literature.


As outlined in Chapter I, Miller (1956) found that decision makers have finite limits to the amount of information they can assimilate and process at any given moment. The information overload paradigm states that if these limits are exceeded, decision makers become confused and make poorer decisions.

Limited channel capacity 

In a series of experiments, Milinski (1990) found that limitations in the amount of information that can be processed within a certain time are not unique to humans but can also be found among other animals. When given a choice between a large swarm of 40 waterfleas and a small one of 2 waterfleas, hungry sticklebacks preferred the large swarm, whereas less-hungry sticklebacks preferred the small swarm. Milinski argued that hunting in the large swarm requires more attention and concentration because of the difficulty of tracking one of many similar-looking targets. Because of this, a stickleback that hunts in the large swarm probably cannot pay sufficient attention to a predator that suddenly approaches the stickleback itself. Earlier, Milinski (1984) had found that frightened sticklebacks (hungry or not) preferred the small swarm and that sticklebacks hunting in the large swarm were less likely to detect an approaching predator; in that experiment, the predator was a model of a kingfisher bird that was flown over the fish tank.

In an analogous experiment on humans, Milinski (1990) gave participants sheets of paper with different numbers of white dots. Their task was to punch 20 white dots with a needle as fast as possible. The time participants needed to punch 20 dots increased with the density of dots on the sheet, and this increase was more pronounced when participants were visually distracted by occasional flashes from a light bulb. Building on this notion of limited channel capacity, previous research on information overload (Jacoby, Speller, & Kohn, 1974a) examined the potential influence of assortment size and assortment structure on confusion, satisfaction, and dysfunctional behavior on the part of the decision maker.

Previous research on information overload

In an early study on information overload, Jacoby et al. (1974a) and Jacoby, Speller, and Kohn Berning (1974b) compared choices among up to 12 different bogus consumer products that were described on a varying number of attributes. Participants were instructed to examine and evaluate all available information. Information load was operationalized as the number of products multiplied by the number of attributes. They found an inverted U-shaped relation between information load and the “accuracy” of the decision. Accuracy was defined as the difference between the chosen option and the option that would have been chosen based on a weighted additive combination of all attribute values, with the weights taken from individual importance ratings that were assessed for each attribute prior to the actual choice. By this definition, too little, but more importantly also too much, information led people to make less accurate decisions.


The findings on information overload by Jacoby and his colleagues (1974a, 1974b) were subsequently heavily criticized on theoretical as well as methodological grounds. The main criticisms were that the original study did not control for chance factors and that the number of products was not sufficiently high (Malhotra, 1984; Malhotra, Jain, & Lagakos, 1982). Other critics argued that a weighted additive model may not have been an appropriate measure of choice accuracy in the first place (Meyer & Johnson, 1989). Because of the difficulty of defining a good decision when it comes to preferential choice, Meyer and Johnson instead called for consistency-based measures such as the probability of picking a dominant option. In a reanalysis of Jacoby et al.’s (1974a) data, Malhotra et al. (1982) completely dismissed Jacoby et al.’s evidence of information overload because the original authors did not control for the fact that the mere chance of randomly choosing a single “best” option decreases with the number of options to choose from. However, in a methodologically more sound study, Malhotra (1982) nevertheless confirmed the hypothesis of Jacoby et al. In this study, Malhotra increased the maximum number of both options and attributes to 25, statistically controlled for chance factors, and included a self-report measure of subjective choice overload, while maintaining a weighted additive model as the normative yardstick for choice accuracy. He found that, on average, dissatisfaction with the act of choosing, confusion, the subjective feeling of being overloaded, and the inaccuracy of the choice all increased with more than 15 attributes or more than 10 options. Malhotra (1982) also found that the number of options and the number of attributes contribute independently to information overload.

Critical evaluation of information overload as a moderator

With regard to the too-much-choice effect, the decrease in accuracy in finding the presumably best option due to too much information might be reflected in decreased satisfaction with the chosen option. Also, if decision makers are able to anticipate this lack of accuracy, they may try to avoid a poor decision in high-information situations by not making any choice at all.

However, in my studies, the amount of information with which the options were described did not seem to make a difference. For instance, in the restaurant study, each restaurant was described on many different attributes, and in the charity study, a considerable amount of information about each organization was provided; neither study yielded a too-much-choice effect. Also, the data from my experiments indicate that independent of assortment size, the more options (e.g. jam, wine, or music) a participant sampled from an assortment, the higher his or her likelihood of choosing. For example, in the jam study, participants who tasted more jams, and thereby gathered more information, were more likely to make a purchase, which indicates that an increase in information led to a higher probability of choosing. Furthermore, Iyengar and Lepper (2000) found the effect with choices among rather simple options such as jam or chocolate, suggesting that the mere amount of information is not sufficient to explain the occurrence of the effect.

Extensions of the information overload paradigm


While the early research on information overload focused on the actual number of options and attributes, what ultimately matters is how an assortment is perceived by a decision maker. Beyond the number of options and attributes, recent studies have shown that the perception of information content also depends on many other structural factors such as the (dis)organization of the assortment and the number and distribution of attribute levels, which can be expressed as the entropy of an assortment (Hoch, Bradlow, & Wansink, 1999; Kahn & Lehmann, 1991; Kahn & Wansink, 2004; Lurie, 2002, 2004).

Entropy

In a study by van Herpen and Pieters (2002), the mere number of options was a poor predictor of participants’ subjective variety perceptions, whereas measures that tapped into structural details of the assortment were good predictors. Each of the 62 participants in their experiment rated the variety of 12 different assortments of bogus products that were characterized by three categorical attributes. The assortments differed in size (ranging from 4 to 16 options), in entropy, and in the degree of association between attributes. Entropy is a concept borrowed from information theory (Shannon & Weaver, 1949) that in its original usage indicates the number of bits necessary to code a given environment. The entropy $I_A$ within a categorical attribute $A$ can be calculated as

$$I_A = -\sum_{j=1}^{m} p_j \log_2 p_j \quad (5\text{-}1)$$

where $p_j$ equals the proportion of options with attribute level $j$, $m$ is the total number of attribute levels, and $\log_2$ is the logarithm to base 2. When only a single attribute level is present (e.g. all jelly beans are red), $p_j$ equals one and the entropy is zero. Entropy increases with the number of attribute levels, and it is highest if all attribute levels occur in equal proportions (e.g. an equal number of red, green, and yellow jelly beans). For assortments with more than one attribute, the entropy measure is commonly summed across all attributes (Fasolo, Hertwig, Huber, & Ludwig, 2006),

$$I = \sum_{k=1}^{l} I_{A_k} \quad (5\text{-}2)$$

where $I_{A_k}$ is the entropy of the $k$th attribute and $l$ is the total number of attributes within the assortment.
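To make the entropy measure concrete, the following sketch computes equations 5-1 and 5-2 for a small assortment. It is written in Python purely for illustration (the simulations reported later in this chapter were run in Matlab), and the jelly-bean assortment, like all function names, is hypothetical.

```python
import math
from collections import Counter

def attribute_entropy(values):
    """Entropy of one categorical attribute (equation 5-1).

    values: one attribute level per option in the assortment.
    """
    n = len(values)
    proportions = [count / n for count in Counter(values).values()]
    return -sum(p * math.log2(p) for p in proportions)

def assortment_entropy(options):
    """Total entropy of an assortment (equation 5-2): the sum of the
    entropies of all attributes."""
    attributes = options[0].keys()
    return sum(attribute_entropy([o[a] for o in options]) for a in attributes)

# Hypothetical assortment of six jelly beans described on two attributes.
jelly_beans = [
    {"color": "red",    "size": "small"},
    {"color": "red",    "size": "large"},
    {"color": "green",  "size": "small"},
    {"color": "green",  "size": "large"},
    {"color": "yellow", "size": "small"},
    {"color": "yellow", "size": "large"},
]
# Colors in equal thirds contribute log2(3); sizes in equal halves, log2(2).
print(assortment_entropy(jelly_beans))  # ~2.585
```

As the example shows, an assortment in which all attribute levels occur equally often attains the maximum entropy for its number of levels.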


As already mentioned in Chapter I, besides entropy, van Herpen and Pieters also measured the degree of conflict between attributes within the assortment by calculating lambda coefficients (Goodman & Kruskal, 1954). They found that in a linear regression predicting perceived variety, attribute dispersion (entropy) and conflict between attributes (lambda) together accounted for 62.5% of the variance in people’s perception of variety, while the mere number of options accounted for only an additional 3.4%.
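For completeness, here is a minimal sketch of how a Goodman–Kruskal lambda coefficient for two categorical attributes can be computed. The brand and price data are invented, and this is not meant to reproduce van Herpen and Pieters’s exact analysis.

```python
from collections import Counter

def goodman_kruskal_lambda(a_values, b_values):
    """Goodman-Kruskal lambda for predicting attribute B from attribute A:
    the proportional reduction in prediction error, from 0 (A tells us
    nothing about B) to 1 (A determines B perfectly)."""
    n = len(b_values)
    # Errors made when always guessing the overall modal level of B.
    baseline_errors = n - max(Counter(b_values).values())
    if baseline_errors == 0:
        return 0.0  # B is constant; there is nothing left to predict
    # Errors made when guessing the modal level of B within each level of A.
    within = {}
    for a, b in zip(a_values, b_values):
        within.setdefault(a, Counter())[b] += 1
    conditional_errors = n - sum(max(c.values()) for c in within.values())
    return (baseline_errors - conditional_errors) / baseline_errors

# Invented example: brand partially predicts price tier.
brand = ["A", "A", "A", "B", "B", "B"]
price = ["low", "low", "mid", "high", "high", "mid"]
print(goodman_kruskal_lambda(brand, price))  # 0.5
```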

Assortment structure in the real world

For real-world assortments in grocery stores, Fasolo et al. (2006) found that the number of attribute levels strongly correlates with the number of products within an assortment, which suggests that each product tends to add a new attribute level. As a consequence, Fasolo et al. also found a strong correlation between the number of options and the entropy within the assortment. The largest share of the entropy measure across product categories was due to the attributes brand and price. Yet for a continuous attribute such as price, it is questionable whether entropy, a measure originally devised for categorical data, is appropriate: if every option has a different value, the entropy is simply a function of the number of options within the assortment, and the high correlation between entropy and assortment size comes as little surprise. For continuous attributes, dispersion measures such as the variance or quartile ranges might be more appropriate.

Entropy affects choice quality

With regard to choices, Lurie (2004, Experiment 1) found that entropy was a good predictor of choice quality. In his experiment, participants made a hypothetical choice from an assortment of pocket calculators that differed with regard to the number of options and with regard to entropy as measured in equation 5-2. Choice quality was operationalized as the probability of choosing a dominant option from the set. Lurie found a main effect of assortment size on the probability of choosing a dominant option, but this effect diminished once entropy was taken into account. These results suggest that information structure, rather than the number of alternatives, is the crucial factor in determining overload.


In line with this, Lee and Lee (2004) found that the quality of decisions (also defined as the probability of choosing a dominant option) depends on entropy and on the number of attributes rather than on the mere number of options. In their experiment, participants chose among different sets of CD players that differed in the number of options (18 vs. 27), the number of attributes on which the CD players were described (9 vs. 18), and the distribution of the attribute values, measured in terms of entropy (high vs. low). In a between-subjects design, they found that an increase in the number of attributes and an increase in entropy both decreased choice quality, whereas the number of options did not have any effect. Likewise, the number of options did not affect participants’ satisfaction with the choice, but satisfaction decreased with the number of attributes. In contrast to the previous findings, though, entropy did not affect satisfaction.

To increase internal validity and to make it easier for participants to pay attention to all attributes, the inter-attribute correlations in the experiments by Lurie (2004) as well as by Lee and Lee (2004) were set around zero. Thus, a high value of an option on one attribute did not reveal anything about the value of that option on another attribute. Such a choice environment might not be what people are used to, though. In their study of real-world consumer environments, Fasolo et al. (2006) found that in grocery stores many attributes are typically negatively correlated, such that a high value on one attribute implies a low value on another. Thus the generalizability of these findings to real-world assortments might be limited.

Critical evaluation of entropy as a moderator

A given increase in the number of options makes it more difficult (and potentially less motivating and less satisfying) to find the best option in a high-entropy assortment than in a low-entropy one. Also, if the entropy of an assortment is high, the perception of variety might increase faster with the number of options than it does when entropy is low. As a consequence, entropy could potentially moderate the effect of too much choice.


In my experiments, I did not control for the entropy within the assortment, but I did assess the perception of variety, which is affected by entropy and can thus be taken as a proxy. While perceived variety differed among the experimental conditions, in absolute terms the variety was commonly not perceived as extremely high. This also holds true for other studies that found the effect (e.g. Iyengar & Lepper’s chocolate study, 2000). Still, perhaps for choice overload to occur reliably, assortments have to be perceived as extraordinarily large and complex, a situation that might only arise when a large number of options is paired with high entropy.

Expedient ordering of the options as a moderator

From yet another perspective on the influence of assortment structure on choice overload, previous research has found that the motivation to choose also increases if the differences between the options are clearly laid out, presumably because the information relevant for comparisons can be perceived more easily and the effort needed to make a choice is lower, and possibly also because a clear structure makes justification easier.

The availability of reasonable categories along which options can be ordered and compared becomes more important the more options there are (Anderson, 2006). As an example, one can think of online retailers that invest great effort in offering alternative ways to search their assortments along several attributes (e.g. price, ratings, or specific features). For a randomized assortment, on the other hand, search clearly gets more difficult as the number of options increases.


Following up on the notion of adaptive decision making, Payne et al. (1992) showed that more information changes the direction of search from alternative-wise (looking up all of the attribute values for one option before going on to the next alternative) to more attribute-wise (comparing all options on a single attribute before going on to the next attribute). Building on this finding, Huffman and Kahn (1998) found that satisfaction with the decision process and with the finally chosen option did not depend on the amount of information per se but on how this information was structured. In their experiments, satisfaction with hypothetical sofas and hotels increased if assortments were ordered along their attribute values, presumably because this layout made it easier for the consumer to process the information. Likewise, in a study on patient decision making, Carrigan, Gardner, Conner, and Maule (2004) found that people’s decisions were closer to the predictions of a weighted additive model if information was ordered such that it could be accessed selectively according to individual preferences, as compared to a condition in which all information was presented in a predetermined order.

In an early study by Russo (1977), shoppers in a grocery store saved an average of 2% of their spending by purchasing cheaper products when all brands within a category were sorted on one list according to their price, as compared to a regular grocery store with separate price tags on each item. Russo argued that providing the price information in this convenient way made it easier for shoppers to use it when making a choice.

Critical evaluation of the ordering as a moderator

Considering these findings together, if the assortment structure matches people’s preferred search and decision strategies, the choice becomes easier and more satisfying. This relationship might be especially pronounced for large assortments; for small assortments, decision makers might be less affected by the ordering because it is easier to get an overview. Thus, a mismatch between the assortment structure and the decision strategy might be a necessary precondition for the too-much-choice effect.


However, in my experiments, the options were not ordered in any sensible way, which should have increased the likelihood of finding the effect. For example, in the jelly bean and the music studies, the options were randomly distributed. In the charity study, the options were ordered alphabetically, but the first letter of a charity’s name was completely uninformative about the mission of the organization.

Interaction between environment and decision strategies

In my discussion of information overload and its related concepts, I have mainly focused on structural aspects of assortments that go beyond the mere number of options. These aspects included various types of conflict between attributes and options, the number of attributes, the distribution of attribute values (entropy), and the way the options are ordered. Yet following Simon’s (1990) allegory of a pair of scissors, the structure of the environment is only one of the two blades that need to be considered in order to understand human decision making. The other blade represents the decision strategies used within a given environment and how these strategies might change depending on the situation. According to Simon, both aspects are equally important for understanding, explaining, and predicting decision making.

In Chapter I, I mentioned the notion of adaptive decision heuristics such as satisficing and how the use of these heuristics can shield people from being overloaded with choice. In the following, I will further elaborate on the interaction between decision strategies and the effect of too much choice, showing that decision strategies are a key to understanding when and why choice overload occurs.

Weighted additive model as normative standard


The information overload and entropy literatures are not completely silent about decision strategies, because, as outlined above, being “overloaded” is commonly defined as a deviation from the allegedly normative standard of a weighted additive decision model. For example, in a study on information overload, Keller and Staelin (1987) defined decision effectiveness as the degree to which individuals obey a weighted additive rule. As stated earlier, such a rule requires that all information be weighted by its importance and then integrated in an additive way to obtain an overall preference or quality value for each option. Following this procedure, the option with the highest value should eventually be selected. Based on this definition, Keller and Staelin found that decision effectiveness decreases once the amount of information surpasses a certain threshold.

A weighted additive rule is a prime example of a so-called compensatory decision rule, because it implies that one or more good values on one attribute can outweigh one or more bad values on another attribute, and vice versa. As a psychological process model of choice, weighting and adding requires a considerable amount of time to gather and assess all the relevant attribute values and importance weights, and considerable computation to combine all this information into an overall judgment of each choice alternative.
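As a minimal sketch of what such a rule demands, the following Python fragment implements a weighted additive choice for two hypothetical resorts; all names, attribute values, and weights are invented for illustration.

```python
def weighted_additive_choice(options, weights):
    """Weighted additive (WADD) rule: weight every attribute value by its
    importance, add the products, and pick the highest-scoring option."""
    def score(option):
        return sum(weights[a] * v for a, v in option["attributes"].items())
    return max(options, key=score)

# Hypothetical resorts rated on a 1-10 scale; importance weights sum to 1.
resorts = [
    {"name": "Resort A", "attributes": {"beach": 9, "service": 4}},
    {"name": "Resort B", "attributes": {"beach": 5, "service": 8}},
]
weights = {"beach": 0.6, "service": 0.4}
print(weighted_additive_choice(resorts, weights)["name"])  # Resort A (7.0 vs. 6.2)
```

Note that the rule needs every attribute value and every importance weight before it can score even a single option, which is exactly the demand on time and computation just described.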

Simple heuristics as more appropriate models of choice

Because of these somewhat unrealistic demands that weighted additive rules make on human cognitive abilities, previous research on judgment and decision making has seriously questioned them as reasonable models of human decision making in many common circumstances (Dawes, 1979; Einhorn & Hogarth, 1975). Instead, the research tradition of so-called simple heuristics (Gigerenzer et al., 1999) proposes decision mechanisms that overcome the problems of weighted additive models and other complex decision rules. First, the simple heuristics framework suggests that people are often frugal in terms of the information they assess for a choice; second, it proposes that instead of aggregating many pieces of information by weighting and adding, people make their choices based on much simpler yet still effective decision rules. The key assumptions of this heuristics approach are that decision makers have limited time and computational resources (exhibiting what Simon called “bounded rationality”), and that rather than trying to determine “the best” option, they search for something that is “good enough” (Schwartz, 2004; Simon, 1955). There is considerable evidence that people’s decision-making processes can indeed often be characterized by rules of thumb that work reasonably well in many situations (Bröder, 2000, 2003; Gigerenzer & Goldstein, 1996; Payne et al., 1992; Scheibehenne & Bröder, 2007; Scheibehenne, Miesler, & Todd, 2007; Svenson, 1979; Wright, 1975).

Adaptive decision making


For frugal strategies to be effective, the research tradition on simple heuristics further assumes that the decision strategies people use are adapted to the environment. As already mentioned in Chapter I, adaptive shifts of strategy that depend on environmental characteristics are a well-established finding for which a large body of empirical evidence exists (see Ford, Schmitt, Schechtman, Hults, & Doherty, 1989, for a review).

Despite these findings, it is still a widespread idea that the accuracy of a decision can be judged by comparing it to a weighted additive model. For example, Bettman, Luce, and Payne (1998) argued that a weighted additive model best reflects people’s preferences and therefore defines a normative yardstick against which the quality of a decision can be compared.

Yet if we think of decision makers as “adaptive” (Payne et al., 1992), then deviating from a weighted additive model need not lead to a decrease in decision quality; such decision makers are simply applying a different (arguably more adaptive) heuristic. If a weighted additive model is regarded as one possible strategy among many, it seems peculiar to define its outcome as the normative standard against which the outcomes of other strategies are evaluated. In fact, the research tradition of simple heuristics (Gigerenzer et al., 1999) provides a number of good reasons why a weighted additive model should not be taken as a prime standard for human decision making. If the perspective on human decision making is broadened, for example by taking into account search costs, computational limitations, psychological feasibility, social constraints, or robustness toward external changes, the normative claim of weighted additive strategies quickly loses ground (Gigerenzer et al., 1999). From this perspective, whether a strategy is normative or rational should not depend solely on “internal” criteria, such as consistency or obedience to the rules of formal logic, but also on its success within the environment in which it operates.

Simple heuristics shield from information overload


Acknowledging the importance of adaptive changes in decision strategy to accommodate changes in the environment, Malhotra (1982) conjectured that “a major variable influencing the outcome of overload may be the nature of the decision-making process” (p. 428), as well as individual cognitive abilities. He went on to acknowledge that individuals adaptively switch their decision strategy toward heuristic processing when large amounts of information are presented.

Along the same lines, Jacoby (1984), the founding father of the information overload paradigm, concluded that for most real decisions, decision makers will stop far short of overloading themselves by accessing only a limited amount of the available information and by applying a simple heuristic such as satisficing or elimination-by-aspects. Likewise, Grether, Schwartz, and Wilde (1986) pointed out that the notion of information overload, according to which the amount of information impairs people’s ability to make a sound choice, conflicts with Simon’s idea of satisficing and with the notion of adaptive decision making.

The notion of an adaptive use of decision strategies to cope with information overload finds empirical support in an early experiment by Hendrick, Mills, and Kiesler (1968), who found a nonlinear relationship between the amount of available information and decision time. Their experiment followed a 2×2 between-subjects design in which undergraduates were given an actual choice between two or four ties that were described on either 1 or 15 attributes. From the perspective of information overload, the four conditions differed with regard to their information content. The time it took participants to decide between the ties was shortest in the condition with two ties described on 1 attribute, and it peaked for the two conditions in which two ties were described on 15 attributes and in which four ties were described on 1 attribute. For the high-information condition of four ties described on 15 attributes, decision time decreased again. Hendrick and his colleagues interpreted this result as meaning that if information load exceeds a certain threshold, people “give up trying to compare the alternatives” and “the choice may be made impulsively” (p. 314). In light of the recent literature on adaptive decision making, one could also regard these results as early empirical evidence for the use of simple heuristics: what the researchers labeled impulsive in fact reflected an adaptive shift toward a fast and frugal choice strategy.


In a more recent study, Lurie (2004, Experiment 2) found that an increase in entropy (measured as in equation 5-2) led to more selective search. In his experiment, participants chose among assortments of 16 pocket calculators that were all described on eight attributes but that differed in the number and distribution of attribute levels. Participants’ information search patterns were tracked by means of a Mouselab setup (Payne et al., 1992). In the high-entropy conditions, people focused more on the most important attributes and acquired less information in total, which suggests an adaptive shift toward a more fast and frugal decision rule.

Noncompensatory strategies as a mediator

Next, using the example of a simple noncompensatory heuristic, I will lay out in more detail how fast and frugal heuristics may shield decision makers from choice overload and may thus help explain when and why choice overload occurs.

Definition of noncompensatory decision strategies

In contrast to a compensatory decision strategy such as the weighted additive rule defined previously, a noncompensatory decision rule means that the decision is eventually made based on only one aspect or attribute, such that the option that is highest on that single attribute is chosen. When a noncompensatory strategy is applied, an advantage on one attribute cannot compensate for a disadvantage on another attribute, and no trade-offs are made (Gigerenzer et al., 1999). A prime example of a noncompensatory strategy is the lexicographic decision rule, in which the option that is best on the most important attribute is selected, irrespective of that option’s values on other, less important attributes. Other examples of noncompensatory strategies are the satisficing rule (Simon, 1956) and the elimination-by-aspects rule (Tversky, 1972) outlined in Chapter I.
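The contrast with the weighted additive sketch above can be made explicit in code. The following fragment implements a lexicographic rule and a simple satisficing rule; the jam data and aspiration levels are hypothetical, and ties on the first attribute are ignored for brevity.

```python
def lexicographic_choice(options, attribute_order):
    """Lexicographic rule: choose the option that is best on the most
    important attribute (ties, which would require moving on to the
    next attribute, are ignored in this sketch)."""
    top = attribute_order[0]
    return max(options, key=lambda o: o["attributes"][top])

def satisficing_choice(options, aspirations):
    """Satisficing (Simon, 1956): take the first option encountered that
    meets the aspiration level on every relevant attribute."""
    for option in options:
        if all(option["attributes"][a] >= level
               for a, level in aspirations.items()):
            return option
    return None  # nothing is "good enough"; the choice might be deferred

jams = [
    {"name": "Jam A", "attributes": {"taste": 6, "value": 9}},
    {"name": "Jam B", "attributes": {"taste": 8, "value": 3}},
]
print(lexicographic_choice(jams, ["taste", "value"])["name"])      # Jam B
print(satisficing_choice(jams, {"taste": 5, "value": 5})["name"])  # Jam A
```

In neither rule can a disadvantage on one attribute be compensated by an advantage on another, so no trade-offs ever have to be computed.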

Interaction between noncompensatory strategies and environment


For assortments in which attributes are negatively correlated, simulations show that the outcomes of simple noncompensatory decision strategies deviate substantially from the outcome of a weighted additive rule that takes into account all the available information (Bettman et al., 1993). Extending this work, Fasolo, McClelland, and Todd (2007) showed via simulations that two conditions are necessary for a decision to become difficult. First, the structure of the choice environment has to be “unfriendly,” operationalized as a high number of nondominated options described on many attributes. Second, the decision maker has to value multiple attributes as equally or similarly important (e.g. aiming to find something that is cheap and of high quality). In contrast, if only one of the conditions holds, that is, if the environment is friendly or the decision maker regards only very few attributes as important (e.g. aiming to find something cheap without bothering much about quality), the choice will be easy with regard to the amount of information that has to be looked up. Fasolo et al. also showed that in these latter cases, the outcome of the decision will closely resemble the outcome of a weighted additive decision strategy that takes into account all available information. Yet, as outlined previously, using a weighted additive rule as a normative yardstick is highly controversial.

Noncompensatory strategies can increase choice probability

In line with Fasolo et al.’s theoretical predictions, Dhar and Nowlis (1999) found that individuals were less likely to defer choice if they applied a noncompensatory decision strategy. In their experiment, people were given a choice between two options (apartments, microwave ovens, or automobiles) that were described on a number of nonalignable attributes and thus involved trade-offs. Participants who had to decide under time pressure were less likely to defer the choice than a group that was not put under time pressure. For a control group that decided between two options that were not conflicting, because one was better than the other on every attribute, no effect of time pressure was found. Dhar and Nowlis hypothesized that time pressure led participants to shift toward a less compensatory strategy in which they evaluated fewer attributes. As a consequence, individuals experienced fewer trade-offs and were more likely to choose. In a series of follow-up experiments in which Dhar and Nowlis also tracked participants’ information search processes, they found converging evidence in favor of this hypothesis.

Critical evaluation of noncompensatory strategies as a mediator

In Chapter I, I argued that choice overload could be interpreted as a failure to adapt one’s decision strategy to the current situation. This statement can now be rendered more precisely: choice overload will be more likely if decision makers try to apply an elaborate, compensatory strategy that requires them to take into account the full information available. To the degree that people shift toward a noncompensatory strategy, they should be less likely to be overloaded.

Initial screening as a potential mediator


Besides the use of noncompensatory decision strategies, there are also other heuristics that may mediate the effect of too much choice. One apparently simple heuristic for handling excessive assortments is to engage in an initial screening process in which options are sequentially eliminated based on a few important aspects (Davey, Olson, & Wallenius, 1994; Grether & Wilde, 1984; Tversky, 1972). For example, in a study by Huber and Klein (1991), 75% of the participants who had to search for a new flat decided not to look at a full list of 100 options but rather eliminated the worst options beforehand by placing strict cutoffs on attributes such as monthly rent or quality.
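A cutoff-based screening of the kind Huber and Klein describe can be sketched in a few lines; the flat data and cutoff values below are invented for illustration.

```python
def screen(options, cutoffs):
    """Initial screening: eliminate every option that falls below a strict
    cutoff on any screening attribute, leaving a smaller consideration set
    that can then be examined in detail."""
    return [o for o in options
            if all(o["attributes"][a] >= cut for a, cut in cutoffs.items())]

# Hypothetical flats rated 1-10 on affordability (inverse of rent) and quality.
flats = [
    {"id": 1, "attributes": {"affordability": 8, "quality": 6}},
    {"id": 2, "attributes": {"affordability": 3, "quality": 9}},
    {"id": 3, "attributes": {"affordability": 7, "quality": 2}},
]
print([f["id"] for f in screen(flats, {"affordability": 5, "quality": 5})])  # [1]
```

Note how flat 2, the best option on quality, is screened out by the affordability cutoff, which is exactly the risk discussed below.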

Likewise, Hauser and Wernerfelt (1990) argued that consumers do not consider all available options but rather a much smaller set. They reported data from a large consumer panel showing that across a large number of product categories, the number of brands that consumers seriously consider when making a purchase is seldom larger than six, with a median of about four. These data indicate that people efficiently narrowed down the number of options to form a manageable consideration set that they could then scrutinize in more detail.

In a similar fashion, the participants in the study by Lenton et al. (2005) avoided overly large numbers of potential mates on an Internet dating site, which could also be interpreted as an initial screening strategy. Screening out options based on very few pieces of information is a successful strategy for narrowing down an assortment to a manageable size. Thus, to the degree that people screen out options, they should be less affected by the initial size of the assortment.

Critical evaluation of initial screening as a mediator


While such an elimination or screening strategy is commonly regarded as an adaptive way to handle excessive assortments, ceteris paribus it leads to a situation in which the options in the reduced set become more similar as the initial set gets bigger. At the same time, it can be shown via simulations and real-world examples that after a thorough elimination process, the attributes of the remaining options are likely to be conflicting, even if the correlations in the initial, unscreened set were positive (Fasolo et al., 2007). Thus, a presumably adaptive decision strategy might increase choice difficulty in large assortments as compared to small ones.

As mentioned in Chapter I, Botti and Iyengar (2006) argued that an initial screening comes with the risk that the best alternative might be unwittingly eliminated, which in turn should lead to dissatisfying outcomes: by placing a strict cutoff on one attribute, one may eliminate the best alternative on another attribute. In real-world situations this seems to be less of a problem, though. At least according to Huber and Klein (1991), decision makers seem to be capable of adapting their use of cutoffs accordingly: in their experiment, participants adopted less severe cutoffs when attributes were negatively correlated than when they chose from an assortment in which attributes were positively correlated.

Taken together, these results suggest that initial screening is a sensible heuristic to prevent choice overload. On the other hand, the argument that initial screening amplifies trade-offs and thus leads to decision avoidance rests on the assumption that a decision maker aims to maximize rather than to satisfice, an aspect that I already discussed and empirically tested as a separate moderator in Chapter III. Thus it seems that, if anything, several factors have to interact before choice overload occurs, which would make the effect difficult to replicate.

Hedonic editing and dominance as moderators


Yet another suggestion for how the too-much-choice effect might be moderated by individual decision strategies is based on the idea that people often do not decide unless they have identified a dominant option (Montgomery, 1983). As mentioned earlier in this chapter, Gourville and Soman (2005) reported empirical evidence that the motivation to choose is higher if options differ along only a single, alignable dimension such as price or size than if options differ on many different dimensions. In terms of Montgomery’s framework, according to which people search for a dominant option, people in the latter case are less likely to choose because it is more difficult to identify such a dominant alternative.

Given that many assortments in the real world are characterized by negative attribute correlations, so that a dominant option will usually not exist, Montgomery further assumed that such “unfriendly” environments are “edited” by the decision maker, for instance by changing the subjective importance weights of the attributes or by neglecting certain pieces of information (see Thaler & Johnson, 1990, for the related idea of hedonic editing). From this perspective, a decision maker makes a precommitment to one promising option early in the decision process and then searches for justification of this preliminary choice. According to Montgomery, another way of editing is to apply a noncompensatory decision rule, similar to the case described by Dhar and Nowlis (1999) set out above.

Critical evaluation of finding a dominance structure as a moderator

Under the assumption that no decision is made unless a dominant option is singled out, Montgomery’s framework predicts a too-much-choice effect for environments with similar options and negative attribute correlations in combination with a compensatory decision strategy, because in such situations a dominant option is hard to find. This environmental structure resembles the one that Fasolo et al. (2007) described as the result of initial screening. Likewise, the idea of finding a dominance structure is strikingly similar to the notion of maximizing. Even though Montgomery’s theory makes a prediction about when and why the effect of too much choice will occur, the data I and others have collected so far do not allow us to test it empirically, mainly because precise data on the process by which people search and decide are lacking.


To test to what extent decision strategies (such as searching for a dominance structure) and simple heuristics (such as initial screening or noncompensatory weighting of information) moderate the effect of too much choice, the search and decision strategies that people employ in a given situation have to be assessed. While tracking the search process in the music study was a first step in this direction, future studies should collect more detailed process data on individual information search and reaction times, and possibly also ask people about their decision strategies in a more qualitative fashion. While each single method has its conceptual limits (see, e.g., Nisbett & Wilson, 1977, on the limits of verbal reports about cognitive processes), a combination of different methods might eventually lead to a better understanding of the interaction between decision strategies and the number of options to choose from.

Common comparison standard as a moderator 

According to Cabanac (1992), comparing and trading off qualitatively different attributes in a compensatory fashion requires a common value system (see also Sanfey, 2004). For example, Cabanac’s theory predicts that when trading off usability against design, one would have to determine how much usability should be forgone for a given increase in design. In other words, to compare options, the decision maker would have to convert the values of more or less incommensurable attributes into a common currency. Yet such a common denominator may exist only in rudimentary form. Therefore, Cabanac assumed that the conversion into a common value system is probably somewhat error-prone, reducing the reliability of comparisons between options. The reasoning is similar to Zhang and Fitzsimons’s (1999) line of argumentation outlined above, according to which people are less satisfied when comparing options with nonalignable features because they are insecure about how the features should be traded off against each other.

Furthermore, the more options there are, the more similar they become and the higher the chance that a blurry comparison standard will change the preference rankings. If the goal is to maximize the outcome and to find the best option, this blurriness increases the risk of making a suboptimal choice. To avoid this risk, people may be more likely to defer the choice. In support of this hypothesis, Dhar (1997) found that people were less likely to defer a choice between two music tapes if they were instructed to assign a monetary value to each attribute (e.g. number of songs, quality of the recordings), as compared to a control group that did not receive specific instructions. Dhar argued that once all the attributes are mapped onto a unidimensional measure such as money, comparisons and trade-offs become much easier, which is in line with the predictions of Cabanac.


The hypothesis that decision difficulty increases with the similarity between options is also supported by the results of an experiment by Böckenholt, Albert, Aschenbrenner, and Schmalhofer (1991), who showed that decision makers searched for more information about possible vacation locations when options had small attribute differences (e.g. in temperature or number of rainy days) as compared to a situation in which the differences on those same attributes were large.

Critical evaluation of a common value system as a moderator

The assumption of a common value system closely resembles the notion of utility as a universal currency that decision makers aim to maximize, a concept that has been criticized on several grounds (Brandstätter, Gigerenzer, & Hertwig, 2006; Gigerenzer, 2000). In addition, Cabanac’s theory would always predict an effect of too much choice once the similarity between the options exceeds a certain degree. Yet the degree of similarity between options differed substantially across the experiments I reviewed that did not find an effect of too much choice. Therefore, the degree of similarity does not seem sufficient to explain when the effect of too much choice occurs and when it does not.

Furthermore, the “conversion” of attribute values into a common value system (Cabanac, 1992) is only necessary if a decision maker adopts a compensatory choice rule. From the perspective of noncompensatory decision rules such as satisficing or elimination-by-aspects, Byron (2005) pointed out that a common denominator is unnecessary because the decision maker is expected to choose the first option that exceeds his or her aspiration level on each relevant attribute (see also Gigerenzer et al., 1999). For example, for a true satisficer, the decision would not depend on the conflict or the incommensurability between options on different attributes. In line with this, Simon (1956) pointed out that “we should be skeptical in postulating for humans, or other organisms, elaborate mechanisms for choosing among diverse needs” and that “common denominators among needs may simply not exist” (p. 137).


Finally, insofar as Cabanac’s model implies that similarity between options increases choice difficulty, it somewhat contradicts the prediction of Kahn and Lehmann (1991), who stated that similarity between options leads to a decrease in variety and thereby lowers choice difficulty. These discrepancies are hard to resolve unless a precise model of the decision-making process is spelled out. As such, they once again stress the importance of widening the perspective by incorporating decision processes and their interactions with environmental structures.

Search costs as a mediator

As mentioned above, whether a choice is difficult or demotivating depends on the interaction between the assortment structure that the decision maker faces and the strategy that he or she applies to make the choice. So far, I have mainly differentiated decision strategies by the amount of information they require and by how that information is combined. In the following, I will focus on yet another important aspect of decision strategies, namely the “costs” required to carry out a certain strategy. Measures of decision costs set a price tag on the amount of time and effort devoted to searching for information, as well as on mental processes such as calculations or comparisons. As I will lay out in more detail below, incorporating decision costs might explain when and why a too-much-choice effect occurs.

Information search and costs

Heuristic models of search as well as Simon’s notion of satisficing explicitly link information search to costs. For example, it has been shown that if information is costly, people are more likely to use a simple heuristic (Rieskamp & Hoffrage, 2006) and search for less information (Brannon & Gorman, 2002). Payne et al. (1992) developed a measure of computational cost based on what they called “elementary information processing” (EIP) units, which aims to quantify the cognitive effort required to carry out a certain decision strategy. Another attempt to quantify cognitive effort goes back to Shugan (1980), who developed a measure of what he called the “cost of thinking” that reflects the expected number of pair-wise comparisons of options and their attributes that a decision strategy requires in a given environment in order to reach a choice.

Optimal search


As indicated by the results of the music study, the amount of search often increases with the number of options. Trading off additional search costs against the benefit of eventually finding a better option is worthwhile as long as the marginal costs of search are smaller than the expected marginal increase in quality. If choosers continue to search beyond that point, their costs will exceed the benefits and the net gain may become negative.

In a theoretical analysis of search, Stigler (1961) found that when searching within an assortment of consumer goods, the probability of finding an option that is cheaper (and thus a better deal) than the already examined options decreases with every additional option that is sampled. In his model, Stigler assumed that the decision maker sequentially obtains price quotes and can go back at any time to the cheapest quote encountered so far. Based on these assumptions, the relationship between the money saved and the amount of search is monotonically increasing but negatively accelerated. In other words, the additional benefits of search get smaller the longer the search lasts. From Stigler’s model it follows that the benefits of further search will be greater if the distribution of prices widens (such that extreme prices become more likely), but that even for very wide distributions, the benefits of further search are marginally decreasing. Thus, without any search costs, more search will eventually lead to a better outcome, and more options should always be welcome, even more so if the choice set is heterogeneous.

Because in most cases there is no closed-form expression for the relationship between the amount of search and the expected return, it can only be approximated (Stigler, 1961). Figure 12 exemplifies this relationship based on the results of a bootstrap simulation run in Matlab 7.0 with 1,000 draws per data point. In this simulation, the distribution of prices in the assortment is assumed to be normal with a mean price of zero. Different colors represent different standard deviations of price, denoted as sd.


Figure 12: Relationship between the number of searched options on the abscissa and the outcome of the search on the ordinate, expressed as the amount of money saved relative to the mean price within the assortment. 
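A simulation of this kind is easy to reproduce. The sketch below is a Python re-implementation of the logic just described (the original was run in Matlab 7.0): prices are drawn from a normal distribution with mean zero, and the expected saving is the average, over many bootstrap draws, of how far the cheapest sampled price lies below the mean.

```python
import random
import statistics

def expected_savings(n_searched, sd, draws=1000):
    """Monte Carlo estimate of the expected amount saved relative to the
    mean price (zero) when the cheapest of n_searched sampled prices is
    chosen; this corresponds to the quality function F(n) in the text."""
    savings = []
    for _ in range(draws):
        prices = [random.gauss(0.0, sd) for _ in range(n_searched)]
        savings.append(-min(prices))  # saving = mean price minus best price
    return statistics.mean(savings)

for sd in (1.0, 2.0):  # wider price distributions raise the whole curve
    curve = [round(expected_savings(n, sd), 2) for n in range(1, 11)]
    print(f"sd={sd}: {curve}")  # increasing but negatively accelerated
```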

So far, the framework does not incorporate search costs. Under the assumption that searching and evaluating an additional option comes with a fixed cost $c$, the net outcome $O$ of the search process would be

$$O(n) = F(n) - c \cdot n \quad (5\text{-}3)$$

where $n$ is the number of options searched and $F(n)$ is the quality function depicted in Figure 12. If costs increase linearly with search while quality is negatively accelerated, the net outcome will eventually become negative, depending on the search costs and the distribution of prices. Figure 13 illustrates this relationship. In the example, price is again assumed to be normally distributed with a mean of zero and a standard deviation of 2.0, and the different colors represent different search costs.

Figure 13: Expected net outcome (amount saved minus search costs) depending on varying search costs c and the number of options searched. 
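Continuing the sketch above, linear search costs can be subtracted from the simulated savings to obtain the net outcome of equation 5-3; the cost values here are, again, only illustrative.

```python
import random
import statistics

def expected_savings(n, sd=2.0, draws=1000):
    # As in the previous sketch: expected saving from taking the cheapest
    # of n prices drawn from a normal distribution with mean zero.
    return statistics.mean(
        -min(random.gauss(0.0, sd) for _ in range(n)) for _ in range(draws))

def net_outcome(n, cost_per_option, sd=2.0, draws=1000):
    """Net outcome of search (equation 5-3): expected savings F(n) minus
    linear search costs c * n."""
    return expected_savings(n, sd, draws) - cost_per_option * n

for c in (0.1, 0.5):  # higher costs shift the peak toward less search
    curve = [round(net_outcome(n, c), 2) for n in range(1, 16)]
    print(f"c={c}: {curve}")  # rises to a peak, then eventually turns negative
```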

Simple heuristics for search

Given the mathematically complex calculations needed to determine the optimal amount of search, and the fact that this amount also depends on the quality distribution, which in many cases is unknown to the decision maker, Stigler’s model of search can hardly be regarded as a reasonable standard for human decision making in real-world situations. To overcome this problem, researchers have developed simple heuristics that aim to describe actual search behavior even in cases where the exact distribution of options is unknown. When cognitive limitations are taken into account, it has been shown analytically as well as experimentally that simple heuristics of search can do reasonably well across many environments (Butler & Loomes, 1997; Dudey & Todd, 2002; Hey, 1980, 1982; Hutchinson & Halupka, 2004). One example among many possible search heuristics is the so-called “one bounce” rule, according to which decision makers should examine at least two options and then stop their search as soon as the last option encountered is worse in quality than the best one examined so far (Hey, 1982).
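
As a concrete illustration, the one-bounce rule can be written in a few lines. This is a sketch based on the verbal description above (sequential examination with recall of the best option seen so far), not Hey’s (1982) original formalization.

```python
import numpy as np

rng = np.random.default_rng(7)

def one_bounce_search(qualities):
    """One-bounce rule as described in the text: examine at least two
    options, stop as soon as the latest option is worse than the best one
    seen so far, and (with recall) choose that best option."""
    best = qualities[0]
    for n_examined, q in enumerate(qualities[1:], start=2):
        if q < best:          # a "bounce": the newest option is a step down
            return best, n_examined
        best = q
    return best, len(qualities)  # searched the whole assortment

qualities = rng.normal(0.0, 1.0, size=30)
chosen, n_examined = one_bounce_search(qualities)
print(f"examined {n_examined} options, chose quality {chosen:.2f}")
```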

Psychological costs of search

↓145

Independent of the distribution of options and the actual decision strategy used, Stigler’s model and its illustration (Figure 13) exemplify an important point with regard to choice overload: if search is costly, too much search will eventually result in a net loss. In addition, if the overall post-choice satisfaction with a chosen option is a function of the benefits of that option minus the search costs invested to find it, the relationship between satisfaction and the amount of search would in principle resemble the function depicted in Figure 13, such that if the search costs outweigh the benefits, satisfaction with the choice will be low (Schwartz, 2000). From this perspective, a too-much-choice effect would occur if an increase in the number of options boosts the decision costs faster than it increases the benefits, for example, by luring people into too much search.

Do people search too much?

Yet in a summary of several empirical studies on search, Dudey and Todd (2002) as well as Zwick, Rapoport, Lo, and Muthukrishnan (2003) concluded that in most cases, subjects stopped their search earlier than prescribed by the respective theories. Also, in a consumer context, Marmorstein, Grewal, and Fishe (1992) noted that the amount of prepurchase search undertaken by buyers of durable goods is surprisingly low across several empirical studies. Echoing Simon’s analogy of a pair of scissors, Zwick et al. further argued that whether people search too much or too little relative to an optimal policy depends on the structure of the assortment as well as on the heuristic rule an individual applies; one might add that it also depends on how one defines an optimal search strategy.

In an empirical study, Zwick et al. (2003) tested the effects of search costs and of the total number of options in an assortment on search behavior. In line with my results from the music study, Zwick et al. found that people searched more when the assortment size increased. They further found that people searched less when search costs increased, which is in line with findings from a comparable experiment conducted by Brannon and Gorman (2002). Together, these results support the hypothesis of humans as adaptive decision makers. Yet even though participants adapted to the costs, Zwick et al. found that when search costs were low, most participants tended to search too little; according to Zwick et al.’s analysis, on average, they could have been more successful in their search if they had examined more options. On the other hand, when search was costly, most participants could have done better by searching less. More importantly, the number of options did not lure participants into searching too much or too little. The only interaction effect was between the number of options and search costs, such that participants had the worst search results when there were many options and search costs were high.

Critical evaluation of the search cost hypothesis 

↓146

With regard to the too-much-choice effect, it seems that a mere increase in the number of options does not necessarily lead people to search too much. As outlined above, participants across many studies tend to search too little rather than too much. People’s search strategies seem to adapt to the environmental structure such that they search less when the cost of search increases, which should further shield them from searching too much. As mentioned in Chapter I, scholars have also argued that a large set of options can actually reduce the time and effort needed to reach a decision, and thereby also the search costs (Hutchinson, 2005; Kahn, 1995; Simonson, 1990).

Moreover, even if people who are confronted with a large assortment do search too much and thus suffer from increased costs, they should still make a choice as long as the chosen option generates a positive outcome that covers at least some of the losses due to search costs. Thus a search cost model cannot easily explain choice omission or a decrease in the motivation to choose. Despite this shortcoming, in theory the model could still explain a decrease in satisfaction with the chosen option.

However, in previous studies on too much choice, there were no explicit search costs, and whatever costs occurred must have been nonmonetary, such as time spent or cognitive effort invested. Estimating these costs would be purely speculative; as a consequence, with the data at hand, the search cost hypothesis cannot be fully evaluated. Testing the theory of search costs in real-world environments is further complicated by the fact that individuals search for many different reasons. In a consumer context, they might simply enjoy shopping (Marmorstein et al., 1992). Across several contexts, they might value the information acquired as a way to gain expertise within a given environment, or they might simply be satisfying their curiosity (Brannon & Gorman, 2002). Likewise, the time spent searching might be valued more or less. These latter explanations would also match the finding that people sometimes gather more information than necessary before making a choice (Bastardi & Shafir, 1998).

↓147

In addition, it is not at all clear that decision makers would indeed incorporate search costs into their final satisfaction with the chosen option in such a way that more search leads to lower satisfaction. As mentioned in Chapter I, other psychological theories suggest that it might well be the other way around. According to Festinger’s (1957) theory of cognitive dissonance, people who invested a lot of effort in finding an option will boost their liking of it in order to justify their decision. Likewise, for animals, Kacelnik and Marsh (2002) found that starlings prefer the kind of food that required increased effort to obtain in the past. Their study consisted of two stages. In the first, no-choice stage, the birds had to fly either a short (4 meter) or a long (16 meter) distance in order to get access to pecking keys whose color depended on the length of the distance. Pecking the keys resulted in identical food rewards. In the second stage, the birds were given a free choice between the two differently colored keys. At this second stage, most birds pecked the key that was associated with the long flying distance. Consistent with these results, in the music study outlined in Chapter III, people who searched more were slightly more satisfied with the finally chosen option.

(Mal)adaptive aspiration levels as a moderator

As noted at several places throughout the dissertation at hand, the notion of satisficing assumes that people choose the first option that exceeds their aspiration level. This decision strategy implies that no choice will be made if none of the options surpasses the threshold. Thus, if people strictly follow a satisficing strategy, whether a choice is made or not depends on the height of their aspirations. Following up on Stigler’s search model, when search costs are low, better options can be expected if the assortment is large, especially if the distribution of options is wide. From the perspective of an adaptive decision maker, in such a situation it would be sensible to raise the aspiration level. In contrast, if the assortment is small or the options are similar to each other, expectations should be lowered. This principle would be in line with Simon’s (1955) conclusions. Simon hypothesized that as “the individual, in his exploration of alternatives, finds it easy to discover satisfactory alternatives, his aspiration level rises; as he finds it difficult to discover satisfactory alternatives, his aspiration level falls” (p. 111). As a consequence of this change in aspiration, Simon further conjectured, the consideration set narrows if satisfactory alternatives are discovered easily, and vice versa.

As I will outline in more detail below, if decision makers assume that they are in an environment with large variance whereas in fact they are choosing from an assortment with small variance, the probability that their aspirations will be met will decrease with an increase in assortment size and thus a too-much-choice effect would occur.

A thought experiment

↓148

As an example, imagine two parents, Mr. O and Ms. U. Every Saturday, they send their children to the market to buy an apple. In the city where Mr. O lives, there are two markets: a small market with 6 apple stands and a large market with 30 apple stands. Mr. O knows that in his city, the price of apples varies a lot between the stands. The average price of an apple is 1 euro and the standard deviation of the apple prices across stands is 20 cents. As Mr. O is very short of money, he only wants to give his children as much money as they will need to buy an apple on 95 of the 100 Saturdays that he sends them to the market. Over the years, he has figured out that for the small market, it is sufficient to give his children 95 cents so that they can buy an apple 95% of the time.⁶ When he sends his children to the large market, he only needs to give them 74 cents.

In the city where Ms. U lives, there are also two markets with 6 and 30 stands, and the average price of an apple is also 1 euro, but with a standard deviation of 5 cents across stands, so the difference between the prices is much smaller. When Ms. U sends her children to the small market, they need 99 cents to return with an apple on 95% of the days, only slightly more than the children of Mr. O. However, when Ms. U sends her children to the large market, they still need 94 cents to buy an apple, much more than the children of Mr. O.

One day, Mr. O and his family move to the city where Ms. U lives. As before, every Saturday Mr. O sends his children to the market. Because he does not know the distribution of prices in the new city, he assumes that it is the same as in his old city. But to his surprise, with the money he gives them in the new city, his children bring home an apple less often, especially when returning from the large market. From the small market, they return with an apple on 63 of 100 days, but from the large market they almost never bring an apple home, only once in 100 days.
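
The numbers in this thought experiment can be checked approximately in closed form using the distribution of the minimum of n independent normal price quotes. The values in the text come from Monte Carlo simulations (see Footnote 6), so the analytic figures below, computed with hypothetical helper functions of my own naming, reproduce them only roughly.

```python
from scipy.stats import norm

def budget_needed(n_stands, mean, sd, success=0.95):
    """Smallest budget m with P(cheapest of n_stands quotes <= m) = success,
    for prices ~ N(mean, sd): m = mean + sd * Phi^-1(1 - (1-success)^(1/n))."""
    return mean + sd * norm.ppf(1.0 - (1.0 - success) ** (1.0 / n_stands))

def success_prob(budget, n_stands, mean, sd):
    """P(cheapest of n_stands quotes <= budget)."""
    return 1.0 - (1.0 - norm.cdf((budget - mean) / sd)) ** n_stands

# Mr. O's city (prices in cents, sd = 20): roughly 95 and 74 cents
print(budget_needed(6, 100, 20), budget_needed(30, 100, 20))
# Ms. U's city (sd = 5): roughly 99 and 94 cents
print(budget_needed(6, 100, 5), budget_needed(30, 100, 5))
# Mr. O's old budgets in the new city: success drops, most sharply
# at the large market, where the maladapted aspiration almost never pays off
print(success_prob(95, 6, 100, 5), success_prob(74, 30, 100, 5))
```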

Critical evaluation of (mal)adaptive aspiration levels as a moderator

↓149

As the example shows, a maladapted aspiration level can lead to a decrease in the probability of choice as a result of an increase in assortment size. Schwartz (2004) conjectured that large assortments lead to an increase in expectations. As outlined above, raising the aspiration level with an increase in assortment size (which in the example means lowering the price one is willing to pay for an apple) generally seems a sensible thing to do, yet the magnitude of this increase needs to be adapted to the structure of the environment, namely, the variance of the options.

If people overestimate the variance, for example, by assuming large differences between options when in fact all options are more or less the same, they will inflate their expectations in the face of an increase in assortment size, and this will result in an effect of too much choice. Note that if people underestimate the variance, this framework predicts a reversed too-much-choice effect.

However, while the present model of a changing aspiration level makes explicit and testable predictions, it is not well suited to explain the results of past experiments on too much choice, including my own. That is because it assumes that the aspiration level is determined prior to making a choice and that it cannot change during the search process. In contrast, most of the experiments outlined above were set up to reduce the influence of prior preferences and domain-specific knowledge, which makes it unlikely that decision makers had strong aspirations prior to choice.

Effort invested in choosing as a moderator

↓150

At least in the laboratory studies that I conducted, most participants were highly educated university students in their mid-20s. Moreover, in the Berlin lab as well as in the Bloomington lab, many other experiments require high cognitive skill. Participants may therefore have expected a challenging task in the choice experiments and been motivated to invest more time and effort. In line with this, independent of assortment size, the majority of participants in all lab experiments eventually chose an option. If the too-much-choice effect occurs only in cases where decision makers are unable or unwilling to put enough effort into making a choice, this could be a potential moderator.

Critical evaluation of effort as a moderator

The hypothesis outlined above rests on the assumption that participants in experiments that show choice overload phenomena were unmotivated. As, to my knowledge, there are no data to support this claim, the assumption remains speculative. Moreover, Malhotra (1982) provided evidence that at least information overload can hardly be found in studies based on trivial decisions such as selecting rice or peanut butter. From this perspective, choice overload would instead be expected for important and far-reaching decisions. Clearly, any theory of effort needs to be specified much more precisely before it can be fully evaluated. This is especially important because the concept of effort is somewhat related to the concept of choice motivation, which is often used as a dependent measure, and thus there is a danger of circular reasoning.

Individual differences as a moderator

In the preceding chapters, I discussed several individual differences that might moderate the effect of too much choice. Especially in the music experiment that followed a within-subject design, I explicitly tested for individual differences in domain-specific expertise, cultural differences, search behavior, and the personality construct of maximizing versus satisficing. Yet, none of the proposed variables explained the variance in the individual propensity to be overloaded with choice. Therefore, in the following I will extend the discussion of how individual differences might moderate the effect of too much choice.

Variety seeking

↓151

People seem to have a tendency to seek variety even if it requires them to choose less-preferred options (Ariely & Levav, 2000; Ratner et al., 1999). As variety seeking often leads to choosing something exotic or unique, it has been argued that it stems from people’s desire to communicate individuality (Kim & Drolet, 2003). Another possibility is that people take the opportunity to explore their environment in order to gain new experiences. In any case, a large assortment offers more opportunities for variety seeking and thus may invite decision makers to try something new, even at the price of a decrease in satisfaction. If so, for people who like to experience new things and who value variety, a large assortment should be more of an invitation to make a choice, as it promises to reveal something special.

Ability to deal with cognitive complexity

Beyond satisficing versus maximizing, the degree to which people are affected by the size of an assortment could also depend on their ability to deal with cognitive complexity (Bieri, 1966), a measure that reflects how much information someone is willing to process prior to making a decision. The construct of differences in cognitive complexity is in turn closely linked to the need for cognition (Cacioppo & Petty, 1982) and the intolerance of ambiguity (Frenkel-Brunswik, 1949). Other related concepts distinguish people based on their propensity to make or to avoid decisions (Beattie et al., 1994; Hanoch et al., 2006) or their tendency to procrastinate (Ferrari, Johnson, & McCown, 1995; Mann, Burnett, Radford, & Ford, 1997).

Depression

Indecisiveness and the tendency to prolong information search have also been linked to depression, as depressed decision makers have been found to use less heuristic processing and to have difficulties committing themselves to a specific decision. For example, in a study by Lewicka (1997), depressed participants searched for more information about a job candidate before they reached a decision. Because of this difference in the search process, depressed participants ended up with more evenly spread knowledge about the available options. Probably as a result, depressed participants also rated the second-best, nonchosen candidate as almost equally attractive as the candidate they chose. Nondepressed participants, on the other hand, at some point tended to search for confirming evidence about the most promising candidate and rated the finally chosen candidate as much more attractive than the second-best one. In line with these results, Lyubomirsky and Ross (1999) found that chronically unhappy people were more vulnerable to post-decisional dissonance and disappointment.

Critical evaluation of individual differences as a moderator

↓152

With regard to the data that I collected, there is no particular reason why participants in studies that revealed an effect of too much choice should differ systematically on any of these dimensions from participants in studies where the effect was not found. Of course, it can never be ruled out that a certain attitude or personality is a necessary precondition of the too-much-choice effect but as long as there is no sound theory about the decision processes that lead to the effect of too much choice, the influence of individual differences remains speculative.

A closer look at choice motivation as a dependent variable

The too-much-choice effect describes the situation in which an overly large assortment decreases the motivation to make a decision and may eventually lead to no choice being made, at least for the time being. According to Anderson (2003), making no choice is in itself not a well-defined dependent variable but rather an umbrella term that embraces different phenomena requiring different explanations.

Making no choice can be the result of a preference for no change, that is, for the status quo (Johnson & Goldstein, 2003; Ritov & Baron, 1990, 1992). Also, no choice will be made if the decision maker procrastinates, for instance, in order to search for more information (Tversky & Shafir, 1992). In this second case, a choice might be made at a later point in time; however, once delayed, many things never get done (Ariely & Wertenbroch, 2002). These types of no-choice responses require that the alternative options be recognized as such and that the possibility of making a choice is at least considered. In this sense, not choosing can be seen as the result of a more or less deliberate decision process that may well be consistent with the decision maker’s intentions.

↓153

But no choice can also be made if the possibility of choice as well as the alternative options are not even considered in the first place. In this case, not to choose does not result from a decision process but from the lack thereof. As an example, one might think of a person who passes by a tasting booth full of jam without realizing that the jam may actually be purchased.

With regard to past studies on too much choice, including my own, making no choice could be interpreted in different ways. In experiments such as my studies on restaurants and charity organizations, but also in Iyengar and Lepper’s (2000) chocolate study, people were forced to make a choice among several options, and choosing a default such as money was one option among many. Thus, a deliberate decision process can be assumed. The same holds for the music study, where the amount of search prior to choice indicated that participants deliberately collected information and considered several options. In the field studies on jam and wine, and to some extent also in the lab study on jelly beans, a deliberate process and the sense of having different options are also likely. Yet it may be that people did not consider any option at all, and thus never entered into a decision-making process, or that they procrastinated in making their choice.

As the reasons for no choice could differ significantly depending on the situation, a better understanding of choice overload will be gained by clarifying what people are actually doing if they do not make a choice. The different ways of making no choice also have important implications for cognitive models of the too-much-choice effect (see also Veinott, Jessup, Todd, & Busemeyer, 2006). To get a better understanding of the effect, future studies need to be more explicit in their definition of the dependent variable.

Final conclusion

↓154

In his 2004 book The Paradox of Choice, Barry Schwartz wrote: “As the number of choices grows further, the negatives escalate until we become overloaded. At this point, choice no longer liberates, but debilitates. It might even be said to tyrannize” (p. 2). On the other hand, Anderson (2006) as well as Postrel (2005) celebrate the overabundance of choice as a liberating force that enables individuality and pluralism and leads to more efficient markets. Also, the research on adaptive decision making provides strong evidence that people have a wide repertoire of choice strategies that they can employ depending on the situation. From this perspective, having many options to choose from does not automatically lead to choice overload. After all, people adapt to choice; they satisfice and they deliberately limit their choices all the time, for instance, by applying a filter, consulting an expert, or reading Consumer Reports. As noted by Schwartz (2004): “A small-town resident who visits Manhattan is overwhelmed by all that is going on. A New Yorker, thoroughly adapted to the city’s hyperstimulation, is oblivious to it.” This latter perspective on choice overload is in line with my empirical findings showing that the effect of too much choice is much less robust than previously thought.

Foreshadowing these challenges of replication, Buridan’s hypothesis about choice-overloaded animals also once seemed convincingly universal, yet it could never be supported on empirical grounds. For example, hungry rats that were placed at an equal distance between two food patches quickly moved to one patch or the other and showed no tendency to hesitate or vacillate (Klebanoff, 1939, cited in Miller, 1944).

Yet for the effect of choice overload, the odds of future replications are somewhat better. At least the meta-analytic integration of several studies outlined in Chapter IV shows that the effect of too much choice is real and that there must be certain boundary conditions that explain the differences in its occurrence. While almost none of the variables that I tested experimentally seemed to matter, a number of potential moderators and mediators that might explain the differences remain to be tested.

↓155

From the review of these boundary conditions, no definite conclusions about the exact nature of these moderators can be drawn. What can be concluded, however, is that looking solely at the structure of the environment provides a distorted view of such a complex phenomenon as choice overload. In line with Simon’s (1990) notion of the scissors, whatever the explanation looks like, it has to incorporate the interaction between the structure of the environment and the properties of the decision maker who acts within that environment.

Toward this goal, future research should proceed by building a more precise understanding of the psychological processes and decision mechanisms that people use, the environmental structures they face, and the interaction between the two (Todd & Gigerenzer, 2007). Finally, researchers should be precise regarding their dependent variable, be it different forms of making no choice or measures of reduced choice satisfaction.


Footnotes and Endnotes

6  The values in this thought experiment stem from Monte Carlo simulations based on Stigler’s (1961) mathematical functions of search costs, carried out in Matlab 7.0.


