Introduction

Quantitative estimation is an important task that people must master in their daily lives, for instance when estimating the travel time for a journey, the risk of a medical treatment, or the quality of a job applicant. For this task, people rely on several distinct mechanisms. For instance, numerical estimates can be retrieved directly from memory, reconstructed, for example, from landmark dates in temporal estimation (Friedman, 1993, 2004), or derived from rates of behavioral frequency (Conrad, Brown, & Cashman, 1998). In my dissertation, I focus on a further mechanism of quantitative estimation: estimation from probabilistic information.

To estimate a quantity of interest, people can rely on multiple sources of information, for instance, cues that are probabilistically related to the criterion, that is, the quantity being estimated. For example, to estimate the selling price of a house, people could rely on information such as the house’s size, the quality of the neighborhood, or whether it has a swimming pool. A variety of cognitive models have been proposed to describe the cognitive processes involved in quantitative estimation, with the purpose of clarifying which information people rely on and how they use and integrate multiple pieces of information. Traditionally, linear models, such as multiple linear regression, have dominated the literature on multiple cue judgments (Hammond & Stewart, 2001; Brehmer, 1994). Recently, however, linear regression approaches have been criticized, and the need for more cognitively oriented models has been postulated (Gigerenzer & Todd, 1999). Since then, several alternative models have been proposed (Juslin, Karlsson, & Olsson, in press; Hertwig, Hoffrage, & Martignon, 1999). In my dissertation, I propose a new cognitive theory for estimation from multiple cues and test it against established models of estimation as well as newly proposed models.

The Traditional Approach to Estimations: Social Judgment Theory

Following the work of Egon Brunswik (1952) and Ken Hammond (1955), multiple linear regression became the dominant model for describing multiple cue judgments (Brehmer & Brehmer, 1988; Brehmer, 1994). From this seminal work, “social judgment theory” was established (for an overview, see Doherty & Kurz, 1996). According to social judgment theory, human estimation follows a linear additive strategy that can be captured by a regression model (Doherty & Brehmer, 1997). The linear additive approach assumes that each cue is first weighted according to its importance. Then, an estimate is reached by adding up the weighted cue values (Cooksey, 1996). Optimal cue weights are found analytically by minimizing the squared deviation between the model’s estimates and the actual criterion values (Cohen, Cohen, West, & Aiken, 2003).
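
To make the linear additive strategy concrete, the following minimal Python sketch fits the cue weights by least squares and produces an estimate as a weighted sum of the cue values; the cue and criterion values are illustrative assumptions, not data from the literature.

```python
import numpy as np

# Illustrative training data: rows are objects, columns are binary cue values
# (e.g., house size above average, good neighborhood, swimming pool).
cues = np.array([
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
])
criterion = np.array([420.0, 350.0, 280.0, 260.0, 150.0])  # e.g., prices in 1,000s

# Add an intercept column and find the weights that minimize the squared
# deviation between predicted and actual criterion values (least squares).
X = np.column_stack([np.ones(len(cues)), cues])
weights, *_ = np.linalg.lstsq(X, criterion, rcond=None)

# The estimate for a new object is the weighted sum of its cue values.
new_object = np.array([1, 0, 1])
estimate = weights[0] + new_object @ weights[1:]
print(round(estimate, 1))
```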

Since their introduction to judgment research in the 1950s, regression models have been employed to model judgment policies in many domains, ranging from predicting teachers’ evaluations (Cooksey, Freebody, & Davidson, 1986) and modeling medical decisions (Wigton, 1996) to analyzing psychiatrists’ diagnostic strategies (Harries & Harries, 2001) and modeling the bail-setting policies of judges (Ebbesen & Konecni, 1975). Linear additive models have also been very influential in other areas of psychology; prominent examples include, among others, Anderson’s (1981) “information integration theory” and the work of Fishbein and Ajzen (1980) on the impact of attitudes and social norms on behavior. However, despite the success of linear models in describing the outcome of a cognitive process (i.e., the final estimation), they have been criticized for not capturing the process itself (Brehmer, 1994; Einhorn, Kleinmuntz, & Kleinmuntz, 1979; Hoffman, 1960; for a review, see Doherty & Brehmer, 1997).

The Exemplar-Based Approach to Estimation 

Recently, exemplar models have been suggested as alternative models for explaining human estimation processes. Exemplar models have been successful in modeling the cognitive processes underlying categorization (Nosofsky & Johansson, 2000; Kruschke, 1992). Due to this success, they have recently been considered as models of estimation (Juslin, Olsson, & Olsson, 2003; Juslin et al., in press). Exemplar models assume that encountered objects are stored in memory and retrieved when a new object is evaluated. The estimation is based on a judgment of similarity between the object under evaluation and the exemplars stored in memory. For instance, a professor evaluating the success of a prospective graduate student might think back to former graduate students and estimate the success of the prospective student based on the similarity to those former students.

The more similar an exemplar is to the object under evaluation, the stronger its impact on the estimation. The final estimate is given by the average of the criterion values of all stored exemplars, weighted by their similarities to the object under evaluation. Similarity is conceptualized as cue or feature based; that is, objects are described by their values on a list of features. Two objects are considered similar if their values on the features match. However, the features can differ in their importance for the similarity evaluation: two objects matching on all but one feature can still be considered different if that feature is of central importance. Similarly, a mismatch on a feature can be negligible if this feature is of minor importance. The overall evaluation of similarity is reached by integrating all features based on the context model (Medin & Schaffer, 1978).
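
The following Python sketch illustrates such a similarity-based estimate, using the context model’s multiplicative similarity rule; the exemplars, criterion values, and feature importance weights are illustrative assumptions rather than fitted parameters.

```python
import numpy as np

def context_similarity(probe, exemplar, s):
    # Multiplicative similarity rule of the context model (Medin & Schaffer,
    # 1978): matching features contribute a factor of 1, mismatching features
    # a factor s_j in (0, 1); the smaller s_j, the more important feature j.
    mismatch = probe != exemplar
    return np.prod(np.where(mismatch, s, 1.0))

def exemplar_estimate(probe, exemplars, criteria, s):
    # Final estimate: average of the stored exemplars' criterion values,
    # weighted by their similarity to the object under evaluation.
    sims = np.array([context_similarity(probe, e, s) for e in exemplars])
    return np.sum(sims * criteria) / np.sum(sims)

# Illustrative memory of former graduate students: three binary features
# and a continuous success criterion (all values assumed for illustration).
exemplars = np.array([[1, 1, 0], [1, 0, 1], [0, 0, 0]])
criteria = np.array([90.0, 70.0, 30.0])
s = np.array([0.2, 0.5, 0.9])  # assumed feature importance weights

print(exemplar_estimate(np.array([1, 1, 1]), exemplars, criteria, s))
```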

Juslin et al. (in press) argued that people’s estimation processes are best captured by exemplar models in nonlinear environments, that is, when the cues are nonlinearly related to the criterion. Consistent with this argument, they showed in several experiments that the exemplar model was better suited than the linear additive rule at predicting estimations when the criterion was a multiplicative function of the cues. Thus, the exemplar model seems to be a valid model of quantitative estimation.
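
To illustrate what such a nonlinear environment looks like, the following sketch constructs a criterion that is a multiplicative function of two binary cues; the factor values are assumptions chosen for illustration. No weighted additive combination of the cues can reproduce this criterion exactly, as the nonzero least-squares residuals show.

```python
import numpy as np

# A multiplicative environment: the criterion is a product of cue-dependent
# factors (factor values chosen for illustration), not a weighted sum.
cues = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
criterion = 10 * 3.0 ** cues[:, 0] * 3.0 ** cues[:, 1]  # 10, 30, 30, 90

# Best-fitting linear additive model (intercept plus weighted cue values).
X = np.column_stack([np.ones(4), cues])
w, *_ = np.linalg.lstsq(X, criterion, rcond=None)
print(criterion - X @ w)  # residuals [10, -10, -10, 10]: no exact linear fit
```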

Heuristic Approach to Estimations

A further recent approach to decision making comes from the literature on heuristics. In decision tasks, such as paired comparisons, simple heuristics like Take The Best (Gigerenzer & Goldstein, 1996) have successfully been employed to model the decision process. Especially in complex decision situations and under time pressure, simple heuristics were better suited to describe behavior than more complicated models based on optimization procedures (Rieskamp, 2006; Rieskamp & Hoffrage, in press; Rieskamp & Otto, 2006; Bröder, 2000; Bröder & Schiffer, 2003). This research indicates that, in many real-life situations, simple heuristics can predict human behavior well. In a similar vein, Hertwig et al. (1999) proposed a heuristic for estimation, QuickEst. QuickEst is a noncompensatory model; that is, it does not integrate information but bases its estimation on only one cue. The cue on which the estimation is based is found by sequentially searching through the available cues. Once a cue fulfills a previously set stopping criterion, search is stopped and an estimate is made. Although QuickEst makes accurate estimations in environments with a skewed criterion distribution, so far there is no evidence that it can model human estimation processes (Hausmann, Läge, Pohl, & Bröder, 2007). This raises the question: if estimation processes can be modeled by simple heuristics, how can a heuristic model of quantitative estimation be devised, and under which conditions can it describe human behavior?
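
The following sketch illustrates only the noncompensatory stopping logic described above; it assumes that search stops at the first cue with a negative (absent) value and that each cue has a precomputed estimate attached to it. The cue order and the attached values are illustrative assumptions, not QuickEst’s exact specification.

```python
def quickest_style_estimate(cue_values, cue_estimates, final_estimate):
    # Search the cues in a fixed order and stop at the first cue whose value
    # is negative (coded 0); the estimate attached to that cue is the answer.
    # Objects passing all cues receive a (typically large) final estimate.
    for value, estimate in zip(cue_values, cue_estimates):
        if value == 0:          # stopping criterion met: cue is absent
            return estimate     # estimation based on this single cue
    return final_estimate       # all cues positive: no cue stopped search

# Illustrative use: three cues searched in order, each with an attached estimate.
print(quickest_style_estimate([1, 0, 1], [100, 300, 700], final_estimate=1000))
```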

A New Cognitive Theory for Quantitative Estimations from Multiple Cues:
The Mapping Model 

The goal of my dissertation work was to develop a simple cognitive theory that would capture the cognitive process underlying quantitative estimations. Inspired by the research of Brown and Siegler (1993), I developed the mapping model and tested it against established models of estimation. Although Brown and Siegler provide a comprehensive framework of quantitative estimation, they do not offer a computational model of the estimation process. Thus, the goal was to develop a computational model that is consistent with the framework of Brown and Siegler.

The Mapping Model

Brown and Siegler postulated that two types of knowledge are necessary to make an estimation. First, knowledge about the mapping properties of the objects is required. This knowledge reflects the ordinal relation among objects, that is, how high an object ranks on the criterion of interest compared to the other objects. Second, knowledge about the metric properties of the criterion is necessary, such as its distribution, mean, or range. The mapping model describes how knowledge about the mapping properties of an object is linked to the metric properties of the criterion in the estimation process. In a first step, the mapping model uses the cue information to capture the mapping properties of an object: objects are grouped together according to their cue sums, so that the ordinal relations of the objects are inferred from the number of positive cue values. In a second step, to represent the metric properties of the criterion, a typical criterion value is derived for each category by considering the criterion values of the objects falling into the same category.

The mapping model only uses binary cue information, so that each cue can have either a positive or a negative value. Cues are coded so that they are positively correlated with the criterion. To group the objects together, the mapping model makes the simplifying assumption that all cues are equally important; thus, all objects that share the same number of positive cue values are put into the same category. In the second step, the mapping model derives a typical criterion value for each of the cue sum categories, represented by the median criterion value of the objects in the same cue sum category. To evaluate a new object, the mapping model computes its cue sum and takes the typical criterion value of the corresponding cue sum category as its estimate.
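
These two steps translate directly into a short Python sketch. The training objects are illustrative, and the fallback for cue sum categories that were never observed is an assumption, since the model description leaves this case open.

```python
import numpy as np

def fit_mapping_model(cues, criteria):
    # Steps 1 and 2 of the mapping model: group training objects by their
    # cue sum (number of positive cue values) and represent each category
    # by the median criterion value of the objects falling into it.
    cue_sums = cues.sum(axis=1)
    return {s: np.median(criteria[cue_sums == s]) for s in np.unique(cue_sums)}

def mapping_estimate(probe, typical):
    # Estimate for a new object: the typical (median) criterion value of its
    # cue sum category. If that category was never observed, fall back to the
    # nearest observed category (an assumption; the text leaves this open).
    cue_sum = int(np.sum(probe))
    if cue_sum in typical:
        return typical[cue_sum]
    nearest = min(typical, key=lambda s: abs(s - cue_sum))
    return typical[nearest]

# Illustrative training set: binary cues coded to correlate positively
# with the criterion.
cues = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 1, 0], [0, 0, 0]])
criteria = np.array([400.0, 320.0, 250.0, 240.0, 100.0])

typical = fit_mapping_model(cues, criteria)
print(mapping_estimate(np.array([1, 0, 1]), typical))  # cue sum 2 -> 320.0
```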

Dissertation Outline

In my dissertation, I propose the mapping model as a new model for quantitative estimation, and test it in several experimental studies against other competing theories of estimation. This dissertation is structured into three chapters that are based on three manuscripts.

The first chapter, The Mapping Model: A Heuristic for Quantitative Estimation, focuses on the theoretical foundations of the mapping model. Past research had focused on linear regression as the predominant model for analyzing quantitative estimations. Recently, however, regression models were criticized because they do not describe the cognitive process underlying estimation (Hoffman, 1960; Gigerenzer & Todd, 1999), and the need for more process-oriented models was voiced (Payne, Bettman, & Johnson, 1993; Gigerenzer & Todd, 1999). In response to this criticism, several alternative models were proposed to capture human estimation (e.g., Juslin et al., 2003). Thus, the goal of the first chapter was to derive a model that can not only capture the outcome of estimations but also provide a plausible description of the cognitive process. Furthermore, the model would need to compete with alternative approaches to quantitative estimation.

The framework for quantitative estimation by Brown and Siegler (1993) presents a plausible and comprehensive account of estimation processes. However, it lacks a precise formulation as a computational model of quantitative estimation. Thus, the aim was to provide a computational model that can be integrated into the framework of Brown and Siegler (1993) and to test it rigorously, in varying task environments, against current models of estimation: a linear additive model, an exemplar model (Juslin et al., 2003), and the heuristic QuickEst (Hertwig et al., 1999).

In the second chapter, Models of Quantitative Estimations: Rule-based and Exemplar-Based Processes Compared, I focus on a comparison of the exemplar model and the mapping model, following up on open questions from Chapter 1. First, the exemplar model and the mapping model both provide an account of estimation processes in situations in which linear additive strategies are less successful. However, the models assume quite different estimation processes: while the exemplar model proposes an implicit similarity-based process, the mapping model assumes a rule-based estimation process. Thus, one goal of the second chapter was to clarify under which conditions each of the two models describes human estimation. More specifically, I investigated the role of two cognitive components that are essential for the assumptions the models make about the estimation process: exemplar memory and knowledge abstraction. I examined which task features affect these components and thus could be responsible for a shift from rule- to exemplar-based processing. Second, whereas in the first chapter I concentrated on quantitative measures to compare the models, in the second chapter my goal was to devise and include a qualitative test that would allow the models’ assumptions to be tested more directly. By constructing situations in which the models, due to their assumptions about the estimation process, make qualitatively different predictions, I provided a more rigorous test of the models’ assumptions.

The third chapter, Predicting Sentencing for Low-Level Crimes: A Cognitive Modeling Approach, presents an application of the mapping model to a real-world problem. In the previous chapters, the mapping model was tested exclusively on laboratory data. However, if the mapping model is to provide a plausible account of estimation, it also needs to perform well on real data. Sentencing decisions provide an interesting application: they are a common real-world estimation problem, yet they resemble the laboratory tasks in several ways. In sentencing, a continuous criterion, the magnitude of the sentence, has to be determined on the basis of multiple cues, namely the characteristics of the offense and the offender. In addition, sentences often have a highly skewed distribution, which makes them an especially interesting task because the mapping model performed well in a similar environment in the laboratory. Moreover, in the legal domain, a recent discussion has raised the question of how far legal decision makers abide by the law (Dhami & Ayton, 2001; Gigerenzer, 2006). This question is highly relevant because sentencing decisions involve highly complex material and are often made under time pressure, making it probable that legal decision makers deviate from the rather complex legal regulations. Here, the mapping model could make a contribution by highlighting the importance of the cognitive process for decision making.

