
6  Conclusions

The general question motivating this thesis has been how task instructions are transformed into effective task sets that control instructed behavior. Although most researchers would probably agree that experimental instructions somehow determine how task sets are configured and therefore are important for the outcome of an experiment, relatively little is known about how exactly task instructions are compiled into task representations that are used to control behavior. The focus of this thesis has been on how the specific response labels used in the verbal instructions affect response coding in two-choice tasks involving spatially organized responses (i.e., left-right keypress responses). More specifically, the main question of this thesis has been whether the contents of response instructions directly determine how manual keypress responses are coded and accessed.

A promising way of addressing this question is to study the impact of response instructions on compatibility effects. This is so because compatibility effects are typically attributed to response priming that arises as a consequence of a match versus mismatch between stimulus and response codes, or between response codes on two concurrently performed tasks. Consequently, investigating which match relations lead to compatibility effects under different instruction conditions allows conclusions to be drawn about the cognitive codes that are used to control responding.

In Chapter 2, three theoretical positions have been discussed that differ in their assumptions about whether and how instructions affect the coding of spatially organized responses, and hence in their predictions concerning the nature and size of compatibility effects under different response instructions.

According to the spatial coding hypothesis (e.g., De Jong et al., 1994), (response) instructions merely constrain how relevant stimulus attributes are mapped and translated to responses, without affecting response coding per se. Rather, this view assumes that responses are coded in terms of relative (i.e., left-right) key location whenever the spatial dimension allows discriminating between responses. Consequently, instruction-independent spatial compatibility effects of normal size and direction should be observed whenever response-overlapping spatial information is present or activated. Compatibility effects other than spatial ones should not occur as a function of overlap between task-irrelevant stimulus or concurrent response attributes and the instructed non-spatial response dimension. Rather, non-spatial compatibility effects should be restricted to the relevant S‑R dimension, that is, they should be attributable to translation efficiency (i.e., some intermediate translation stage) in the conditional route.

This view has been contrasted with the direct coding hypothesis, which assumes that response labels directly influence response coding. According to this view, response labels activate their corresponding concepts, which become part of the response representations and can be used to control responding. Because response-overlapping stimuli (or responses) are assumed to directly activate their corresponding responses, the direct coding hypothesis predicts compatibility effects resulting from overlap with the instructed response dimension, even when the instructed response dimension is non-spatial and the response-overlapping attribute is task-irrelevant.

Two versions of such a direct coding hypothesis have been distinguished with respect to spatially organized keypress responses. According to the weak version, as represented, for instance, by the DO model (e.g., Kornblum et al., 1990), top-down control of response coding is restricted. That is, instructed (non-spatial) codes cannot be weighed more strongly than uninstructed (spatial) codes. Consequently, this view makes predictions similar to those of the spatial coding hypothesis regarding spatial compatibility effects. More specifically, the weak direct coding hypothesis predicts that spatial compatibility effects are largely unaffected by (non-spatial) response instructions.

In contrast, according to the strong version of the direct coding hypothesis, the specific motor codes that are needed to perform the instructed response might primarily be accessible via the mental representation activated by the response label. According to this view, which seems consistent with the intentional feature weighing hypothesis (e.g., Hommel et al., 2001), it is primarily intended (instructed) action goals that are assigned and linked to attended stimulus features. Hence, instructed (intended) stimulus and response features (which can be relatively abstract and non-spatial) are weighed more strongly than irrelevant features, although the latter may still be part of the action representations. Accordingly, only the strong version of the direct coding hypothesis predicts that spatial compatibility effects are reduced under non-spatial response instructions.

The general conclusion drawn from the literature review (Chapter 3) on the impact of response instructions on a variety of compatibility effects (i.e., response coding) has been that results are highly inconclusive with respect to the different coding hypotheses, at least where non-spatial response instructions and response coding are concerned (see Chapters 3.1.5 and 3.3 for summaries).

In the empirical part of this thesis (Chapters 4 and 5), I therefore attempted to assess directly whether or not participants code their responses in terms of arbitrary categories when so instructed, and whether non-spatial response coding can override spatial coding. The rationale underlying the experiments was to vary the response instructions for manual (left and right) keypress responses to arbitrary stimulus attributes. This was done by instructing the response keys either as left vs. right keys (spatial instructions) or as blue vs. green keys (color instructions). Two experimental approaches were used to investigate whether and how instructions determine response coding. The first set of experiments (Experiments 1-3, Chapter 4) used a dual-task procedure involving overlapping vs. non-overlapping responses on the two tasks. In the second set of experiments (Experiments 4-5, Chapter 5), the dual-task results were extended to a one-trial Simon-type task with delayed position presentation.

In Experiments 1 and 4, spatially organized keypress responses were instructed spatially (i.e., the response keys were instructed as left and right) and overlapped with responses on a concurrently performed verbal task (i.e., “left” and “right” responses on the verbal task in Experiment 1) or with the task-irrelevant position of go/no-go signals (Experiment 4). In both experiments, substantial spatial compatibility effects were observed. Using the dual-task approach, Experiment 2 sought to generalize the spatial cross-task compatibility effects to an arbitrary response dimension. To this end, manual responses as well as responses on the verbal task were instructed in terms of color. Substantial forward (i.e., verbal → manual) and backward (i.e., manual → verbal) color-based compatibility effects were observed. Finally, in Experiments 3 and 5, manual responses were again instructed in terms of color, but this time spatial coding was assessed. This was done by determining the compatibility effects resulting from ‘implicit’ overlap between non-spatially instructed manual keypress responses, on the one hand, and spatially defined concurrent responses (i.e., “left” and “right” verbal responses in Experiment 3) or irrelevant stimulus position (Experiment 5), on the other hand. Both the spatial inter-task compatibility effects (Experiment 3) and the location-based Simon effect (Experiment 5) were strongly reduced and statistically nonsignificant under non-spatial response instructions. These results have several important implications, which will be discussed in turn.



Implications for response coding. First, the backward and forward color compatibility effects between verbal and manual color responses under color instructions (Experiment 2) suggest that the color instructions for the manual responses, possibly assisted by the repeated presentation of color patches, primed conceptual codes belonging to the color dimension that are, or at least can be, used in response selection. This finding extends demonstrations of inter-task consistency effects that used spatial response instructions, and it contradicts spatial coding accounts, which assume obligatory spatial coding regardless of response instructions (e.g., De Jong et al., 1994; Lu, 1997). Moreover, it extends results on arbitrary code integration with practice by indicating that instructions may suffice to implement the intention to make color responses, thereby ‘coloring’ spatially organized keypress responses. Such a finding is more easily explained by the two versions of the direct coding hypothesis, according to which non-spatial (instructed) features can be used in the control of responding.

Experiments 3 and 5, on the other hand, suggest that color codes were not only part of the action representations, but that color coding can override spatial coding. More specifically, Experiments 3 and 5 suggest that color codes provide a viable alternative route to motor program activation (see Figure 13), and that codes can be weighed according to instructions. Accordingly, non-spatial response coding renders (irrelevant) spatial information less influential because spatial codes contribute less to responding. The results of Experiments 3 and 5 contradict coding accounts such as the DO model (e.g., Zhang et al., 1999) that can be considered instances of the weak direct coding hypothesis. Because these models assume comparable activation via the direct route for implicit and explicit (conceptual) overlap, they predict spatial effects under color instructions. Rather, the results seem to support the intentional weighing hypothesis (e.g., Hommel et al., 2001), according to which intended (instructed) codes dominate how a response is represented and accessed. Therefore, response instructions seem to be at least in part responsible for how an otherwise identical (or very similar) task is performed, and for whether (irrelevant) spatial information can be ignored or not. As a consequence, the present results also bear on issues of intentional control and automaticity, which will be discussed after some comments on my assumptions regarding the ‘format’ or nature of the response codes that, on my interpretation, are primarily responsible for response selection.

Some speculations in this regard seem in order to better relate the present interpretation to the existing literature, and to avoid confusion as to what I mean by ‘conceptual’ codes mediating response selection.



Figure 13: Sketch of the major theoretical implications regarding the impact of color instructions on response coding (adapted from Hommel, submitted).

When keys are instructed in terms of color, color codes are integrated into the response representation. Instructions pre-activate the codes of a particular dimension (location or color), rendering spatial information a less effective prime under color instructions (see text for details).
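To make this weighting idea concrete, the following minimal Python sketch illustrates how instruction-dependent weighting of response codes could modulate location-based priming. It is purely illustrative: the feature values, numerical weights, and function names are my own assumptions and are not part of the thesis or of any of the models discussed.

```python
# Purely illustrative sketch: a toy feature-weighting account of how response
# instructions could modulate location-based response priming (cf. Figure 13).
# All feature values and numerical weights below are hypothetical.

RESPONSES = {
    "left_key":  {"location": "left",  "color": "blue"},
    "right_key": {"location": "right", "color": "green"},
}

def location_priming(stimulus_location, weights):
    """How strongly the irrelevant stimulus location primes each response key,
    given instruction-dependent weights for the response codes."""
    return {
        name: weights["location"] * float(codes["location"] == stimulus_location)
        for name, codes in RESPONSES.items()
    }

# Hypothetical weighting: the instructed dimension dominates, while the
# uninstructed dimension still contributes, but only weakly.
spatial_instructions = {"location": 1.0, "color": 0.2}
color_instructions   = {"location": 0.2, "color": 1.0}

# A stimulus appearing on the right primes the right key strongly when the
# keys are coded as left/right ...
print(location_priming("right", spatial_instructions))
# {'left_key': 0.0, 'right_key': 1.0}

# ... but only weakly when the keys are coded as blue/green.
print(location_priming("right", color_instructions))
# {'left_key': 0.0, 'right_key': 0.2}
```

Under these hypothetical weights, the same irrelevant stimulus location primes the spatially corresponding key far less under color than under spatial instructions, mirroring, in a purely illustrative way, the reduction of the location-based Simon effect observed in Experiments 3 and 5.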

As noted above, my results and interpretation seem most consistent with the theory of event coding (TEC; Hommel et al., 2001). However, TEC rather explicitly assumes that stimuli and responses are coded in terms of distal, perceptually based codes in a common representational medium. On the other hand, some researchers propose that color compatibility effects represent some sort of symbolic compatibility, implying that some type of symbolic codes, often identified with the linguistic system, contribute to the effect. Still others, including me, propose that such effects are largely conceptual, that is, meaning based. What I mean by ‘conceptual’ is that internal representations of the instructed categories become an integral part of the task sets and are used to control instructed responding. More specifically, I believe that instruction understanding involves both the extraction of propositional representations and the construction of quasi-analogous situation models (e.g., Johnson-Laird, 1983). This implies that, in my view, the category representations or meanings contained in the task sets are not only intensionally defined (i.e., with reference to other categories; e.g., left as meaning ¬right), but also extensionally, that is, in terms of their referents in the real or represented world (cf. Johnson-Laird, Chaffin, & Herrmann, 1984; also see Barsalou, 1999). Viewed this way, conceptual coding can be considered at least partially perceptual or quasi-perceptual.



How such a notion of conceptual coding relates to ‘symbolic’ or ‘verbal’ coding views is less clear. This is so because the latter terms seem to be very loosely defined and to be used with apparently different meanings within the compatibility literature. First, it appears as if the two terms are often used interchangeably, that is, ‘symbolic’ is equated with ‘verbal’. Second, ‘verbal’ coding is not consistently defined. For example, translation models of the Glaser and Glaser type (1989; see Chapter 3.1.4) seem to restrict the terms ‘verbal labels’ or ‘verbal system’ to purely lexical representations (i.e., concept names) that refer to semantic representations but do not themselves represent semantics. The other extreme (i.e., ‘linguistic’ codes referring to purely semantic representations; e.g., Mattes et al., 2002) or some mixture of both (i.e., verbal codes containing names and some elementary semantics; e.g., Umiltà, 1991) has also been proposed. At present, I do not see a convincing theoretical basis or empirical support for the view that verbal (in the sense of lexical) codes substantially contribute to manual color responses once a task set is implemented, at least when the keys are not labeled with color words (see Chapter 3.1.4 for labeling effects in the manual Stroop task). Therefore, in my view, the distinction between symbolic and spatial compatibility lacks motivation. Rather, both types of compatibility should be considered conceptual (cf. Alluisi & Warm, 1990).

This is not to say that retrieval of concept names (i.e., inner speech) might not be helpful in concept activation during implementation or reconfiguration of S‑R mappings (e.g., Emerson & Miyake, 2003; Goschke, 2000). However, I believe that verbal labeling processes mainly help to activate concepts, and thus may support implementation (and perhaps, consolidation) of S‑R rules, but become less relevant once task sets have been implemented.

Implications for intentional control and automaticity. At a general level, the present results also bear on issues of intentional control and automaticity. On the one hand, they speak to the functional basis of what Luria (1961) called the ‘directive function of speech’, that is, how instructions come to control behavior. Luria demonstrated that the ability to recall instructions does not necessarily imply the ability to follow them. For instance, he observed that young children and patients with frontal lobe lesions, while being perfectly capable of understanding and recalling instructions, nevertheless have problems behaving consistently as instructed. That is, they show deficits in ‘controlled’ behavior that bear some similarity to what has become known as ‘goal neglect’ (e.g., Duncan, Emslie, Williams, Johnson, & Freer, 1996). Thus, it seems as if instruction following requires the ability to translate instructions into internal models that can be used to control behavior. The present results suggest that instructions do not merely set up general constraints (e.g., by specifying the task-relevant stimulus category; see Chapter 2.1), but that the details or the specific contents of instructions (i.e., response instructions in the present study; but see, for example, Kunde, Kiesel, & Hoffmann, 2003, Exp. 3, for related findings concerning stimulus instructions) at least partially determine how internal models of the tasks are set up. More specifically,

the resulting task set is likely to reflect the way the task is understood and interpreted by the perceiver/actor and, hence, determines how stimuli are coded (e.g., which stimulus features are attended and linked to response features) [and] how responses are coded (e.g., which response features are attended and linked to response features) [...]. (Hommel, 2000, p. 266)

Regarding general models of action control, such as the Logan and Gordon (2001) model, this implies that the response labels used in the instructions affect how parameters or parameter values are compiled from verbal instructions, and hence how a task is performed. This should be taken into account by extensions of models like Logan and Gordon’s, which still need to specify how verbal instructions are transformed into parameters, and which factors determine how this is done.

While the present results indicate that instructional details such as arbitrary instructed categories can be used in responding (that is, can determine parameter values or pathways that cannot be assumed to be in the default response repertoire), at least when the relevant response categories are consistently primed throughout the experiment, future research needs to address under which conditions this conclusion does not hold. Such research will need to consider findings indicating that, in some situations, instructions are not, or not consistently, followed.

For example, findings from the response-effect compatibility literature (cf. Chapter 3.1.3) indicate non-instructed response coding after practice by demonstrating the use of irrelevant (often arbitrary) codes that may have been primed through practice and/or may have proven useful for the task at hand. Similarly, Kunde et al. (2003; Experiments 2 and 4) demonstrated that the internal model of the target (stimulus) set can be fine-tuned after relatively little practice: masked priming was restricted to those stimuli from the instructed target categories that were actually experienced as targets, whenever targets could easily be distinguished from non-targets.

Therefore, it is conceivable that non-spatially instructed responses might become (spatially) re-coded after practice under less optimal conditions than in my experiments (e.g., without repeated priming of the instructed response dimension).



In a similar vein, other findings suggest that the details of instructions are sometimes ignored or re-interpreted. For instance, Prinz, Tweer, and Feige (1974; cited in Eimer, Nattkemper, Schröger, & Prinz, 1996) found that participants who had to detect certain targets (e.g., the letters ‘A’ and ‘C’) in a visual search task were slowed by, or even reported, pseudo-targets (i.e., letters that had not been defined as targets by the instructions and that were introduced after relatively little practice; e.g., the letter ‘B’). This result indicates that participants performed the task by looking for items that deviated from their internal models of the non-targets, rather than by matching the input to instruction-defined representations of the targets. Similarly, one possible explanation of the often observed interaction between compatibility effects in tasks with simultaneous S‑R overlap on two dimensions (e.g., the two-dimensional spatial mapping task and the H&M task; see Chapters 3.1.1 and 3.1.4, respectively) is that subjects re-interpret the instructions and perform these tasks by applying ‘same’ and ‘different’ rules (i.e., logical recoding) to both task-relevant and irrelevant stimulus attributes.

In order to gain a more comprehensive understanding of how instructions are used to control behavior, and how instructed S-R rules are implemented within the cognitive system, future research will also need to generalize the present findings to more complex instructions and stimulus and response arrangements. For instance, it should address whether instructional factors other than category labels, such as the specific stimulus and response examples given during instruction, the syntax of the instructions, and/or the order in which they are mentioned, also affect the contents of the resulting task set. That such factors might contribute to instruction understanding and task set configuration is suggested by findings from the text comprehension and problem-solving/reasoning literature (e.g., Johnson-Laird, Byrne, & Schaeken, 1992), on the one hand, and the learning literature (especially category learning and categorization; e.g., Nosofsky, Clark, & Shin, 1989), on the other hand.

In addition to providing insights into the functional basis of instructional and intentional control of behavior, the present work adds to and extends findings and reasoning on the automaticity of S‑R translation and/or response activation. More specifically, the present findings seem to fit nicely with a ‘prepared reflex’ view of automaticity (see Hommel, 2000, for a comprehensive discussion), which holds that (a) once implemented, even arbitrary S‑R rules are applied in an automatic (stimulus-triggered) fashion, but that (b) automatic response activation depends on how the task set is set up.



First, the forward and backward compatibility effects in the dual-task Experiments 1 and 2, as well as their lack of dependence on practice, add to the literature by showing that relatively little practice with an arbitrary or even incompatible mapping leads to relatively strong automatic links that cannot be switched off when no longer needed (e.g., Hommel & Eglau, 2002; Proctor & Lu, 1999; Tagliabue et al., 2000). Second, Experiment 5 (and Experiment 3) adds to the evidence suggesting that the unconditional (direct) route is not as unconditionally automatic as sometimes assumed. Rather, instead of being primarily due to ‘intrinsic’ S‑R strength (either hard-wired or highly overlearned; cf. Lu, 1997; Lu & Proctor, 2001), automatic response activation seems to depend on (a) how the intended responses are coded, (b) the readiness to respond with a particular key (Valle-Inclán & Redondo, 1998), and (c) whether the presented stimuli match the represented trigger conditions on the stimulus side (e.g., Kunde et al., 2003).

A look back and ahead. In sum, the present work addressed the questions of whether and to what extent the response labels used in experimental task instructions determine how responses are coded, and hence how behavior is controlled. The results presented in this thesis suggest that research participants code and access responses as instructed even when the response labels refer to arbitrary, non-spatial dimensions, presumably by activating and using the category representations that correspond to the instructed labels. The findings imply a high degree of flexibility in coding that, in turn, determines which side effects (e.g., the impact of irrelevant stimulus attributes) will be observed.

However, in order to gain a more comprehensive understanding of instruction following, or what Luria (1961) called the ‘directive function of speech’, future research needs to determine the boundary conditions of such labeling effects and to generalize them to more complex instructions as well as response arrangements (e.g., four-choice responses). For example, one question is when and how simple S‑R instructions are re-interpreted right away (e.g., in terms of same/different rules). Moreover, it will be interesting to see when and how learning modifies instructed responding. That is, under which conditions are ‘instructed’ task sets fine-tuned to task demands such that the coding or weighing of codes changes during practice? Of course, one would also need to address which processes (e.g., inner speech) afford the implementation of S‑R rules in the first place. Research along these lines will not only inform theorizing about how actions are intentionally controlled, but will also contribute to our understanding of how different types of automaticity depend on and relate to each other and to instruction.

