
1.  Introduction

1.1. Topic and overview

Faces are very important stimuli in social communication. They convey relevant information like identity, age, or gender. Admittedly, there are also many other cues like speech, voice, and intonation or body postures that are relevant for recognizing a person or for interacting with that person. Nevertheless, faces are still the most important cues for communication. Encountering a familiar person’s face triggers both the recognition of facial familiarity and the processing of emotional or semantic information, which in turn is driven by the multitude of information a face conveys. Despite the seemingly complicated processes that are involved, we are all face experts, using the skill of face recognition easily and effortlessly, beginning at birth. Therefore, it is not surprising that face-like schematic visual patterns are more effective in capturing the attention of newborns than non-face-like configurations (Johnson, Dziurawiec, Ellis, & Morton, 1991; Maurer & Young, 1983). It seems that already in infants the skill of recognizing individual faces has begun to develop, as demonstrated by their preference for their mother’s face (Bushnell, 2001). This may have high ecological value, because receiving attention and help from the caring person is essential for survival.

Neuropsychological impairments can also provide insight into the importance of the ability to recognize faces in normal life. Prosopagnosia, a selective inability to recognize the identity of familiar faces despite intact visual recognition of other objects, is a striking example. This impairment, which is mostly based on widespread bilateral lesions of the occipitotemporal cortex, is socially disabling, although patients can develop other strategies to compensate for the problem. In addition, the selective loss of specific aspects of face recognition hints at the many processes that are involved in this skill. Firstly, a face has to be recognized as a face before many other aspects can be extracted. One can recognize the identity of a face and recall semantic information about a person. Even for unfamiliar faces, one can still recognize the gender or approximate age of a person. Facial expression, another aspect of face recognition, is very important in social communication. All these various kinds of information can be used for further cognitive processing.

This dissertation will focus on the recognition of facial expression and identity of familiar and unfamiliar faces. Based on common observation, the recognition of facial expression and of facial identity seems to function easily in everyday life. We can recognize a familiar face out of innumerable faces and extract information about facial expressions in just a moment. Moreover, the recognition of facial expressions appears to be independent of identity, for we can recognize a facial expression regardless of the familiarity of a person. Conversely, we do not need the facial expression to recognize a person’s identity. On the other hand, situations can occur in which one has to look twice to recognize a familiar person because one has never seen her with that particular expression. In addition, most people would contemplate whether an unfamiliar person is familiar or not when that person smiles at them in the street. These two examples show that there is also reason to assume an interaction between the processing of facial expressions and identity.

The present dissertation raises as its main question whether there is a facilitative interaction between the perception of facial expression and facial familiarity on a cognitive and functional basis. Although most of the data suggest an independence of facial familiarity and facial expression (e.g. Young, McWeeny, Hay, & Ellis, 1986; Young, Newcombe, de Haan et al., 1993; Bobes, Martin, Olivares, & Valdés-Sosa, 2000), recent data lead to the conclusion that there is a facilitative interaction of both processes in one or the other direction. Recent studies suggest an interaction between facial familiarity and the discrimination of facial expression (Baudouin, Sansone, & Tiberghien, 2000; Baudouin, Gilibert, Sansone, & Tiberghien, 2000a; Schweinberger & Soukup, 1998), as well as between facial expressions and the discrimination of familiar faces (Endo, Endo, Kirita, & Maruyama, 1992). According to these findings it is hypothesized that facial familiarity may facilitate the discrimination of facial expression. On the other hand, facial expression may also influence the decision that a face is familiar.

I now want to give a short overview of the present dissertation. In the introduction, two models of face recognition are outlined and a brief insight is given into empirical controversies concerning the main question of an interaction of facial expressions and facial familiarity. In addition, all basic principles, the paradigm, and the methods used are explained in order to make the two subsequent experimental parts comprehensible. The introduction is followed by two experimental parts. Part I introduces experiments which raise the question as to whether there is a facilitative interaction between facial familiarity and the discrimination of facial expressions. Part II investigates whether facial expressions can have a facilitative effect on the decision of whether a face is familiar or not.

Detailed empirical evidence is cited in the introduction of Part I, leading to the main hypothesis of a facilitative interaction between facial familiarity and the discrimination of facial expression. Six experiments are reported which applied an expression discrimination task and various dependent measures. Experiments 1 to 3 used personally familiar and unfamiliar faces and tried to elucidate the main question by means of performance data, event-related potentials (ERPs), and skin conductance response (SCR). A possible interaction of the processes in question should be reflected in reaction times (RTs) and error rates. The ERP components served as time markers that are assumed to be linked to particular functional processing stages. Taken together, the behavioural data and ERP components can give a hint as to which functional cognitive processes are facilitated, that is, the functional locus of a possibly observed interaction. The performance data of Experiments 1 to 3 suggest a facilitative interaction between facial familiarity and the discrimination of facial expression. Although not clear-cut, the ERP data also reflected a facilitative interaction at late perceptual processing stages. Therefore, the subsequent Experiments 4 and 5 were intended to improve control over the stimulus material by using a stimulus set of unfamiliar faces. A learning procedure was applied in order to familiarize one half of the faces. Participants had to perform an expression discrimination task in a subsequent test phase. In contrast to the behavioural data of the previous experiments, no interaction was found between perceptual familiarity and the discrimination of facial expression. This was rather unexpected and may be due to the lack of semantic information for the experimentally familiarized faces. Therefore, the following Experiment 6 used famous faces as a stimulus set, because semantic knowledge can be presumed for them. Again, behavioural data and ERPs were used as dependent measures. Contrary to the predicted effect, no facilitation due to facial familiarity was observed in the expression discrimination task. At the end of Part I a conclusion is drawn concerning the main hypothesis of an interaction between facial familiarity and the discrimination of facial expressions. The results suggest that a facilitative interaction of facial familiarity and the discrimination of facial expressions is observable under certain circumstances. Based on the results of Experiments 1 to 3, the interaction was pronounced for personally familiar faces displaying happiness. ERP results, although not clear-cut, suggested late perceptual processing stages as the functional locus of interaction, as indexed by the P300 component. Because an effect was only observed for personally familiar faces displaying happiness, the degree of familiarity as well as the kind of emotional expression may play an important role for an interaction of facial familiarity and facial expression discrimination.

In Part II the question is raised as to whether there is also an interaction of facial familiarity and facial expression in the opposite direction. Results are cited leading to the hypothesized interaction between facial expression and the discrimination of facial familiarity. Hence, Experiment 7 employed a familiarity discrimination task using personally familiar and unfamiliar faces. Again, behavioural data and ERPs were recorded. In line with the results of the previous part, the data suggested an interaction between facial expression and the discrimination of facial familiarity only for personally familiar faces. This time, clear-cut ERP results pointed to response selection as the facilitated functional processing stage. In the last experiment (Experiment 8) participants had to perform a learning procedure and a subsequent familiarity discrimination task on experimentally familiarized and unfamiliar faces. No evidence for an interaction between facial expression and the discrimination of facial familiarity was found for familiarized and unfamiliar faces. Again, the degree of familiarity might be important for an interaction between both processes, as it was only observed for personally familiar faces.

Finally, all collected results are discussed thoroughly and related to each other in the general discussion. A conclusion is drawn in an attempt to answer the hypotheses and questions raised.

1.2. Empirical overview and paradigm

1.2.1. Face recognition and models of face recognition

In the last decades face perception has become an important field in cognitive science, and the body of literature addressing the issue of face recognition has grown to reflect its importance. Yet, as always, there are still many unclear points concerning the involved functional architecture, as well as the neural correlates of the already mentioned cognitive, emotional, and automatic processes that are triggered by the multitude of facial information. Although the various processes involved in face recognition work easily in everyday life, it is complicated to explain them on a cognitive and functional basis. Many attempts have been made to model the involved processes and different aspects of face recognition (Bruce & Young, 1986; Burton & Bruce, 1993; Haxby, Hoffman, & Gobbini, 2000). However, the scope of a dissertation is narrow, so only the models relevant to the broader question can be mentioned here, and only a short introduction to the empirical literature on face recognition is given.

One of the most influential models of face recognition was introduced by Bruce & Young (1986; Fig. 1). It is a functional model based on empirical results as well as on data derived from clinical observations of patients who suffer from the selective loss of different aspects of face recognition. The model assumes several specialized modules which subserve the functional processes. The hierarchically ordered modules are thought to work in parallel and independently. Seven distinct codes, which can be derived from faces, are proposed as output information of the functional modules. An expression-independent description (structural code) is extracted from the first view-centered pictorial description (pictorial code) in an initial structural encoding process. This output information is matched with a face recognition unit (FRU), which is thought to exist uniquely for familiar faces. If the face is familiar, a subsequent person identity node (PIN) contains semantic information about the recognized person (identity-specific semantic code). Finally, for familiar faces the name can be recalled (name code). The other processes of expression analysis (expression code), directed visual processing (visually derived semantic code), and facial speech analysis (speech code) are based on the earlier pictorial code, as they can be performed on both unfamiliar and familiar faces.

Figure 1. The functional model of face recognition by Bruce and Young (1986).

The model also makes assumptions about the recognition of facial expression, which is important for the topic of the present dissertation, whereas other functional models do not (Burton, Bruce, & Johnston, 1990; Breen, Caine, & Coltheart, 2000). It is assumed that the recognition of facial familiarity and of facial expressions functions in parallel and independently. Both processes rely on different codes, the pictorial code and the view-independent structural code, respectively. Although the model of Bruce and Young (1986) can explain a wide range of empirical results (Young, 1998), there are also findings which contradict the assumptions of their model (e.g. Rossion, 2002; Baudouin et al., 2000; Schweinberger & Soukup, 1998; Endo et al., 1992). In particular, the proposed independence and serial succession of the functional modules are questioned by these results, which imply the need for further research. The model has been complemented (Burton et al., 1990) in the last years, and modifications have been suggested (Abdel Rahman, Sommer, & Schweinberger, 2002). Nonetheless, the original model by Bruce and Young (1986) and the refined version (Burton et al., 1990) have proven convenient for generating hypotheses, as they can explain a wide range of results from studies that have been published since then (Le Gal & Bruce, 2002; Eimer, 2000; Bentin & Deouell, 2000; Ellis, 1989). However, it is an all-embracing functional model which only explains the gross processes. It is also important to understand the specific information which is processed by the proposed functional modules.

The recognition of faces can rely on different features or cues which supply information. A differentiation can be made between “first order” features such as eyes, nose, hair, mouth, or shape information and “second order” features, mainly comprising the configuration or spatial arrangement of the first order features. In addition, pigmentation and texture of the skin can also convey relevant information for facial recognition. Another differentiation can be made between external or “cardinal” features (Ellis, 1986) like hair, hairline, or face shape and internal facial features (eyes, nose, or mouth as well as their spatial arrangement). Ellis, Shepherd, and Davies (1979) found facilitated recognition from internal, when compared to external, features only for famous faces. It has been suggested that unfamiliar face recognition and face matching rely more on external facial features (Bruce, Henderson, Greenwood, et al., 1999). In contrast, internal features gain importance the more familiar a face becomes (Young, Hay, & Ellis, 1985), as they vary less than external features (Bruce & Young, 1998). It is obvious that the recognition of facial expression relies mainly on internal features. One could suppose that overlapping information which is used for the recognition of facial familiarity and facial expression may cause an interaction between both processes. However, Calder, Young, Keane et al. (2000) examined this question by applying the composite effect to analyze the configural features that are used to discriminate facial expression or identity. The composite paradigm (see Bruce, 1988) shows that the recognition of configural features for facial identity or expression is disturbed when the top and bottom halves of two different individuals or facial expressions are aligned (composite effect). For an expression discrimination task, the results of Calder et al. (2000) revealed composite effects in RT which were independent of the identities represented by the facial top and bottom halves. For an identity discrimination task, the composite effect was also independent of the expressions displayed by the two facial halves. In another study, Calder, Burton, Miller et al. (2001) applied a principal component analysis (PCA) of the pixel intensity information to the Ekman and Friesen (1976) expressive faces. The resulting factors from the PCA were analyzed further with a linear discrimination procedure in order to identify the factors that are most important for recognizing facial expressions, facial identity, or the gender of a face. The computational procedure revealed that the coding of facial expression relies largely on different components than the coding of identity. Both studies of Calder et al. (2000; 2001) suggest that the configural information which is used to recognize either facial identity or facial expression is different and overlaps only partly.
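To make the logic of such a computational analysis more concrete, the following sketch (in Python, with entirely hypothetical image data and labels, and not the original pipeline of Calder et al., 2001) illustrates how principal component scores derived from pixel intensities can be submitted to separate linear discriminant analyses for identity and for expression, and how the discriminant weights can indicate which components carry which kind of information.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Hypothetical data: 60 grey-level face images of 64 x 64 pixels, flattened to vectors;
    # 10 posers (identities), each photographed with 6 expressions.
    rng = np.random.default_rng(0)
    faces = rng.normal(size=(60, 64 * 64))
    identity_labels = np.repeat(np.arange(10), 6)
    expression_labels = np.tile(np.arange(6), 10)

    # PCA on the pixel intensities; the component scores describe each image.
    pca = PCA(n_components=20).fit(faces)
    scores = pca.transform(faces)

    # One linear discriminant analysis per classification problem.
    lda_identity = LinearDiscriminantAnalysis().fit(scores, identity_labels)
    lda_expression = LinearDiscriminantAnalysis().fit(scores, expression_labels)

    # Components with large absolute discriminant weights carry the respective information;
    # largely non-overlapping sets would mirror the conclusion of Calder et al. (2001).
    identity_weight = np.abs(lda_identity.coef_).mean(axis=0)
    expression_weight = np.abs(lda_expression.coef_).mean(axis=0)
    print("components weighted most for identity:  ", np.argsort(identity_weight)[-5:])
    print("components weighted most for expression:", np.argsort(expression_weight)[-5:])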

Many studies have implied that face recognition proceeds in a series of separable stages or functional processes (Campbell, Brooks, de Haan, & Roberts, 1996; Nachson, Moscovitch, & Umilta, 1995). These processes can be selectively impaired, as is evident from lesion studies. Just one example is prosopagnosia (see above), the inability to recognize previously familiar faces. In these patients the ability to recognize and match unfamiliar faces is still intact. Hence, different processes can be assumed which subserve the recognition of familiar and unfamiliar faces. Individuals who suffer from prosopagnosia do not recover from their impairment and are only able to compensate for it with non-facial cues like the voice or clothing of a familiar person. This suggests that face recognition might rely on a specialized neuronal system in the brain (Kanwisher, McDermott, & Chun, 1997). In the last decade there has been an unresolved controversy concerning whether this system is independent of the general visual object recognition system (Gauthier & Tarr, 1997; Gauthier & Logothetis, 2000).

An important question within face recognition research concerns the underlying neuronal substrate. The loss of the ability to recognize previously familiar faces after damage to certain brain regions in patients with prosopagnosia may hint at the importance of these brain regions for familiar face recognition. This impairment is observed mainly after bilateral damage of the inferior temporal and occipitotemporal cortex (Tranel, Damasio, & Damasio, 1988; Damasio et al., 1986), although cases have also been observed after unilateral damage in the right hemisphere (Uttner, Bliem, & Danek, 2002). These findings point to a right hemisphere advantage for face recognition, which has also been suggested in other studies (Ellis, 1989; Schweinberger & Sommer, 1991; Rossion, Schiltz, & Crommelinck, 2003). Most of the studies with healthy participants addressing this question use functional magnetic resonance imaging (fMRI) or positron emission tomography (PET). Both methods have a high spatial resolution. Studies using ERP data, with the advantage of a high temporal resolution, provide a useful complement for explicating the temporal interplay of the involved structures and processes. The model of the distributed human neural system for face perception (Figure 2) proposed by Haxby, Hoffman, and Gobbini (2000) brings together the most relevant results of fMRI, PET, and also ERP studies from the last years. Hence, it is a good summary of the knowledge concerning the neuronal substrate underlying face recognition. Haxby et al. (2000) identified a core system in the occipitotemporal visual extrastriate cortex. The inferior occipital gyri are important for the initial visual analysis of faces. Projections to the lateral fusiform gyrus and to the superior temporal sulcus subserve the analysis of invariant aspects (identity) as well as of changeable aspects of faces (facial expression, eye gaze, or lip movement). The core system is supported by an extended system including brain regions that are important for several aspects of face perception and processing but also for other cognitive tasks. It acts in concert with the core system and includes processes that facilitate spatially directed attention to faces, speech or expression perception, as well as the processing of semantically mediated information. Although the model shares some elements with the aforementioned model of Bruce and Young (1986), it is much more closely related to today’s knowledge about the neural system. Therefore, it may provide more plausible predictions for experiments that are concerned with questions of face perception and face processing.

Figure 2. A model of the distributed human neural system for face perception by Haxby, Hoffman, and Gobbini (2000).

Although the model of Haxby et al. (2000) relies on a strong body of experimental results, there are still many unclear points and open questions. These concern, for example, the functional separation of the different regions, the temporal properties of the processes and interactions among the regions via back projections or links, and the role played by the lateral fusiform gyrus in expression perception, given possible characteristic expressions between individuals (Haxby et al., 2000). In addition, the distributed system may allow interactions of the functional processes that are ascribed to the different regions through the rich interlinking of the brain regions. Interactions between regions via neural linking and the temporal sequence of processing might be important prerequisites of an interaction between facial expressions and facial identity. The model allows this possibility, although it is unspecific about an interaction between both functional processes.

In summary, research on face recognition has gained importance over the last decades. The functional model of face recognition by Bruce and Young (1986) has proven to be influential and is useful for explaining a range of experimental results. Nonetheless, modifications based on conflicting results have been suggested. In general, the processing and recognition of faces rely on various featural, spatial, and configural information. An important question concerns the underlying neuronal system that subserves face recognition. The model proposed by Haxby et al. (2000) integrates the results of many studies examining this question.

1.2.2. Facial expression

Since the publication of the functional model of face recognition by Bruce and Young (1986) there has been a strong body of research which takes as its main question the recognition of facial identity (a brief overview of this research was given above). In contrast, fewer studies have been concerned with the recognition of facial expressions, even though research on facial expressions has a long tradition (e.g. Darwin, 1965). This imbalance only started to change in the last decade and still persists (Calder, Lawrence, & Young, 2001a). In the following paragraphs, I will attempt to give a short introduction to research on the recognition of facial expressions. The difference between the production of emotions and facial expressions within the sender and the detection or recognition of the facial expressions by the perceiver has to be pointed out. Although emotions can be perceived via different modalities (e.g. voice) or cues (facial cues, bodily gestures), the face seems to be the most important cue for perceiving an emotional state. Thus, many studies focus exclusively on the perception of facial expressions. The production of emotions within the sender and also the expression of emotions via other non-facial gestures is not the topic of this dissertation. Hence, only a short overview will be given of research on the recognition and perception of facial expressions as well as the underlying functional and cognitive processes within the brain.

In his book “The Expression of the Emotions in Man and Animals”, published in 1872, Charles Darwin described in detail how humans and animals express facial emotions. Based on his thorough observations, he claimed that facial expressions are universal throughout all cultures and races, and that they are not learned; they have their origins in the facial expressions of animals. After Darwin’s groundbreaking work it took almost a century until his observations were confirmed by systematic studies of Ekman and Friesen (1968; 1971). They portrayed several expressions such as happiness, fear, surprise, and disgust and displayed them to participants from different cultures (e.g. USA, Brazil, Chile, Japan). Independent of their cultural background, the participants were able to identify the facial expressions correctly. In 1971 Ekman and Friesen even presented the photographs to people in New Guinea who had had no contact with western or eastern literate cultures. When antecedent situations for a certain expression were described, these people picked the correct pictures in almost all cases. To this day it is widely accepted that expressions are innate and not learned. They are complex patterns of facial muscular and neuronal actions controlled by the central nervous system and triggered by specific stimuli (Ekman, 1984). However, it has to be mentioned that there are also gestures that are learned and culture specific. In addition, this might also hold true for the situational conceptualization of facial expressions or the learned suppression of expressed emotions in certain situations.

The universality of facial expressions has led to the proposal of so-called basic emotions. Although the number varies between studies, the most popular categorization is reflected in the set of emotional expressive faces by Ekman and Friesen (1976). Displayed by several male and female individuals, it contains facial expressions of anger, disgust, fear, happiness, sadness, and surprise. It has long been an issue whether the perception of facial expressions is categorical or dimensional. The proposed basic emotions imply a categorical perception of emotions. Observations of categorization errors of facial expressions led Woodworth (1938) to the assumption that emotional expressions are conceptualized along the continua pleasantness-unpleasantness and attention-rejection. One of the most influential dimensional, so-called circumplex models comes from Russell (1980). He introduced two bipolar dimensions of pleasure-displeasure and degree of arousal. Nonetheless, recent results using morphed faces and expressions support the notion that facial expression perception is categorical (de Gelder, Teunisse, & Benson, 1997; Calder, Young, Benson, & Perrett, 1996).

It is also still a matter of debate whether facial expressions are perceived in terms of parts or as a whole. Configural information of the whole face may play an important role for expression recognition, because face inversion makes it harder to recognize facial expressions (de Gelder et al., 1997). It is already known from identity recognition that face inversion disturbs configural perception and therefore impairs face recognition (Bentin, Allison, Puce et al., 1996; Rossion, Delvenne, Debatisse et al., 1999; Eimer, 2000a). Results of Puce, Allison, Asgari et al. (1996) suggest that the eye region plays an important role in facial expression perception. Possibly, the relative importance of single features depends on the kind of expression because of different patterns of facial and muscular activation. It is conceivable that the eyes are more important for the perception of fear and anger. In contrast, the distinctive feature for identifying happiness might be the mouth.

Recently, many studies have examined the neuronal system which subserves the recognition of facial expressions. As for facial identity, recognition impairments of neuropsychological patients hint at specific brain regions which may be involved. It is strongly suggested that the amygdala plays a prominent role in the perception and recognition of fear (Adolphs, Tranel, Damasio, & Damasio, 1994; Phillips, Young, Scott et al., 1998; Calder et al., 1996). In line with these clinical observations are also studies using fMRI or PET (Morris, Frith, Perrett et al., 1996; Vuilleumier, Armony, Clarke et al., 2002). Although fearful faces have often been used as expressive stimuli, recent studies also examine other facial expressions. Seemingly, a widely distributed system of brain structures subserves the recognition of facial expressions (Adolphs, 2002) and partly overlaps with structures which subserve face recognition in general (see Haxby et al., 2000). Many studies suggest a different involvement of brain regions for particular expressions (Kesler/West, Andersen, Smith et al., 2001; Blair, Morris, Frith et al., 1999; Sprengelmeyer, Rausch, Eysel, & Przuntek, 1998; Whalen, Rauch, Etcoff et al., 1998). Structures like the limbic system, the occipitotemporal neocortex (Adolphs, 2002), or the superior temporal sulcus (Haxby et al., 2000) are reported as important. In addition, the insula and basal ganglia have proven relevant for the recognition of disgust (Calder et al., 2001a). This is underlined by selective impairments of disgust recognition in patients with Parkinson’s disease (Sprengelmeyer, Young, Mahn et al., 2003), a disease which is caused by the loss of dopaminergic neurons innervating the basal ganglia.

It was outlined in section 1.2.2 that most researchers agree on a set of basic emotions. Evidence was cited for the universality of these emotions, which are to a large extent independent of cultural background or learning. Basic emotions seem to be perceived categorically. They seem to be recognized on the basis of part-based facial information, although configural information may also play a role. Neurophysiological results suggest a distributed system of brain structures which subserves the recognition of facial expressions. These structures are also important for face recognition in general. The involved brain regions might differ partly between different expressions.

1.2.3. Approach to the topic

According to the functional model of face recognition by Bruce & Young (1986), the processes in question, namely the recognition of facial expressions and of identity, are assumed to be independent. It is apparent that we do not need information about someone’s facial expression in order to identify that person. On the other hand, we can easily perceive the expression of familiar and unfamiliar people. The claim of independence was tested in a study by Young et al. (1986). Participants had to match simultaneously presented faces that were familiar or unfamiliar and had to react to identity or to facial expressions. Results revealed faster RTs for familiar than for unfamiliar faces in the identity matching task, but not in the expression matching task. According to the hypothesis, in the latter task familiarity is assessed by face recognition units, which affect neither the structural encoding nor the expression analysis stage; hence, no difference in RT between unfamiliar and familiar faces is expected. Similar RT results were obtained by Bobes et al. (2000) in an identity and expression matching task. Simultaneously recorded ERPs revealed different topographical distributions of scalp potentials for the two tasks and therefore provide evidence for the idea of distinct neural subsystems subserving the recognition of facial identity and of facial expression. Results from studies using fMRI point to the same conclusion of distinct neural correlates of facial recognition memory and the perception of facial expressions (Phillips, Bullmore, Howard et al., 1998). In addition, evidence supporting the independence of the systems comes from the double dissociation of both processes in patients suffering from brain injury. Tranel et al. (1988) report three patients with prosopagnosia, an inability to recognize facial identity, whose ability to recognize facial expressions was preserved. In another patient study by Young et al. (1993), the authors found a selective deficit in the processing of facial expressions which was completely unrelated to the recognition of familiar and unfamiliar faces. The same conclusion is derived from results of Alzheimer’s disease patients who were impaired in discriminating facial identities and in naming and pointing to different expressions, while the discrimination of facial expressions was preserved (Roudier, Marcie, Grancher et al., 1998). Another line of evidence for the independence of facial familiarity and facial expressions comes from the N170 component (Bentin et al., 1996), an ERP component which has been linked to the structural encoding of faces (Eimer, 2000a). It has been shown that this component is insensitive to facial familiarity and facial expressions (Eimer & Holmes, 2002; Herrmann, Aranda, Ellgring et al., 2002). Hence, an interaction between facial expression processing and facial familiarity can be ruled out at least for early structural encoding stages.

However, from an evolutionary perspective it might be of benefit to perceive facial expressions especially of familiar individuals in order to obtain reward or to avoid punishment. Furthermore, certain facial expressions like happiness or even sadness are more likely to be expressed towards familiar people. Therefore, finding an interaction between the perception of facial expressions and facial familiarity might be possible.

In addition, the computer analogy that the brain is organized into independent modules which work serially is no longer tenable (Grossberg, 2000). With its rich interlinking, the brain is easily capable of parallel and integrative information processing. The different neuroanatomical areas that are involved in the various aspects of face recognition are interconnected through many efferent and afferent links, as can be seen from the model of Haxby et al. (2000). Thus, interactions of the processes might be possible depending on the temporal properties and availability of different aspects of processed information. There is evidence that the processing of facial expressions starts as early as 80 ms (Pizzagalli, Regard, & Lehmann, 1999) to 120 ms after stimulus onset (Eimer & Holmes, 2002) in the human brain. Therefore, it stands to reason that the information extracted from expressive faces may modulate early structural face encoding processes (Pizzagalli, Lehmann, Hendrick et al., 2002; Sato, Kochiyama, Yoshikawa, & Matsumura, 2001).

Relevant to the topic of the present dissertation, the lateral fusiform gyrus, which is involved in the processing of invariant aspects of faces and identity (Haxby et al., 2000), is interlinked with the superior temporal sulcus and also with the amygdala. Both areas are crucial for the processing of facial expressions. If information about the expressed emotion of a face is available early on, it may be used to boost attention or arousal. In turn, subsequent perceptual processes might work more efficiently. Krolak-Salmon, Fischer, Vighetto, & Mauguière (2001) reported differential ERP activity between 250 and 750 ms in occipital and occipito-temporal areas that was related to emotional expression in a gender or expression counting task. They took this as support for top-down modulations from limbic (including the amygdala) and frontal areas influencing extrastriate visual areas. It is also well established that emotional stimuli, including expressive faces, can be processed more easily outside the focus of attention than neutral stimuli (Fox, Russo, & Dutton, 2002). Using fMRI, Vuilleumier, Armony, Clarke et al. (2002) found increased activation in the amygdala for emotionally expressive faces at task-irrelevant locations, independent of spatial attention. Emotional stimuli can guide focal attention to the relevant location because the amygdala is part of the attentional system (Eastwood, Smilek, & Merikle, 2001). Such mechanisms may have been important in the evolutionary development of many organisms in order to detect threats in the environment. Therefore, it is possible that under certain circumstances the recognition of facial expressions and identity may interact. Fast recognition of facial expressions, especially from conspecifics, could have been relevant for survival in evolution. Furthermore, even if identity and expression analysis use different functional and neuroanatomical components (Bruce & Young, 1986; Haxby et al., 2000), they are linked through the cognitive system and an interaction is not necessarily excluded.

Recently, there have been studies that suggest an interaction between facial familiarity and the perception of facial expressions and vice versa. Schweinberger and Soukup (1998) used the selective attention paradigm by Garner (1976) to address the question of an asymmetric relationship between facial identity and facial expressions. Four different stimuli varied along the two dimensions identity (person A vs. B) and expression (happy vs. sad). Participants had to perform a speeded discrimination task on one of the dimensions, which is then called the relevant dimension. Three different experimental conditions were applied. In the control condition the relevant dimension is varied between stimuli, whereas the irrelevant dimension is kept constant (e.g. only person A displays a happy or sad expression in case of an expression discrimination task). In the orthogonal condition both dimensions varied orthogonally (persons A and B each displayed both facial expressions). In the correlated condition both stimulus dimensions are correlated (e.g. person A displayed only the happy expression whereas person B displayed only the sad expression). If the irrelevant dimension influences the relevant one, an increase in RT would be expected for the orthogonal condition when compared to the correlated one. Accordingly, Schweinberger and Soukup (1998) found increased RTs in the orthogonal condition of the expression discrimination task when compared to the correlated condition. This did not hold true for the identity discrimination task. The results point to an asymmetric relationship: facial identity influenced the discrimination of facial expressions, but not vice versa. There is also evidence for an interaction in both directions for famous versus unfamiliar faces. The familiarity of a face can facilitate the discrimination of expression. In a study by Baudouin et al. (2000) participants had to discriminate neutral from happy facial expressions. It was expected that an interaction between facial familiarity and the discrimination task would only emerge if expression discrimination is slowed down by a hard condition. Therefore, faces were displayed either with a shortened presentation time of 15 ms (vs. 400 ms) or with a concealed mouth (vs. the whole face). Results revealed a facilitation of expression discrimination for famous faces when compared to unfamiliar faces only in the hard conditions. The authors concluded that facial familiarity increases the “perceptual fluency” and thereby the recognition of facial expressions under hard conditions.
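For illustration, the three Garner conditions described above can be written down as stimulus lists. The sketch below (Python, with hypothetical labels) only enumerates the conditions for an expression discrimination task; it is not the original stimulus set of Schweinberger and Soukup (1998).

    from itertools import product

    identities = ["A", "B"]             # irrelevant dimension in the expression task
    expressions = ["happy", "sad"]      # relevant dimension in the expression task
    stimuli = list(product(identities, expressions))   # the four face photographs

    # Control: the relevant dimension varies, the irrelevant dimension is held constant.
    control = [(i, e) for (i, e) in stimuli if i == "A"]

    # Orthogonal: both dimensions vary independently, so all four stimuli occur.
    orthogonal = stimuli

    # Correlated: the two dimensions covary perfectly.
    correlated = [("A", "happy"), ("B", "sad")]

    for name, block in [("control", control), ("orthogonal", orthogonal),
                        ("correlated", correlated)]:
        print(name, block)

    # Garner interference is inferred when mean RT in the orthogonal block exceeds
    # mean RT in the control (or correlated) block.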

Conversely, there is also evidence that facial expressions have an influence on the perception and recognition of familiarity (Baudouin et al., 2000a; Nagayama, 1999). In addition, differential effects were found for personally familiar and famous versus unfamiliar faces when performing a familiarity discrimination task (Endo et al., 1992). The recognition of personal familiarity was facilitated when faces displayed a neutral expression as compared to happy and angry expressions. In contrast, famous faces were recognized faster with a happy expression. It was argued by the authors that a neutral expression is more frequently seen on personally familiar faces, whereas famous faces are more often seen with a happy expression.

Although all of these studies suggest an interaction of the perception of facial expressions and facial familiarity in one or the other direction, some of them suffer from methodological insufficiencies. In the study of Schweinberger and Soukup (1998) a small stimulus set was used, displaying only two different individuals. The paradigm of selective attention as introduced by Garner (1976) was originally designed to explore the perception of simple stimulus dimensions such as shape or colour. Therefore, it is not designed to handle the processing of such complex facial information with overlapping features. A detailed and critical review of the implementation of the Garner paradigm in facial perception is given by Kaufmann (2002). A main problem arises from using a small set of facial stimuli. Facial expressions and facial identity share at least some overlapping features (Calder et al., 2000); therefore, it is not possible to increase the variability of the irrelevant dimension in the orthogonal condition without also affecting the variability of the relevant stimulus dimension. This latter increase of variability can lead to stimulus-based differences between the orthogonal and the control condition that are not based on interactions between the two processes. Another important objection, especially when questioning the interaction of facial expressions and facial identity, concerns the different picture-based strategies that can be used in the two tasks (Kaufmann, 2002). In the identity decision task, pictorial strategies might be used to discriminate the individuals based on non-facial cues like the overall contrast of the pictures. Such effective strategies may not be possible in the expression decision task, because information about expressiveness relies only on internal facial features. Accordingly, this will lead to increased variability of the relevant stimulus dimension in the orthogonal condition when compared to the irrelevant dimension, which should be the only one increased in variability. Kaufmann (2002) was not able to replicate the results of Schweinberger and Soukup (1998) when trying to address these problems of the Garner paradigm with faces as stimuli. However, when using a different paradigm an interaction may emerge.

A critical point regarding the study of Baudouin et al. (2000) is that the perceptual manipulation (concealed mouth or short presentation time) used to make facial expression discrimination harder may have affected the two particular expressions differently. Possibly, the mouth region is more important for the recognition of happiness than for the neutral expression. On the other hand, the perception of identity for familiar people relies more on internal facial features than it does for unfamiliar faces. This implies that the variability in the hard condition was not evenly distributed over the critical dimensions of facial familiarity and facial expression. Hence, an interaction of the hard/easy condition and familiarity in the expression discrimination task may arise because of the differential effect of the perceptual manipulation on the variability of the familiarity dimension. Evidently, more research is needed to clearly speak for or against an interaction of facial familiarity and facial expressions.

To briefly summarize, experimental data were cited which favour the independence of facial expressions and facial familiarity. This is underlined by clinical observations of patients who suffer from selective impairments of one or the other process. Electrophysiological and functional imaging studies point to separable neuronal subsystems underlying both processes. However, an interaction of facial expressions and facial familiarity is plausible from an evolutionary point of view and when considering the rich afferent and efferent linking within the brain. Temporal properties of the involved processes may also be important for a possible interaction. There is a strong body of evidence showing that facial expression information is processed early in the brain. Thus, other processes that are involved in face recognition might be affected by a top-down influence from this information. Most importantly, recent data suggest an interaction between the perception of facial expressions and facial familiarity. However, some of these studies suffer from methodological problems. Therefore, more research is needed to clarify the raised controversy.

The present dissertation is an attempt to elucidate this question with a two-choice RT paradigm. By means of ERPs, a closer look is taken at the temporal properties of the involved processes and the functional locus of interaction. All principles and the basis of the experimental paradigm will be explained in the following section 1.2.4. In addition, an overview is given of the ERP components that are important for the hypotheses of the experimental parts.

1.2.4. Mental Chronometry and Cognitive Psychophysiology

The experimental logic of the present dissertation is mainly based on the overall paradigm of mental chronometry together with cognitive psychophysiology. Therefore, it is important for the reader to get an introduction to this methodology in order to comprehend the experimental design, the working model, and the derived hypotheses (in the experimental part). Due to the narrow framework of the dissertation, this introductory overview is far from complete and will mention only the most important and relevant milestones in the history of mental chronometry and cognitive psychophysiology. Several important models can only be sketched roughly.

A major theme in the study of mental chronometry is the question of whether there are separable processing stages within the cognitive information processing system and how these stages communicate with each other. The assumption is that mental processes are time-consuming. The sum of all processing is reflected by behavioural data like the RT or error rate. The general experimental paradigm consists of a series of imperative stimuli (auditory, visual, or somatosensory) and a required response, which mostly has to be given as fast and as accurately as possible. Sometimes a warning stimulus is included ahead of the imperative stimulus, with or without information about the upcoming stimulus. Depending on the proposed model of cognitive processing, different hypotheses can be drawn concerning the dependent measures.

Mental chronometry (Posner, 1978) has a long history in the study of human information processing (Meyer, Osman, Irwin, & Yantis, 1988). As early as the 19th century, astronomers searched for ways to measure the speed of mental processes because of individual differences in the subjective measurement of the movement of stars. Bessel (1823, cited after Meyer et al., 1988), for instance, introduced a personal equation to measure the difference between these estimates for two different observers.

In 1850 Hermann von Helmholtz (cited after Meyer et al., 1988) introduced the simple RT procedure, making him the most important forerunner of modern mental chronometry and cognitive psychophysiology. With this procedure he was able to estimate the rate of neural conduction by considering the difference in RT between a simple reaction to a tactile stimulation of the lower limb and one to a stimulation of the upper limb. The mean RT of the latter task is somewhat shorter than that of the former. The RT difference is caused by the longer distance that the sensory nerve signal has to travel from the lower limbs.
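The logic of this estimate can be illustrated with a small worked example; the numbers below are purely hypothetical and are not Helmholtz’s original measurements.

    # Conduction velocity estimated from the RT difference between two stimulation sites
    # that differ in their distance from the brain (all values hypothetical).
    rt_upper_limb = 0.140   # s, mean simple RT after upper-limb stimulation
    rt_lower_limb = 0.155   # s, mean simple RT after lower-limb stimulation
    extra_distance = 0.90   # m, additional nerve path length for the lower limb

    velocity = extra_distance / (rt_lower_limb - rt_upper_limb)
    print(f"estimated conduction velocity: {velocity:.0f} m/s")   # 60 m/s with these numbers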

Another major development was the subtraction method and the introduction of the choice RT procedure by Donders (1868, cited after Meyer et al., 1988). His technique used three types of RT procedures in combination to calculate the durations of putative stimulus discrimination and response selection stages and of the simple motor response. By comparing the RT in the simple RT task (which is supposed to consist just of the motor process), the choice RT task (which includes all three stages in question), and a go/nogo RT task, which does not include the response selection stage, the duration of each of the three stages can be estimated. Some assumptions are necessary to apply this method. First, the stimulus discrimination and the response selection stage are in strict succession and combine additively. Second, any processing stage may be inserted or deleted in a pure fashion. These assumptions have major shortcomings, which were pointed out by Külpe (1893, cited after Meyer et al., 1988) and caused the method to fall out of favour. It became quiet in the field of mental chronometry after the criticism by Külpe, although there were still some important developments like the discovery of the psychological refractory period by Telford (1931, cited after Meyer et al., 1988), the research on perceptual and response competition by Stroop (1935, cited after Meyer et al., 1988), or the calculation of speed-accuracy tradeoff curves from movement control tasks by Woodworth (1899, cited after Meyer et al., 1988). A speed-accuracy tradeoff curve is a function of error rate versus RT in a given task. In general it reveals a tradeoff between accuracy and movement speed, showing that faster RTs lead to increased error rates and vice versa.
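A minimal worked example of the subtraction logic, with hypothetical mean RTs, may clarify how the stage durations are derived; the task composition follows the description above (simple RT = motor process only; go/nogo RT additionally includes stimulus discrimination; choice RT additionally includes response selection).

    # Hypothetical mean RTs (in seconds) for the three Donders tasks.
    rt_simple = 0.200
    rt_gonogo = 0.280
    rt_choice = 0.350

    motor_stage = rt_simple                            # 200 ms
    discrimination_stage = rt_gonogo - rt_simple       # 80 ms
    response_selection_stage = rt_choice - rt_gonogo   # 70 ms
    print(motor_stage, discrimination_stage, response_selection_stage)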

Stimulated by new developments in computer and communication science around the 1950s, the chronometric paradigm regained importance (Meyer et al., 1988). This was mainly a result of the use of new tools to collect and process data as well as to test models of human information processing. Picking up the ideas of Donders (1868, cited after Meyer et al., 1988), scientists searched for alternative methods to study the durations of human information processing stages. In 1969 Sternberg developed the additive-factor method (AFM) to analyze RT data. The AFM overcomes the problematic assumption of pure insertion of processing stages. At least two experimental factors have to be manipulated in order to draw conclusions from their RT effects on the involved stages. Additivity in RT is observed when both factors are manipulated simultaneously and the first factor affects the RT independently of the level of the second one. In contrast, an interaction is observed when one factor shows different effects on RT depending on the level of the second factor. If, for instance, two factors show additive effects on the RT, they may influence two different stages in the information processing chain. On the other hand, if two factors show an interaction on RT, they act on at least one processing stage in common. The AFM still has some problematic assumptions. It assumes serial stages without temporal overlap and a discrete output of the stages. Keeping in mind the rich interlinking of structures in the human brain, these assumptions seem implausible. Nevertheless, the AFM is a powerful tool to analyze RT data. Since its publication, innumerable studies have relied on this method. Sanders (1998) gives a good summary of the research on human performance, including the AFM. At least six independent and physiologically plausible processing stages have been identified: preprocessing, feature extraction, feature identification, response selection, motor programming, and motor adjustment (Sanders, 1980).
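The additive-factor logic can be illustrated with hypothetical cell means from a 2 x 2 design; the factor names and values below are invented for illustration only.

    import numpy as np

    # Rows: factor F1 (easy, hard); columns: factor F2 (compatible, incompatible); values in ms.
    rt_additive = np.array([[400., 450.],
                            [460., 510.]])    # F1 adds 60 ms at both levels of F2
    rt_interactive = np.array([[400., 450.],
                               [460., 560.]])  # the F1 effect depends on the level of F2

    def interaction_contrast(cell_means):
        # (hard, incompatible) - (hard, compatible) - (easy, incompatible) + (easy, compatible)
        return cell_means[1, 1] - cell_means[1, 0] - cell_means[0, 1] + cell_means[0, 0]

    print(interaction_contrast(rt_additive))      # 0 ms  -> additive effects: separate stages
    print(interaction_contrast(rt_interactive))   # 50 ms -> interaction: at least one common stage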

Besides the AFM there are also other models of human information processing. In contrast to the assumption of serial processing, their stages may overlap in time and work in parallel. The cascade model by McClelland (1979) assumes continuous information transmission between the distinct functional processing ‘levels’. The continuous information output from one processing level can be used by another processing level. This may enable processes to work in parallel. If a certain activation threshold is exceeded, a response is eventually executed. To this day, several results point to the existence of parallel processing stages (Abdel Rahman et al., 2000; Miller, 1982, 1983; Eriksen & Schultz, 1979). Another important issue is whether the information transmission between parallel processing stages consists of continuously increasing activation (McClelland, 1979) or of discrete chunks of information (Miller, 1982).
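The core idea of continuous transmission can be sketched with a toy simulation; the parameters and the simple exponential dynamics below are assumptions chosen for illustration and are not McClelland’s (1979) original formulation.

    # Two cascaded processing levels: each level continuously integrates the output of the
    # previous one, and a response is emitted once the final level crosses a threshold.
    dt = 1.0                     # time step in ms
    rate1, rate2 = 0.02, 0.03    # hypothetical rate constants (1/ms)
    threshold = 0.9
    level1, level2, t = 0.0, 0.0, 0.0

    while level2 < threshold:
        level1 += dt * rate1 * (1.0 - level1)      # level 1 approaches its asymptote
        level2 += dt * rate2 * (level1 - level2)   # level 2 continuously tracks level 1
        t += dt

    print(f"simulated response emitted after {t:.0f} ms")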

Although the methods for analyzing RT data described above are helpful for drawing hypotheses and conclusions about the underlying processing stages, they are still based only on the final output of many processing stages combined. In addition, the methods suffer from their more or less physiologically plausible assumptions. Psychophysiological measures were used within chronometric paradigms in the last quarter of the 20th century in the hope of obtaining more direct measures of the underlying processing stages. This so-called ‘marriage between psychophysiology and cognitive psychology’ (Coles, 1989) brought benefits by using ERPs, which are regarded as markers of physiological processes. ERPs are electrical potentials that are time-locked to a specific event and caused by the simultaneous activity of populations of neurons in the brain (Coles, Gratton, & Fabiani, 1989). If neurons have an optimal alignment, they form an electrical dipole and the signal can be recorded at scalp electrodes. However, the recorded electroencephalogram (EEG) reflects all electrical activity within the brain, which may not be related to a specific event. Usually, the event-related signal is not visible within the noisy EEG. Therefore, ERPs are derived from the EEG by averaging samples of the EEG around this particular event. Hence, the ERP signal is discriminated from the randomly varying background noise of the EEG. The result is a voltage-by-time function with positive and negative voltage peaks. Each peak or component can be described in numerous ways: according to polarity, latency, experimental or psychological preconditions, or topographical distribution on the head surface. In general, the ERP data are further processed offline by setting a baseline or applying a low-pass filter in order to increase the signal-to-noise ratio.
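The averaging step can be illustrated with a minimal sketch on a hypothetical single-channel recording; sampling rate, epoch limits, and event timing are assumptions for illustration, and no artifact handling or filtering is shown.

    import numpy as np

    fs = 500                                    # sampling rate in Hz (assumed)
    eeg = np.random.randn(60 * fs)              # 60 s of hypothetical single-channel EEG (in µV)
    event_samples = np.arange(fs, 59 * fs, fs)  # one hypothetical event marker per second

    pre, post = int(0.1 * fs), int(0.6 * fs)    # epoch from -100 ms to +600 ms around each event
    epochs = np.stack([eeg[s - pre:s + post] for s in event_samples])

    erp = epochs.mean(axis=0)                   # averaging attenuates activity not time-locked to the event
    erp -= erp[:pre].mean()                     # baseline correction using the pre-stimulus interval
    time = (np.arange(-pre, post) / fs) * 1000  # time axis in ms relative to stimulus onset
    print(time.shape, erp.shape)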

There are different possibilities for quantifying components so that they can be used as dependent measures in an experimental design. One possible latency measure is the onset latency of a component, which represents the time in ms from the onset of the imperative stimulus to the onset of the component. The onset is defined in a certain way, e.g. as the point in time when the signal exceeds a threshold. Another latency measure is the peak latency, the time in ms from the onset of the imperative stimulus to the highest peak of a component. A third latency measure is the time in ms from the onset of a particular component to the overt response. In addition to the latency measures, components can be described by using the baseline-to-peak amplitude (in µV) or an area measure obtained by calculating the mean amplitude (in µV) in a defined time range.
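The measures just listed can be computed directly from an averaged waveform. The sketch below uses a toy waveform and a fixed onset threshold purely for illustration; search windows and criteria in real analyses are chosen per component and per study.

    import numpy as np

    fs = 500
    time = np.arange(-0.1, 0.6, 1 / fs) * 1000     # ms relative to stimulus onset
    erp = -5 * np.exp(-((time - 170) / 30) ** 2)   # toy waveform with a negative peak near 170 ms

    window = (time >= 130) & (time <= 210)         # search window for the component
    peak_idx = np.argmin(erp[window])              # most negative sample for a negative component
    peak_latency = time[window][peak_idx]          # peak latency in ms
    peak_amplitude = erp[window][peak_idx]         # baseline-to-peak amplitude in µV
    mean_amplitude = erp[window].mean()            # area (mean amplitude) measure in µV

    onset_idx = np.argmax(erp[window] < -2.5)      # onset: first sample below a -2.5 µV criterion
    onset_latency = time[window][onset_idx]
    print(peak_latency, peak_amplitude, mean_amplitude, onset_latency)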

In the present dissertation I will use three different ERP components. They are briefly outlined here in order to make the experimental design and hypotheses of this dissertation comprehensible.

The first ERP component to be considered is the N170, which is most prominent at posterior-temporal and occipital sites. This component is selective for faces (Bentin et al., 1996; Eimer & McCarthy, 1999), although it also emerges for single face parts like eyes (Eimer, 1998) and for other face-like and well-learned stimuli that are distinguishable on an item level (Carmel & Bentin, 2002; Gauthier & Tarr, 1997). In addition, it is delayed and enhanced in amplitude for inverted (Eimer & Holmes, 2002; Rossion et al., 1999; Rossion, Gauthier, Tarr et al., 2000) and contrast-reversed faces (Itier & Taylor, 2002). These results indicate that the N170 is associated with the formation of a visual representation of a face-like stimulus and may reflect the functional process of structural face encoding (Eimer, 2000; Bentin et al., 1996) as denoted by Bruce and Young (1986). Another frequent finding concerning the N170 is its insensitivity to facial familiarity and facial expressions (Eimer & Holmes, 2002; Bentin & Deouell, 2000). Owing to these properties, I propose the N170 to be the first functional marker in the working model outlined below, which is supposed to be linked to the initial structural encoding process of a face (see Bruce & Young, 1986).

As another functional marker I use the P300 component, a positive centroparietal deflection peaking not earlier than 300 ms post-stimulus. Its amplitude and latency are informative about the nature and timing of a participant’s cognitive response to a stimulus (Johnson, 1986). Usually it is elicited in an oddball task, where the amplitude increases for infrequent stimuli of the auditory (Duncan-Johnson & Donchin, 1977) or visual modality. At this point it should be noted that novel non-target stimuli sometimes elicit an earlier, more frontally distributed positive deflection that has been referred to as the ‘P3a’ (Snyder & Hillyard, 1976; Harmony, Bernal, Fernández et al., 2000). It has been distinguished from the ‘P3b’ or P300 as described here, which usually occurs after target stimuli. Properties of the P300, like an increased amplitude depending on stimulus probability, on attention (Dujardin, Derambure, Bourriez et al., 1993), on stimulus complexity (Verbaten, 1983), or on subjective stimulus value (Begleiter, Porjesz, Chou, & Aunon, 1983), have entered different functional assumptions and models. A frequently adopted view is that the P300 reflects the online updating of working memory (Donchin & Coles, 1988; Sommer & Matt, 1990). It is undisputed that the P300 also reflects attentional function (Holdstock & Rugg, 1995; Dujardin et al., 1993). Most importantly, it is elicited after the categorization of task-relevant stimuli. Therefore, its peak latency is highly correlated with the time needed to evaluate the stimulus on the task-relevant dimension. Findings that the P300 latency is affected by stimulus discriminability and degradation (Kutas, McCarthy, & Donchin, 1977; McCarthy & Donchin, 1983) indicate its contingency on perceptual stimulus evaluation. These last two mentioned properties of the P300 component are most crucial for the working model proposed below.

The third event-related component, the LRP (Coles, 1989), is used to index the preparation of the response hand (De Jong, Wierda, Mulder, & Mulder, 1988). It is derived from the readiness potential (Kornhuber & Deecke, 1965) recorded at the electrode sites C3’/C4’, which are placed over the left and right primary motor cortices (M1). When one or the other response hand is prepared, the negativity at the contralateral recording site increases, because M1 controls the contralateral side of the body. This asymmetry in the scalp potential recorded at C3’ and C4’ is isolated by a specific calculation procedure (as described below; or see Coles, 1989). The result is a negative-going LRP for correctly prepared movements and a positive deflection for incorrectly prepared movements. The generation of the LRP is at least partly ascribable to M1 (Eimer, 1998). This is supported by the observation of a positive LRP after correctly prepared foot movements (Brunia & Vingerhoets, 1980): the activated neurons in M1 that control the lower extremities are embedded in the central sulcus and generate an electrical dipole that projects towards the contralateral hemisphere. This results in increased negativity over the side ipsilateral to the movement, so that the calculation procedure yields a positive LRP.
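For orientation, the following is a hedged sketch of the commonly used averaging formulation of this derivation (Coles, 1989). It is an assumption that this matches the exact procedure described later in this dissertation; the variable names are illustrative.

```python
# Hedged sketch of the averaging method for deriving the LRP (Coles, 1989).
# Inputs are per-condition average waveforms at C3' and C4' (e.g. in µV);
# whether this exact formulation is the one used here is an assumption.
def lrp(c3_left_hand, c4_left_hand, c3_right_hand, c4_right_hand):
    """Average the contralateral-minus-ipsilateral difference over both hands.

    For a correctly prepared right-hand response C3' (contralateral) grows more
    negative than C4', and vice versa for the left hand, so correct preparation
    yields a negative-going LRP and incorrect preparation a positive deflection.
    """
    right_hand_diff = c3_right_hand - c4_right_hand   # contralateral - ipsilateral
    left_hand_diff = c4_left_hand - c3_left_hand      # contralateral - ipsilateral
    return (right_hand_diff + left_hand_diff) / 2.0
```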

Osman, Moore, and Ulrich (1995) proposed using the LRP to divide the information processing between stimulus and overt response into two intervals. The first interval, from stimulus presentation until the beginning of central response activation – that is, the onset of the LRP – is obtained from averages synchronized to the stimulus (S-LRP). It is informative about the time demands of [page 33↓]processes running before or during response activation (e.g. Leuthold, Sommer, & Ulrich, 1996). The second interval, from the onset of the LRP until the overt response, is obtained from averages synchronized to the response (LRP-R). It indicates the time demands of motor processes beyond central response activation. Depending on the processes affected by an experimental manipulation, latency differences can be expected in one or the other interval: if the manipulation acts on processes before or during central response preparation, the S-LRP interval will be affected; effects will be seen within the LRP-R interval if motor processes beyond hand activation are affected.
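As an illustration of this chronometric partition, the sketch below scores the two intervals from a stimulus-locked and a response-locked LRP. The onset criterion (a fixed µV threshold) and all parameter values are assumptions for demonstration, not the scoring method used in the experiments.

```python
# Illustrative sketch of the two intervals proposed by Osman, Moore, and
# Ulrich (1995); onset criterion and parameters are assumed for demonstration.
import numpy as np

def onset_sample(lrp_wave, threshold_uv=-0.5):
    """First sample at which the LRP exceeds the (negative) onset criterion."""
    below = np.nonzero(lrp_wave <= threshold_uv)[0]
    return int(below[0]) if below.size else None

def s_lrp_interval_ms(stim_locked_lrp, srate):
    """Stimulus onset (index 0) to LRP onset: processes up to and including
    central response activation."""
    onset = onset_sample(stim_locked_lrp)
    return onset * 1000 / srate if onset is not None else None

def lrp_r_interval_ms(resp_locked_lrp, srate, response_index):
    """LRP onset to overt response (response_index = sample of the keypress in
    the response-locked average): motor processes beyond central activation."""
    onset = onset_sample(resp_locked_lrp)
    return (response_index - onset) * 1000 / srate if onset is not None else None
```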

1.3. Experimental design, model and hypotheses

1.3.1. Working Model

As outlined in section 1.2.3., there is reason to believe that the processing of facial expressions and facial identity may be interdependent under some circumstances. Many aspects of such an interdependency remain unresolved, as do the prerequisites and conditions under which an interaction might be observed. In the present dissertation I address the question of whether the recognition of facial expressions and of facial identity can interact under certain conditions. Most importantly, the functional locus of this interdependency is at issue. Performance data and ERP components can help to pinpoint this functional locus within the cognitive system. Because of the paradigm used and the tasks participants had to perform, it is possible to take a closer look at the functional chronometry of the perception of facial expressions and of facial identity independently of each other.

In order to clearly outline the hypotheses and derive predictions for all following experiments, a simple working model will be introduced. It is mainly based on the functional model of face recognition by Bruce and Young (1986), which has already been described above. The working model integrates the two experimental tasks used in the experiments with the dependent measures. In the experimental parts, participants had to perform a two-choice RT task, either discriminating between two equiprobable facial expressions on separately presented portraits or discriminating facial familiarity. In the former task, facial familiarity was varied independently of expression – that is, half of the presented portraits belonged to familiar and half to unfamiliar faces. In the latter task the other dimension – facial expression – was varied independently: each depicted person was shown with either two or three different facial expressions. Within the working model, several functional processes are assumed that may be affected (Figure 3).


[page 34↓]

Figure 3. Proposal for a functional working model on which the following experiments are based, illustrated by an expression discrimination task.

Firstly, the presented face has to be perceived and categorized as a face in the so-called structural encoding stage. As outlined above, the peak latency of the N170 component reflects the temporal properties of this processing stage. Then the task-relevant stimulus information has to be extracted in order to perform the discrimination task. Here, the P300 component serves as a functional marker, since it is related to the perceptual evaluation of task-relevant information. It is worth noting that, according to the model by Bruce and Young (1986), expression analysis and the recognition of facial familiarity are thought to be independent processes operating in parallel. Thus, depending on the experimental task, the irrelevant dimension (e.g. familiarity within an expression discrimination task) may still be recognized automatically and in parallel. In addition, it has to be mentioned that the recognition of facial expressions is in most cases somewhat faster than the recognition of facial identity; familiarity may therefore only be able to affect expression discrimination when that task is slowed down (cf. Baudouin et al., 2000). In a third assumed stage the response hand has to be selected; the LRP reflects the preparation of a specific response side. After motor preparation and adjustment the overt response is executed.

1.3.2. Hypotheses

The main hypothesis of this dissertation is that there is a facilitative interaction between the recognition of facial expressions and of facial familiarity. This would contrast with the assumptions of the functional model of face recognition (Bruce & Young, 1986). The interaction should manifest itself in the behavioural data. Admittedly, in Bruce and Young’s model both processes are interlinked through the cognitive system, and an interaction is not [page 35↓]excluded per se. Thus, the temporal properties of the two involved processes are an important prerequisite for an interaction. The present dissertation also asks under which circumstances the predicted facilitative interaction can emerge. In addition, an attempt is made to localize the functional processing stage associated with the expected facilitative interaction.

To make the chronometric approach of the experimental parts clear, its logic will be outlined here in more detail. Several possibilities have to be considered when attempting to identify the functional processing stage at which a facilitative interaction may occur. If faster RTs are present in a specific condition, this facilitation should be reflected in the ERPs by shortened peak or onset latencies. An earlier peak of the N170 component would point to a facilitation of early perceptual processing – the structural encoding stage. If late, task-relevant perceptual processing constitutes the locus of confluence, an earlier P300 peak latency should be present in the facilitated condition; in addition, the peak latencies of all earlier components – in this case the N170 – have to be the same in this condition, because otherwise the facilitative effect may simply have propagated from an earlier processing stage to the next one. The facilitative interaction between the two processes may also act on the response selection stage. In this case, shorter onset latencies of the S-LRP should be present in the facilitated condition. Again, this interpretation is only valid if all earlier components do not differ between the facilitated and non-facilitated conditions. The last functional locus of interaction to be proposed is motor preparation beyond hand selection; if this process is facilitated, the interval between LRP onset and overt response (LRP-R) should be shortened.
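The decision logic just described can be summarized in a small sketch: given mean latencies for a facilitated and a non-facilitated condition, it reports the earliest stage whose latency is shortened while all preceding markers remain unchanged. The stage order reflects the working model; the fixed tolerance is an illustrative assumption, since in practice each latency difference would be evaluated statistically rather than against a fixed cutoff.

```python
# Hedged sketch of the chronometric localization logic outlined above.
# Latencies are mean values in ms per condition; the tolerance is illustrative.
STAGES = ["N170 peak", "P300 peak", "S-LRP onset", "LRP-R interval"]

def localize_facilitation(facilitated, control, tolerance_ms=5.0):
    """facilitated / control: dicts mapping each stage name to a latency in ms."""
    for i, stage in enumerate(STAGES):
        diff = control[stage] - facilitated[stage]
        if diff > tolerance_ms:
            # Earlier markers must not differ; otherwise the effect may simply
            # have propagated from a preceding processing stage.
            earlier_unchanged = all(
                abs(control[s] - facilitated[s]) <= tolerance_ms for s in STAGES[:i]
            )
            return stage if earlier_unchanged else f"propagated from before {stage}"
    return "no facilitation at the measured stages"
```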

Since the empirical background, the main hypothesis, and the basic logic of the experimental paradigm have now been outlined, a short overview of the following experimental parts is given. Experimental Part I will address the question of whether there is an interaction between facial familiarity and the discrimination of facial expressions. Experimental Part II reverses the question and asks whether there is an interaction between facial expressions and the discrimination of facial familiarity. After the experimental parts, all results will be discussed thoroughly in a final general discussion, and a perspective for further research will be given.

