Neuropsychology of facial identity and facial expression in children with mental retardation
|Article code||Year||English article||Persian translation||Word count|
|37660||2005||8-page PDF||Available to order||3126 words|
Publisher: Elsevier - Science Direct
Journal : Research in Developmental Disabilities, Volume 26, Issue 1, January–February 2005, Pages 33–40
Abstract

We indirectly determined how children with mental retardation analyze facial identity and facial expression, and whether these analyses of identity and expression were controlled by independent cognitive processes. In a reaction time study, 20 children with mild mental retardation were required to determine if simultaneously presented photographs of pairs of faces were pictures of the same person or of different people (identity matching), or to determine if the pairs of faces showed the same expressions or different expressions (expression matching). Faces of familiar and unfamiliar people were used as stimuli. For identity matching, reaction times were faster for familiar faces than for unfamiliar faces. For expression matching, there was no difference between familiar and unfamiliar faces. These results are consistent with neuropsychological findings from the general population indicating that analyses of facial expressions proceed independently from processes involved in establishing a person's identity. Our results suggest that the basic neuropsychological mechanisms that underlie cognitive processing of facial identity and facial expressions in children with mental retardation may be similar to those of people in the general population.
1. Introduction

There is a growing interest in the neuropsychology of facial identity and facial expression. Early research in the neuropsychology of facial recognition and the coding of expressions led scientists to conclude that there was an interrelationship between the recognition of faces and the interpretation of facial expressions. Contemporary researchers, however, have provided evidence that specialized, independent neuronal pathways direct the processing of visual information and affective coding. Researchers investigating the cognitive processes involved in facial recognition and the interpretation of facial expressions have produced compelling theories regarding brain operations. For example, studying cognitive processing at the neuronal level, Perrett et al. (1984, 1986) found that the neurons that respond to facial expression do not respond to facial identification, and vice versa. Additional support for the independence of identity and expression recognition functions comes from studies of people with brain injuries. For example, several researchers have examined the effects of brain injuries on a person's ability to recognize faces and distinguish emotions. Studies of people with unilateral cerebral lesions (Bowers, Bauer, Coslett, & Heilman, 1985; Cicone, Wapner, & Gardner, 1980; Etcoff, 1984), right hemisphere damage (Mandal, Asthana, & Maitra, 1998), and nonlocalized brain damage (Kurucz & Feldmar, 1979; Kurucz, Feldmar, & Werner, 1979) support the conclusion that the mechanisms involved in identification and expression recognition are indeed separate. In an early model of face processing, Hay and Young (1982) suggested that facial recognition and facial expressions may be based on separate cognitive processing systems.
Later, Bruce and Young (1986) developed a functional model of the perceptual and cognitive processes involved in facial recognition and provided further details as to how information flows within the cognitive system. This model of face processing is based on the theory that analysis of facial expression, facial speech analysis, and directed visual processing proceed independently from the analysis of facial identity. The model proposes that faces are structurally encoded and then stored for subsequent retrieval. Structural encoding takes place in two ways. The first process is creation of view-centered descriptions, which are used for expression analysis, facial speech analysis, and directed visual processing. The second process is the generation of expression-independent descriptions, which are used for directed visual processing, face recognition units, person identity nodes, and name generation. Expression analysis, facial speech analysis, directed visual processing, person identity nodes, and face recognition units represent separate components of the cognitive system for processing faces. This model also suggests familiarity is established by identity-specific semantic and name codes, and is determined by face recognition units which do not directly affect expression analysis. Therefore, when reaction time is measured for expression matching, familiarity should have no bearing on the outcome. However, familiarity should influence reaction times for identity recognition because the processes involved in facial recognition are directly connected to familiarity of the face. Thus, this model relies on a dual mechanism for analyzing facial expressions and facial identity. In a test of this dual-mechanism hypothesis, Young, McWeeny, Hay, and Ellis (1986) measured reaction times of subjects from the general population for identity matching and expression matching tasks. 
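The two independent routes described above can be summarized schematically. The sketch below is our own illustrative encoding of the Bruce and Young (1986) components named in the text; the model itself is a box-and-arrow account, not an algorithm, so the code only records which components sit on which route and why familiarity is predicted to matter for one task but not the other.

```python
# Schematic sketch (our own, not from the paper) of the two processing
# routes in the Bruce and Young (1986) model, as described in the text.
ROUTES = {
    # View-centred descriptions feed these components:
    "view_centred": [
        "expression_analysis",
        "facial_speech_analysis",
        "directed_visual_processing",
    ],
    # Expression-independent descriptions feed these components:
    "expression_independent": [
        "directed_visual_processing",
        "face_recognition_units",
        "person_identity_nodes",
        "name_generation",
    ],
}

def familiarity_can_influence(component: str) -> bool:
    """Familiarity acts through face recognition units, which sit only on
    the expression-independent (identity) route; a component is open to
    familiarity effects only if it lies on that route."""
    return component in ROUTES["expression_independent"]

# The model's prediction for the reaction-time experiment:
# identity matching engages face recognition units (familiarity matters),
# expression matching engages expression analysis (familiarity does not).
assert familiarity_can_influence("face_recognition_units")
assert not familiarity_can_influence("expression_analysis")
```

This is only a bookkeeping device for the prediction tested below: familiarity should speed identity matching but leave expression matching unchanged.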
Subjects were simultaneously presented with pairs of photographs of familiar and unfamiliar people, and were asked to determine if the pairs were of the same person or different persons (identity matching), or if the photographs depicted the same expression or different expressions (expression matching). They noted that if the dual-mechanism hypothesis of Bruce and Young (1986) was valid, then the reaction times would be faster for familiar than for unfamiliar faces in the identity matching tasks but would be the same for familiar and unfamiliar faces in expression analysis. As predicted by the Bruce and Young model, they found faster reaction times for identity matching when compared to expression matching, faster reaction times for same as opposed to different pairs of faces, and faster reaction times for familiar than for unfamiliar faces. Subsequent research with subjects from the general population as well as subjects with various organic and neurological disorders confirmed the predictions of the Bruce and Young model (Wacholtz, 1996). For example, Della Sala, Muggia, Spinnler, and Zuffi (1995) tested the model and reported dissociations between the processing of familiar and unfamiliar faces by people with Alzheimer’s disease. They concluded that their results supported Bruce and Young’s (1986) hypothesis that distinct pathways are involved in the processing of familiar and unfamiliar faces. People with mental retardation have been shown to have problems in correctly recognizing facial expressions of emotion (Gray, Fraser, & Leudar, 1983; McAlpine, Kendall, & Singh, 1991; Rojahn, Lederer, & Tasse, 1995). However, none of these studies investigated the underlying cognitive processes that may be involved. 
Given the extensive research supporting the Bruce and Young (1986) model of face processing, we wondered if the same or similar cognitive processing of facial expressions and facial identity would be found in children with mental retardation as in people in the general population. Therefore, we investigated the independence of the cognitive processing mechanisms involved in the analysis of facial expressions and facial identity using a reaction time methodology. We measured reaction times when children with mental retardation responded to simultaneously presented pairs of photographs of familiar or unfamiliar faces with respect to identity (same person or different person) or expression (same expression or different expression). Based on the Bruce and Young model of face processing, we hypothesized that the reaction time for identity matching would be shorter for familiar than for unfamiliar faces, and that the reaction time for expression matching would be the same for familiar and unfamiliar faces.
Results

Response latency was the first dependent variable of interest, defined as the time elapsed between the presentation of the stimuli and the subject's response. Response latency for each item was entered into the analysis, allowing for the calculation of a separate mean for each cell in the 2 × 2 × 2 matrix (Task [expression or identity matching] × Familiarity [familiar or unfamiliar faces] × Response [same expression/person or different expression/person]). Response latency data were entered into a three-factor analysis of variance (ANOVA). Each of the three factors (Task, Familiarity, Response) was a repeated measures factor. The model included analysis of main effects for each of the three factors and of the Task × Familiarity interaction, which was the only interaction related to our hypotheses. The ANOVA yielded significant main effects for each of the factors: a smaller mean response latency for identity matching than for expression matching, F(1, 19) = 3782.67, P < 0.0001; a smaller mean response latency for familiar than for unfamiliar faces, F(1, 19) = 91.08, P < 0.0001; and a smaller mean response latency for pairs that were the same than for pairs that were different, F(1, 19) = 72.09, P < 0.0001. Means for each of the cells in the matrix are presented in Table 2.

Table 2. Mean latency (in seconds) for correct responses to same and different pairs of familiar and unfamiliar faces in identity and expression matching tasks

|Pair type|Identity matching, familiar faces|Identity matching, unfamiliar faces|Expression matching, familiar faces|Expression matching, unfamiliar faces|
|Same pairs|1.77|1.99|2.74|2.77|
|Different pairs|1.91|2.17|2.82|2.83|
|Overall|1.85|2.08|2.78|2.80|

Of more interest to the hypotheses under investigation, however, was the result of the Task × Familiarity interaction. The interaction effect was significant, F(1, 19) = 65.30, P < 0.0001.
Post hoc least squares means comparisons revealed that pairs of familiar faces and pairs of unfamiliar faces did not differ significantly (P = .30) on the expression task, while the related comparison for the identity task demonstrated that familiar face pairs were associated with smaller response latencies than unfamiliar face pairs (P < 0.0001).

Similar analyses were conducted with error rates as the dependent variable. Error rates were defined as the percentage of trials scored incorrect. The analyses yielded findings virtually identical to those reported for response latency. The ANOVA yielded significant main effects for each of the factors: a smaller mean error rate for identity matching than for expression matching, F(1, 19) = 9500.86, P < 0.0001; a smaller mean error rate for familiar than for unfamiliar faces, F(1, 19) = 1838.23, P < 0.0001; and a smaller mean error rate for pairs that were the same than for pairs that were different, F(1, 19) = 245.01, P < 0.0001. Means for each of the cells in the matrix are presented in Table 3.

Table 3. Mean error rates (% incorrect) for responses to same and different pairs of familiar and unfamiliar faces in identity and expression matching tasks

|Pair type|Identity matching, familiar faces|Identity matching, unfamiliar faces|Expression matching, familiar faces|Expression matching, unfamiliar faces|
|Same pairs|33.64|49.36|64.73|67.39|
|Different pairs|33.01|57.26|69.82|71.04|
|Overall|33.33|53.31|67.27|69.21|

The Task × Familiarity interaction was likewise significant, with a pattern of means that matched those reported for the response latency variable. In the case of error rates, means for the familiar and unfamiliar pairs differed significantly on both the identity and expression tasks. However, the magnitude of the difference was much greater for the identity task, yielding the significant interaction effect.
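The shape of the Task × Familiarity interaction can be read directly off the overall means reported in Tables 2 and 3. The short sketch below simply recomputes the familiarity effect (unfamiliar minus familiar) within each task from those reported values; it reproduces the descriptive pattern only, not the ANOVA itself.

```python
# Familiarity effect (unfamiliar minus familiar) within each task,
# computed from the "Overall" rows of Tables 2 and 3.

latency = {  # mean latency in seconds (Table 2)
    ("identity", "familiar"): 1.85,
    ("identity", "unfamiliar"): 2.08,
    ("expression", "familiar"): 2.78,
    ("expression", "unfamiliar"): 2.80,
}
error_rate = {  # mean % incorrect (Table 3)
    ("identity", "familiar"): 33.33,
    ("identity", "unfamiliar"): 53.31,
    ("expression", "familiar"): 67.27,
    ("expression", "unfamiliar"): 69.21,
}

def familiarity_effect(means, task):
    """Unfamiliar-face mean minus familiar-face mean for a given task."""
    return means[(task, "unfamiliar")] - means[(task, "familiar")]

for task in ("identity", "expression"):
    print(task,
          round(familiarity_effect(latency, task), 2), "s;",
          round(familiarity_effect(error_rate, task), 2), "% errors")
# identity:   0.23 s and 19.98 percentage points
# expression: 0.02 s and 1.94 percentage points
```

The familiarity advantage is about an order of magnitude larger for identity matching than for expression matching on both measures, which is the pattern driving the significant Task × Familiarity interaction.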