Perception of global facial geometry in the inversion effect and prosopagnosia
Publisher: Elsevier - Science Direct
Journal: Neuropsychologia, Volume 41, Issue 12, 2003, Pages 1703–1711
Abstract

We investigated how efficiently combinations of positional shifts in facial features were perceived and whether the effects of combinations on the overall geometry of the face were reflected in discriminative performance. We moved the eyes closer together or further apart, and moved the mouth up or down. Trials with combinations of changes to both the mouth and the eyes were contrasted with trials with single changes to either the mouth or the eyes. As a contrast, we also examined combinations of changes in eye colour (brightness) and the same spatial manipulations. In addition, we specifically contrasted spatial combinations that more severely distorted the original triangular relation of the mouth and eyes (e.g. eyes closer and mouth down) to those that better preserved the original aspect ratio (e.g. eyes farther and mouth down). This we termed the “geometric context effect”. We found that combinations of two spatial changes were detected more quickly and accurately by normal subjects viewing upright faces but not when faces were inverted. In contrast, combinations of spatial shifts and eye colour changes showed no advantage over faces with only one type of change. Combinations of spatial changes that distorted overall facial geometry more were detected more efficiently than less distorting combinations, showing that the spatial shifts were perceived in the context of the global facial structure. Again, this was found for upright but not inverted faces. We also tested a prosopagnosic patient, who showed the advantage for two spatial changes over one but lacked this geometric context effect, implying that she did not integrate local spatial information into overall facial structure.
1. Introduction

Though all faces share the same basic structure, with only subtle variations distinguishing one from another, most humans recognize familiar faces easily. This ease belies the complexity of both the stimulus and the underlying perceptual process. That process is likely highly specialized above and beyond general perceptual mechanisms involved in object vision, though not necessarily unique to faces (Diamond & Carey, 1986; Gauthier, Behrmann, & Tarr, 1999). Evidence for this specialization derives from functional imaging studies (Haxby, Hoffman, & Gobbini, 2000; Kanwisher, McDermott, & Chun, 1997; Kanwisher, Stanley, & Harris, 1999; Puce, Allison, Gore, & McCarthy, 1995) and observations of patients who recognize objects but not faces (prosopagnosia) (Damasio, Damasio, & Van Hoesen, 1982; Hecaen & Angelergues, 1962; McNeil & Warrington, 1993). Studies of the inversion effect also provide support for a specialized processing system (Valentine, 1988; Yin, 1969): a 180° stimulus rotation reportedly degrades perception and recognition of faces more than that of other stimuli also encountered in a habitual orientation, even when matched for level of discriminability. The nature of the specialized perceptual process used with faces remains elusive, however. The ‘dual-mode hypothesis’ (Bartlett & Searcy, 1993) contrasts a more generic componential (serial, feature-by-feature, analytic) process with a non-componential process. The nature of the non-componential process remains uncertain. However, several studies have suggested that it may involve some interactive processing of facial elements or components. Sergent demonstrated that chin shape and the vertical position of the internal facial features interacted in similarity judgments and reaction times in same–different responses for upright but not inverted schematic faces (Sergent, 1984a, 1984b), and that this interaction was lacking in one prosopagnosic patient (Sergent & Villemure, 1989).
Similarly, others have found that changes in one feature affected the likelihood of detecting changes in another feature in upright faces but not inverted faces (Farah, Wilson, Drain, & Tanaka, 1998). Effects of facial context have also been found. Rhodes, Brake, and Atkinson (1993) found that recognition of a single altered feature was impaired by inversion if the feature was seen in a face, but not if the feature was presented in isolation. Recognition of features of a schematic face was better when features were presented again in the whole face than in isolation, a difference not found for elements of scrambled faces, inverted faces, or houses (Tanaka & Farah, 1993; Tanaka & Sengco, 1997). Recognition of facial features is better when the features are viewed again in the same face than in different faces; again, this difference is not found with inverted faces or houses (Tanaka & Sengco, 1997). Recognition is also better if the features initially studied were seen in a normal face than in a decomposed face (features separated in the display), inverted face, or inverted decomposed face (Farah, Tanaka, & Drain, 1995). These studies all suggest that the perception of one facial element influences the perception of another in upright but not inverted faces. This interaction is consistent with the assertion that upright faces are perceived as wholes rather than as a collage of individual elements. The assertion that faces are perceived as unified complex structures has implications for the detection of local structural changes in the composition of faces.
We and others have shown that processing of the second-order, or coordinate, relations (relative spatial position of features) of a face is impaired by stimulus inversion in normal subjects (Barton, Keenan, & Bass, 2001; Bruce, Doyle, Dench, & Burton, 1991; Cooper & Wojan, 2000; Leder & Bruce, 1998), and that this inversion effect occurs at a perceptual encoding stage rather than at a later retrieval or memory stage (Barton et al., 2001; Freire, Lee, & Symons, 2000). We have also shown that prosopagnosic patients whose lesions involve the right “fusiform face area” are deficient in their perception of these second-order spatial relations in faces (Barton, Press, Keenan, & O’Connor, 2002). Of further interest, though, is how combinations of such changes are perceived. Consider the changes we have used: altered interocular distance and vertical mouth position. If changes in these spatial elements are detected sequentially and independently of each other, then the speed and accuracy of detecting a face in which both are altered should not be better than the best performance in detecting a face with a change in either one or the other (but not both). Parallel processing, on the other hand, should offer some advantage in reaction time to faces with combined changes. This has been shown by others in studies using schematic faces, for example (Sergent, 1984b). However, parallel processing need not imply that the two elements are perceived in the context of a higher level complex structure. To show the effect of context, the combined effects of the changes on overall facial geometry should have some impact upon performance. In our simple example, if we take the eyes and the mouth as defining the apices of a triangle, different combinations of shifts will have different effects upon the geometry of this triangle. 
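The serial-versus-parallel prediction in this paragraph can be made concrete with a toy simulation. This is a minimal sketch under invented reaction-time distributions (the means and spreads are illustrative, not the study's data): with serial, independent inspection, a face with two changes is detected as soon as the first inspected feature is found altered, so its reaction time matches a single-change trial; with parallel processing, the faster of two simultaneous detections ends the trial, producing a combination advantage.

```python
import random

random.seed(0)


def draw_rt():
    # Hypothetical detection time (ms) for a single spatial change;
    # the parameters are invented for illustration only.
    return random.gauss(2400, 300)


N = 10_000

# Serial/independent model: the two-change face terminates on whichever
# feature is inspected first, so its RT distribution equals that of a
# single-change trial -- no combination advantage is predicted.
serial_mean = sum(draw_rt() for _ in range(N)) / N

# Parallel model: both changes are evaluated at once and the faster
# detection wins (min of two draws), so mean RT drops.
parallel_mean = sum(min(draw_rt(), draw_rt()) for _ in range(N)) / N

print(round(serial_mean), round(parallel_mean))
```

With these parameters the parallel model's mean is faster by roughly the expected minimum-of-two gain, which is the pattern the combined-change trials are designed to test for.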
For example, decreasing interocular distance and lowering the mouth would create a much narrower, more distorted triangle, whereas the combination of reduced interocular distance and a raised mouth would tend to create a triangle with an aspect ratio more similar to the original face (Fig. 1). Thus, if the spatial relations of a face are perceived in the context of the whole facial structure, the first combination of eye and mouth changes should be more easily detected than the second combination.
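The triangle argument can also be sketched numerically. The following is a minimal illustration with hypothetical pixel coordinates (none taken from the study's stimuli): treating the two eyes and the mouth as the apices of a triangle, the width-to-height aspect ratio shows why "eyes closer + mouth down" distorts the original configuration more than "eyes closer + mouth up".

```python
# Hypothetical feature layout (pixels); illustrative, not the study's stimuli.
EYE_Y, MOUTH_Y = 0.0, 100.0   # vertical positions of eyes and mouth
EYE_HALF_SEP = 40.0           # half the interocular distance


def aspect_ratio(eye_shift, mouth_shift):
    """Width/height of the eyes-mouth triangle after shifting features.

    eye_shift > 0 moves each eye outward (eyes farther apart);
    mouth_shift > 0 moves the mouth downward.
    """
    width = 2 * (EYE_HALF_SEP + eye_shift)
    height = (MOUTH_Y + mouth_shift) - EYE_Y
    return width / height


original = aspect_ratio(0, 0)        # 80/100 = 0.8
# "Eyes closer + mouth down": both shifts narrow the triangle.
distorting = aspect_ratio(-10, 10)   # 60/110 ≈ 0.545
# "Eyes closer + mouth up": the shifts partly cancel in aspect ratio.
preserving = aspect_ratio(-10, -10)  # 60/90 ≈ 0.667

# The "distorting" combination departs further from the original shape.
assert abs(preserving - original) < abs(distorting - original)
```

The same individual shifts thus produce either a strongly or a weakly distorted global configuration depending on how they combine, which is the basis of the geometric context effect.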

Results

In both experiments 1 and 2, normal subjects detected the altered face more rapidly and accurately (about a 5% advantage) when there were two spatial changes rather than just one (Table 1 and Table 2). However, this was true only for upright faces, not for inverted faces. Normal subjects did not show an advantage for colour/spatial combinations over single changes to either of these elements alone. In experiment 1, this may have been because eye colour was too easily seen, with much better reaction times and error rates than for the spatial changes. However, even when single colour and spatial changes were made more comparable in experiment 2, there was still no combination advantage.

Table 1. Results, experiment 1

Reaction time (ms)
|Condition|Upright mean|Upright S.E.|Upright P|Inverted mean|Inverted S.E.|Inverted P|
|Colour/spatial: single change|2035|||1903|||
|Colour/spatial: colour + spatial|2019|||1972|||
|Colour/spatial: difference|17|62|n.s.|−69|65|n.s.|
|Spatial/spatial: single spatial|2451|||2691|||
|Spatial/spatial: two spatial|2300|||2746|||
|Spatial/spatial: difference|151|49|<0.003|−55|47|n.s.|
|Geometry: most distorted|2232|||2747|||
|Geometry: least distorted|2368|||2744|||
|Geometry: difference|136|70|<0.03|3|58|n.s.|

Accuracy (frequency)
|Condition|Upright mean|Upright S.E.|Upright P|Inverted mean|Inverted S.E.|Inverted P|
|Colour/spatial: single change|0.84|||0.77|||
|Colour/spatial: colour + spatial|0.83|||0.74|||
|Colour/spatial: difference|0.00|0.02|n.s.|−0.03|0.02|n.s.|
|Spatial/spatial: single spatial|0.75|||0.57|||
|Spatial/spatial: two spatial|0.78|||0.57|||
|Spatial/spatial: difference|0.04|0.02|<0.01|0.00|0.01|n.s.|
|Geometry: most distorted|0.77|||0.60|||
|Geometry: least distorted|0.80|||0.55|||
|Geometry: difference|−0.03|0.02|n.s.|0.04|0.03|n.s.|

Table 2. Results, experiment 2

Reaction time (ms)
|Condition|Upright mean|Upright S.E.|Upright P|Inverted mean|Inverted S.E.|Inverted P|
|Colour/spatial: single change|2815|||2981|||
|Colour/spatial: colour + spatial|2646|||2799|||
|Colour/spatial: difference|169|113|n.s.|182|109|n.s.|
|Spatial/spatial: single spatial|2969|||3517|||
|Spatial/spatial: two spatial|2514|||3230|||
|Spatial/spatial: difference|455|245|<0.047|287|213|n.s.|
|Geometry: most distorted|2396|||3172|||
|Geometry: least distorted|2632|||3288|||
|Geometry: difference|236|133|<0.05|−116|90|n.s.|

Accuracy (frequency)
|Condition|Upright mean|Upright S.E.|Upright P|Inverted mean|Inverted S.E.|Inverted P|
|Colour/spatial: single change|0.81|||0.72|||
|Colour/spatial: colour + spatial|0.82|||0.74|||
|Colour/spatial: difference|0.01|0.02|n.s.|0.02|0.03|n.s.|
|Spatial/spatial: single spatial|0.81|||0.64|||
|Spatial/spatial: two spatial|0.87|||0.67|||
|Spatial/spatial: difference|0.06|0.02|<0.009|0.03|0.03|n.s.|
|Geometry: most distorted|0.91|||0.67|||
|Geometry: least distorted|0.82|||0.68|||
|Geometry: difference|0.09|0.04|<0.01|−0.01|0.03|n.s.|

In both experiments 1 and 2, the normal subjects detected the more distorting spatial combinations (eyes in/mouth down, and eyes out/mouth up) more rapidly (by 130–230 ms) than the less distorting combinations (eyes in/mouth up, and eyes out/mouth down) (Table 1 and Table 2). In experiment 2, there was also a nearly 10% advantage in accuracy. Again, this was true only for upright but not inverted faces, even in experiment 2, which used larger shifts in mouth position in inverted faces to compensate for the inversion effect on discriminating mouth position (Barton et al., 2001).

Like normal subjects, TS was more accurate and rapid when presented with target faces that had two spatial changes as opposed to one, and again this was true for upright but not inverted faces (Table 3). Paradoxically, TS did show an advantage for colour/spatial combinations in upright but not inverted faces, which normal subjects did not have. Regarding the geometric context effect that the normal group displayed, TS failed to show an advantage for maximally distorting combinations over minimally distorting ones, in either upright or inverted faces. This lack of a geometric context effect was significantly different from that of the normal controls for both accuracy (P<0.03) and reaction time (P<0.05) in upright faces.

Table 3. Results, subject TS

Reaction time (ms)
|Condition|Upright mean|Upright S.E.|Upright P|Inverted mean|Inverted S.E.|Inverted P|
|Colour/spatial: single change|2877|||3990|||
|Colour/spatial: colour + spatial|2645|||3828|||
|Colour/spatial: difference|232|107|<0.03|161|267|n.s.|
|Spatial/spatial: single spatial|2360|||3566|||
|Spatial/spatial: two spatial|2129|||3481|||
|Spatial/spatial: difference|231|70|<0.006|85|108|n.s.|
|Geometry: most distorted|2187|||3496|||
|Geometry: least distorted|2070|||3466|||
|Geometry: difference|−117|111|n.s.*|−327|398|n.s.|

Accuracy (frequency)
|Condition|Upright mean|Upright S.E.|Upright P|Inverted mean|Inverted S.E.|Inverted P|
|Colour/spatial: single change|0.87|||0.89|||
|Colour/spatial: colour + spatial|0.90|||0.90|||
|Colour/spatial: difference|0.04|0.02|<0.02|0.01|0.04|n.s.|
|Spatial/spatial: single spatial|0.90|||0.90|||
|Spatial/spatial: two spatial|0.97|||0.89|||
|Spatial/spatial: difference|0.07|0.02|<0.005|−0.01|0.02|n.s.|
|Geometry: most distorted|0.98|||0.89|||
|Geometry: least distorted|0.97|||0.89|||
|Geometry: difference|0.01|0.02|n.s.*|0.01|0.03|n.s.|

* Difference score significantly different from controls in experiment 2 (P<0.05).
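As a reading aid for the tables, each difference score is the single-change value minus the combined-change value (or least minus most distorted for the geometry rows), so a positive reaction-time difference indicates an advantage for the combined or more-distorted condition. A small sketch reproducing the spatial/spatial reaction-time differences from Table 1:

```python
# Spatial/spatial reaction times from Table 1 (experiment 1), in ms.
table1_rt = {
    "upright":  {"single_spatial": 2451, "two_spatial": 2300},
    "inverted": {"single_spatial": 2691, "two_spatial": 2746},
}

# Difference = single minus combined; a positive value means the
# two-change face was detected faster (a combination advantage).
diffs = {cond: rts["single_spatial"] - rts["two_spatial"]
         for cond, rts in table1_rt.items()}
print(diffs)  # {'upright': 151, 'inverted': -55}
```

This reproduces the reported pattern: a 151 ms advantage for two spatial changes in upright faces, and no advantage (a −55 ms difference) for inverted faces.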