Non-rigid, but not rigid, motion interferes with the processing of structural face information in developmental prosopagnosia
|Article code||Publication year||English article||Persian translation||Word count|
|37917||2015||15-page PDF||Available to order||14015 words|
Publisher : Elsevier - Science Direct
Journal : Neuropsychologia, Volume 70, April 2015, Pages 281–295
Abstract
There is growing evidence to suggest that facial motion is an important cue for face recognition. However, it is poorly understood whether motion is integrated with facial form information or whether it provides an independent cue to identity. To provide further insight into this issue, we compared the effect of motion on face perception in two developmental prosopagnosics and age-matched controls. Participants first learned faces presented dynamically (video), or in a sequence of static images, in which rigid (viewpoint) or non-rigid (expression) changes occurred. Immediately following learning, participants were required to match a static face image to the learned face. Test face images varied by viewpoint (Experiment 1) or expression (Experiment 2) and were learned or novel face images. We found similar performance across prosopagnosics and controls in matching facial identity across changes in viewpoint when the learned face was shown moving in a rigid manner. However, non-rigid motion interfered with face matching across changes in expression in both individuals with prosopagnosia, relative to the performance of control participants. In contrast, non-rigid motion did not differentially affect the matching of facial expressions across changes in identity for either prosopagnosic (Experiment 3). Our results suggest that whilst the processing of rigid motion information of a face may be preserved in developmental prosopagnosia, non-rigid motion can specifically interfere with the representation of structural face information. Taken together, these results suggest that both form and motion cues are important in face perception and that these cues are likely integrated in the representation of facial identity.
Introduction
Studies investigating the processing of both familiar and unfamiliar faces have overwhelmingly relied on the use of static images as stimuli. These studies have consistently revealed that, relative to familiar face recognition, the recognition, or even perception, of newly learned facial identities is poor and heavily dependent on the availability of image-based features from the original studied image (Bindemann and Sandford, 2011, Bruce, 1982, Longmore et al., 2008 and Newell et al., 1999). Specifically, recognition declines as a consequence of changes in the visual appearance of the face from the learned version, such as changes in viewpoint or expression (for a review, see Hancock et al., 2000). However, it is important to consider that faces are inherently dynamic, rather than static, stimuli and are most often seen moving outside the laboratory setting. Moreover, dynamic changes can occur across a range of viewpoints and expressions from one moment to the next. The use of static images in studies of face perception has helped us understand the invariant features of a face (i.e. static form cues which remain stable over time) that are important for recognition and to determine how these features sustain face recognition whilst ignoring changes which occur through movement (Bruce and Young, 1986). Indeed, it was assumed that motion in the face was relevant for the communication of social signals only, such as expression or speech, and less relevant for face recognition (Bruce and Young, 1986). Yet, recent evidence suggests that dynamic cues can enhance, rather than detract from, the processing of facial identity in neurotypical younger adults (Lander and Bruce, 2000, Lander and Bruce, 2003, Lander et al., 1999, Lander and Chuang, 2005, Pilz et al., 2006, Pilz et al., 2009 and Thornton and Kourtzi, 2002; for a recent review see Xiao et al., 2014).
However, how exactly motion contributes to the processing of facial identity remains somewhat unclear (O’Toole et al., 2002 and Roark et al., 2003). On the one hand, motion may provide a salient cue for recognition which is processed independently from facial form information; this is referred to as the ‘supplemental information’ hypothesis (SIH). Specifically, O’Toole et al. (2002) suggested that facial motion may provide a unique ‘dynamic identity signature’ for a person's facial identity which can act as a stand-alone, i.e. supplemental, cue for the purpose of recognition. O’Toole et al. (2002) proposed that these dynamic signatures are likely processed in dorsal areas of the face processing network, such as the posterior Superior Temporal Sulcus (pSTS) (Haxby et al., 2000). The dynamic signatures are learned through repeated exposure to the moving face (e.g. during speech or facial expressions), and thus the SIH argues that facial motion may be more relevant for the recognition of familiar faces, for which categorical representations are established in face memory (Bülthoff and Newell, 2004 and Bülthoff and Newell, 2006), rather than for the learning of new facial identities (O’Toole et al., 2002 and Roark et al., 2003). The alternative, or complementary, proposal is that motion is combined with relevant visual form information to create a more robust representation of the face in memory; this is referred to as the ‘representation enhancement’ hypothesis (REH; O’Toole et al., 2002). According to this approach, motion may provide additional information about the 3D structure of the face. This enhanced structural representation may assist in perceptual constancy by maintaining the ability to recognise the identity of the face across changes in viewpoint or facial expression. Therefore, unlike the SIH, which assumes familiarity with a facial identity, the REH suggests that facial motion may also benefit the learning of new or unfamiliar facial identities.
Several studies have provided evidence in favour of the idea that motion information supplements the representation of faces in memory, thus providing a unique cue for face perception. Specifically, studies with neurotypical younger adults have consistently demonstrated that motion benefits familiar face recognition when available form cues are degraded, e.g. through pixelation or blurring of the image (Knight and Johnston, 1997, Lander et al., 1999 and Lander and Chuang, 2005). This dynamic enhancement of face perception appears to be modulated by the type of facial motion, being more pronounced for non-rigid than rigid motion, and also by the degree of idiosyncrasy in the non-rigid motion across individuals (Knappmeyer et al., 2003 and Lander and Chuang, 2005). Non-rigid motion refers to internal deformations of the face which occur through speech or expressive gestures, while rigid motion refers to full translations of the head, such as when the face moves from side to side (Bülthoff et al., 2011, Knappmeyer et al., 2003, O’Toole et al., 2002 and Roark et al., 2003). Thus face motion (i.e. non-rigid), which was once assumed to convey purely social information, can provide a supplemental cue to support facial identity processing. Hill and Johnston (2001) also provided evidence in support of the SIH using a novel paradigm to assess the role of facial motion in discriminating between unfamiliar facial identities. In that study, the authors used motion capture to animate an ‘average face’ with different dynamic facial identities.
They observed that although the face stimuli provided no reliable visual form cues, observers performed above chance level in categorizing and discriminating between facial identities based on the motion cues alone. Thus, although this study provides evidence in support of the SIH, demonstrating that facial motion can provide a relevant, independent cue for face perception, the results also suggest that dynamic cues are rapidly acquired and are relevant for distinguishing and also learning new facial identities (see also Steede et al., 2007a and Steede et al., 2007b). One additional avenue of research which has also provided support for the SIH comes from a small number of studies which have examined dynamic face processing in individuals with prosopagnosia. Prosopagnosia is a disorder characterised by the inability to recognise the identity of an individual from their face alone. Although the disorder can result from explicit insult to an already established face processing system (Bodamer, 1947 and Farah, 1990), more recent evidence has highlighted that atypical face recognition can emerge during development, i.e. developmental prosopagnosia (DP) (Duchaine, Germine, and Nakayama, 2007; Duchaine, 2008; Susilo and Duchaine, 2013). To date, prosopagnosia has been extensively studied through the use of static face images. These studies have demonstrated that the processing of static structural form cues in the face is significantly impaired in such individuals (e.g. Bowles et al., 2009; Duchaine et al., 2007; Duchaine and Nakayama, 2005; Németh et al., 2014; Palermo et al., 2011; Towler et al., 2012). Interestingly, although the encoding of structural information is impaired, a small number of studies have found that the ability to extract idiosyncratic motion cues to support face processing may remain, to some extent, preserved in prosopagnosia (Lander et al., 2004, Longmore and Tree, 2013 and Steede et al., 2007b).
For example, Lander et al. (2004) observed that HJA (who acquired prosopagnosia and visual agnosia following occipito-temporal damage) was unable to use dynamic cues to support familiar face recognition or the learning of new facial identities. Nevertheless, the authors reported that HJA could match the identity of sequentially presented dynamic faces, in comparison to static faces. In other words, HJA could use dynamic information for the purpose of face perception but not face recognition. This performance in matching dynamic faces is consistent with studies which reported that HJA was not impaired at matching face parts, relative to whole faces (Boutsen and Humphreys, 2002). Previous studies have suggested that motion perception was unimpaired in HJA (Humphreys et al., 1993); therefore HJA may have been able to exploit motion information, independently from facial form, for the purpose of face matching. Other evidence from studies involving developmental prosopagnosics has largely supported Lander and colleagues' original findings. Specifically, although evidence for a benefit of motion on face memory has been inconsistent (Esins et al., 2014 and Longmore and Tree, 2013; but see Steede et al., 2007b), the ability to match moving faces has been reliably observed. For example, Longmore and Tree (2013) reported better face matching performance across changes in viewpoint in individuals with developmental prosopagnosia when the same idiosyncratic non-rigid motion was available in the face stimuli during the learning and test conditions, compared to when all images were static in nature. In addition, Steede et al. (2007b) observed that CS, a developmental prosopagnosic, could reliably discriminate between facial identities when only motion cues in the face were available, irrespective of whether the motion was rigid or non-rigid. Taken together, these results suggest that the ability to extract motion information for the purpose of perceiving unfamiliar faces (i.e.
to match and discriminate newly learned facial identities) may remain relatively intact in cases of DP. However, the evidence suggests that facial motion may not facilitate memory for faces in DP, suggesting that facial motion may be difficult to represent in this cohort (Longmore and Tree, 2013). In contrast, supporting evidence for the REH has been less consistent. On the one hand, a number of face matching (Pilz et al., 2006 and Thornton and Kourtzi, 2002) and face memory (Christie and Bruce, 1998, Lander and Bruce, 2003 and Pike et al., 1997) studies in younger adults have revealed that learning a face in motion, relative to a single static image, can enhance subsequent recognition of a novel static image of the face. However, when structural information has been equated across both static and dynamic learning conditions (i.e. presenting multiple static images rather than the motion sequence of image frames), this enhancement from dynamic information has often been reduced (Christie and Bruce, 1998 and Lander and Bruce, 2003; but see Pike et al., 1997). It has therefore been argued that the observed benefit to face processing from ‘dynamic’ face cues may, to some extent, be mediated by the additional facial form cues available in the motion sequence, rather than by the dynamic information enhancing the encoding of available form cues (Lander and Bruce, 2003). Yet, we reported better perception of unfamiliar faces in older adults when the face was initially learned moving, relative to a sequence of static images (Maguinness and Newell, 2014). Ageing is often associated with a decline in the ability to perceive and recognise unfamiliar faces from static cues alone (e.g. Habak et al., 2008; Lee et al., 2011). However, we found that when an unfamiliar face was presented moving during learning, this motion subsequently benefited the matching of novel static images of that face.
Thus, the performance enhancement observed for learning faces in motion suggests that facial motion may combine with available form information to create a more robust structural representation of the face, i.e. preserving identity matching across image transformations of the face (REH), at least over short-term intervals. This demonstrates that motion does not act only as a source of information which is extracted independently of facial form information (SIH). As mentioned previously, the processing of motion for the purpose of face perception may be underpinned by unique neural substrates such as the pSTS. The pSTS is considered an area within the face processing network (see Haxby et al., 2000) which is primarily concerned with processing the changeable aspects of the face (Haxby et al., 2000, Pitcher et al., 2011, Pitcher et al., 2014 and Schultz and Pilz, 2009). This area exhibits enhanced cortical activation to faces moving in both a non-rigid (Fox et al., 2009, Pitcher et al., 2011 and Schultz and Pilz, 2009) and a rigid manner (Lee et al., 2010a), compared to static face images. However, recent neuroimaging evidence also suggests that interactions between form and motion areas in the face processing network may be more pronounced than previously assumed (Furl et al., 2014, Schultz and Pilz, 2009 and Schultz et al., 2013). For example, Schultz and Pilz (2009) observed that classically defined static form processing areas (e.g. fusiform gyrus) also exhibited an enhanced functional activation profile (albeit to a lesser extent than STS) for dynamic compared to static faces, leading the authors to conclude that the integration of facial form and motion likely occurs in both ventral and dorsal areas of the face processing network. These interactions lend support to the REH. However, it has been argued by Pitcher, Duchaine, and Walsh (2014) that the dissociation between static and dynamic face processing may be strict.
Specifically, they observed that disrupting the processing of the occipital face area (OFA) in neurotypical individuals, through the delivery of theta-burst transcranial magnetic stimulation (TBS), did not affect the functional response profile in pSTS to moving face images, yet it did reduce the pSTS response profile to static face images. This preserved activation in pSTS appears to be consistent with the SIH, suggesting that dynamic facial information may engage pSTS independently of available form cues. However, as there were no task demands in that study (but see Pitcher, 2014), it is difficult to conclude what specific aspects of face processing, e.g. identity processing, may have been sub-served by the functional activity in pSTS. In summary, there is some evidence to suggest that facial motion is an important source of information for face processing, with studies suggesting that facial motion can improve the perception and recognition of both familiar and unfamiliar faces. In the case of prosopagnosia, there is some suggestion that access to facial motion processing may be preserved, suggesting that motion may support identity processing in DP (Longmore and Tree, 2013 and Steede et al., 2007a). Thus, in the same way that dynamic cues can support face recognition in neurotypical younger adults when available form cues are degraded (e.g. Lander et al., 1999), this processing of independent sources of information may benefit face perception in prosopagnosia. Taken together, these results support a possible dissociation between the mechanisms involved in static and dynamic face processing (SIH). However, the results from Maguinness and Newell (2014) and a number of previous studies (Otsuka et al., 2009, Pilz et al., 2006 and Thornton and Kourtzi, 2002) also demonstrate that learning a face in motion can enhance the structural (i.e.
form) representation of the face, suggesting that static and dynamic cues are likely to interact when encoding facial identity, in line with the representation enhancement hypothesis (REH). It is therefore unclear whether motion information will enhance or interfere with the processing of facial information in individuals with DP since, to the best of our knowledge, no study has explicitly tested how facial motion may affect the representation of facial form in this population. In a series of matching experiments reported below, we examined how learning a face in motion, relative to a static sequence presentation, may affect the subsequent ability to match an image of that face across changes in viewpoint and expression. We tested performance on these tasks in two developmental prosopagnosics, UM and PL, and compared their performance to a group of age-matched, neurotypical individuals (i.e. the control group). We used a face-matching paradigm as this has been previously shown to be sensitive to the effects of motion on face learning in neurotypical younger (Pilz et al., 2006 and Thornton and Kourtzi, 2002) and older (Maguinness and Newell, 2014) adults, as well as being sensitive to detecting idiosyncratic motion processing in prosopagnosia (Lander et al., 2004 and Longmore and Tree, 2013). In separate experiments, we investigated how rigid motion (Experiment 1) and non-rigid motion (Experiment 2) may affect face perception using the same, immediate matching paradigm for both categories of motion. We also examined whether (non-rigid) motion cues in the face can be used to support the perception of social information from a face, rather than identity, in DP (Experiment 3).