When is grasping affected by the Müller-Lyer illusion? A quantitative review
| Article code | Publication year | English article | Persian translation | Word count |
|---|---|---|---|---|
| 77634 | 2009 | 13-page PDF | available on order | 13,668 words |
Publisher: Elsevier - Science Direct
Journal : Neuropsychologia, Volume 47, Issue 6, May 2009, Pages 1421–1433
Milner and Goodale (1995) [Milner, A. D., & Goodale, M. A. (1995). The visual brain in action. Oxford, UK: Oxford University Press] proposed a functional division of labor between vision-for-perception and vision-for-action. Their proposal is supported by neuropsychological, brain-imaging, and psychophysical evidence. However, its prediction that actions are not affected by visual illusions has remained controversial. Following up on a related review on pointing (see Bruno et al., 2008 [Bruno, N., Bernardis, P., & Gentilucci, M. (2008). Visually guided pointing, the Müller-Lyer illusion, and the functional interpretation of the dorsal-ventral split: Conclusions from 33 independent studies. Neuroscience and Biobehavioral Reviews, 32(3), 423–437]), here we re-analyze 18 studies on grasping objects embedded in the Müller-Lyer (ML) illusion. We find that median percent effects across studies are indeed larger for perceptual than for grasping measures. However, almost all grasping effects are larger than zero, and the two distributions show substantial overlap and variability. A fine-grained analysis reveals that critical roles in accounting for this variability are played by the informational basis for guiding the action, by the number of trials per condition of the experiment, and by the angle of the illusion fins. When all these factors are considered together, the data support a difference between grasping and perception only when online visual feedback is available during movement. Thus, unlike pointing, grasping studies of the ML illusion suggest that the perceptual and motor effects of the illusion differ only because of online, feedback-driven corrections, and do not appear to support independent spatial representations for vision-for-action and vision-for-perception.
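The comparison described in the abstract can be sketched in a few lines. This is a minimal, illustrative Python example, not the paper's analysis code: the per-study "percent effect" is assumed here to be the fins-out minus fins-in response, expressed as a percentage of the mean response, and the study values are toy numbers invented for the illustration.

```python
# Hypothetical sketch of the "median percent effect" comparison
# between perceptual and grasping measures of the Müller-Lyer
# illusion. Definitions and data below are illustrative assumptions,
# not taken from the paper.

from statistics import median

def percent_effect(fins_in: float, fins_out: float) -> float:
    """Illusion effect as a percentage of the mean response."""
    mean_response = (fins_in + fins_out) / 2.0
    return 100.0 * (fins_out - fins_in) / mean_response

# Toy per-study responses in mm: (fins-in, fins-out) pairs.
perceptual_studies = [(48.0, 56.0), (47.0, 55.0), (49.0, 54.0)]
grasping_studies = [(50.0, 52.0), (51.0, 52.5), (49.5, 51.0)]

perceptual_effects = [percent_effect(a, b) for a, b in perceptual_studies]
grasping_effects = [percent_effect(a, b) for a, b in grasping_studies]

# The pattern the review reports: grasping effects are smaller than
# perceptual effects, yet still greater than zero.
print(f"median perceptual effect: {median(perceptual_effects):.1f}%")
print(f"median grasping effect:   {median(grasping_effects):.1f}%")
```

With the toy numbers above, the grasping median is smaller than the perceptual median but still positive, mirroring the overlap-with-difference pattern the review describes.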