Download English ISI article No. 29618
English title

Spatial imagery of novel places based on visual scene transformation
Article code: 29618
Publication year: 2012
English article length: 11 pages (PDF)
Source

Publisher : Elsevier - Science Direct

Journal : Cognitive Systems Research, Volume 14, Issue 1, April 2012, Pages 26–36

Keywords
Hippocampus, Object-place associative memory, Mental navigation, Spatial cognition
Article preview

English abstract

The hippocampus is known to maintain memories of object-place associations that can produce a scene expectation at a novel viewpoint. To implement such capabilities, the memorized distances and directions of an object from the viewer at a fixed location should be integrated with the imaginary displacement to the new viewpoint. However, the neural dynamics of such scene expectation at the novel viewpoint have not been discussed. In this study, we propose a method of coding novel places based on visual scene transformation as a component of the object-place memory in the hippocampus. In this coding, a novel place is represented by a transformed version of the viewer’s scene under an imaginary displacement. When the places of individual objects are stored with this coding in the hippocampus, an object’s displacement at the imaginary viewpoint can be evaluated by comparing the transformed viewer’s scene with the stored scene. Results of computer experiments demonstrated that the coding successfully produced a scene expectation of a three-object arrangement at a novel viewpoint. The scene expectation was retained even without similarities between the imaginary scene and the real scene at that location, where the imaginary scenes functioned only as indices denoting the topographical relationship between object locations. The results suggest that the hippocampus uses place coding based on scene transformation and implements spatial imagery of object-place associations from a novel viewpoint.
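The coding described in the abstract can be illustrated with a minimal sketch. The paper does not publish an implementation, so the following Python fragment only assumes that a scene is a one-dimensional luminance profile indexed by egocentric viewing direction and that all scene elements lie at one common assumed distance (the equidistance assumption referred to in the conclusions). The function names transform_scene and scene_error, the resampling by interpolation, and the mean-squared-error measure are illustrative choices, not the paper's equations.

import numpy as np

def transform_scene(scene, directions, displacement, d=100.0):
    """Sketch of transforming a viewer's scene to an imaginary viewpoint.

    scene        : 1-D luminance values sampled over egocentric directions
    directions   : viewing directions (radians) at the current viewpoint
    displacement : (dx, dy) imaginary displacement of the viewer
    d            : assumed common distance of scene elements (equidistance assumption)
    """
    # Place every scene sample at the assumed distance d.
    x = d * np.cos(directions)
    y = d * np.sin(directions)

    # Re-express each sample relative to the imaginary viewpoint.
    dx, dy = displacement
    new_dirs = np.arctan2(y - dy, x - dx)

    # Resample the luminance profile back onto the original direction grid;
    # this expands or compresses the scene along the eccentricity.
    order = np.argsort(new_dirs)
    return np.interp(directions, new_dirs[order], scene[order])

def scene_error(imaginary_scene, real_scene):
    """Mean squared difference, used here as the scene-comparison error."""
    return float(np.mean((imaginary_scene - real_scene) ** 2))

In this reading, comparing the transformed viewer's scene with a stored scene (e.g., checking that scene_error stays below some threshold) plays the role of evaluating an object's displacement at the imaginary viewpoint.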

English introduction

The hippocampus has a beautiful cellular organization and a clear functional role in memory (Aggleton and Brown, 1999, O’Keefe and Nadel, 1978 and Squire, 1992). In rodents, the memory in the hippocampus has been characterized as spatial, in association with the findings of place cells (O’Keefe & Nadel, 1978), while the memory in the human hippocampus has been characterized as episodic (Scoville & Milner, 1957), i.e., the memory of personal experiences in daily life. To reach a common understanding of the hippocampus, the object-place memory paradigm has been used to investigate hippocampal memory in humans (Cave and Squire, 1991, King et al., 2002, Smith and Milner, 1981 and Stepankova et al., 2004), monkeys (Gaffan, 1994 and Rolls, 1999) and rodents (Eacott & Norman, 2004). The object-place memory is a simplified version of an episodic memory model consisting of “what”, “where” and “when” (Tulving, 1983), and the afferent projections to the hippocampus are in good agreement with the functional requirements for the hippocampus to maintain the object-place memory, i.e., the hippocampus receives a convergent projection of object information from the ventral visual pathway and spatial information from the dorsal visual pathway (Suzuki & Amaral, 1994). The object-place memory paradigm is, therefore, well suited to investigating the neural basis of hippocampal memory (Eichenbaum et al., 2007 and Mishkin et al., 1997).

English conclusion

3.1. Accuracy of the scene at the imaginary viewpoint

The accuracy of the scene at the imaginary viewpoint was evaluated by comparison with the scene at the real location. In the experiment, the viewer location was fixed at (50, 70) and the scenes at each imaginary viewpoint were estimated. Fig. 5a shows the viewer’s location in the environment. Fan-shaped plots show the viewer’s scene and the imaginary scene at (150, 130). In the scene transformation, the viewer’s scene is expanded along the eccentricity, so the visual field of the scene at the estimated location appears larger than that of the viewer’s scene. Fig. 5b–d shows the similarities between the scene at the imaginary viewpoint and the scene at the real location. The error of the imaginary scene shown in Fig. 5c is 0.056, which is smaller than the variance of the wall luminance. Fig. 5e shows the errors in the scenes at each of the imaginary viewpoints. Over widely distributed regions, the errors are smaller than the variance of the wall luminance. The errors tend to be larger at locations closer to the walls; in these regions, R/d in Eq. (2) is larger, so the errors are considered to be influenced by the equidistance assumption used in the scene transformation. These results demonstrate that the scene at the imaginary viewpoint is similar to the scene at the real location, and such scenes are therefore expected to be usable as codes for novel places.
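To make the evaluation procedure above concrete, here is a small sketch in the same spirit: a toy square room with a luminance pattern painted on its walls, a simple ray-cast renderer, and a grid of imaginary viewpoints whose transformed scenes are compared against the rendered scenes at the real locations. The room size, the luminance pattern, the sampling of the visual field, and the accuracy threshold are placeholders rather than the paper's settings, and transform_scene / scene_error refer to the illustrative helpers sketched after the abstract.

import numpy as np
# transform_scene and scene_error are the illustrative helpers defined earlier.

ROOM = 200.0  # assumed side length of a square environment (arbitrary units)

def wall_luminance(px, py):
    """Toy luminance pattern on the walls (not the paper's stimuli)."""
    return 0.5 + 0.5 * np.sin(0.05 * (px + py))

def render_scene(loc, directions):
    """Ray-cast a 1-D luminance scene from `loc` toward the room walls."""
    scene = np.empty_like(directions)
    for i, theta in enumerate(directions):
        dx, dy = np.cos(theta), np.sin(theta)
        hits = []
        if dx > 1e-9:
            hits.append((ROOM - loc[0]) / dx)
        if dx < -1e-9:
            hits.append(-loc[0] / dx)
        if dy > 1e-9:
            hits.append((ROOM - loc[1]) / dy)
        if dy < -1e-9:
            hits.append(-loc[1] / dy)
        t = min(hits)  # nearest wall along the ray
        scene[i] = wall_luminance(loc[0] + t * dx, loc[1] + t * dy)
    return scene

# Viewer fixed at (50, 70), as in the experiment described above.
viewer_loc = np.array([50.0, 70.0])
directions = np.linspace(-np.pi, np.pi, 181, endpoint=False)  # assumed sampling
viewer_scene = render_scene(viewer_loc, directions)

# Compare the transformed (imaginary) scene with the real scene on a grid.
errors = {}
for gx in range(20, 200, 20):
    for gy in range(20, 200, 20):
        target = np.array([float(gx), float(gy)])
        imaginary = transform_scene(viewer_scene, directions, target - viewer_loc)
        real = render_scene(target, directions)
        errors[(gx, gy)] = scene_error(imaginary, real)

# An imaginary scene is treated as accurate when its error stays below the
# variance of the wall luminance, mirroring the criterion reported above.
threshold = float(np.var(viewer_scene))  # stand-in for the wall-luminance variance
accurate = {loc for loc, err in errors.items() if err < threshold}

Consistent with the description above, such a toy comparison would be expected to show larger errors for imaginary viewpoints near the walls, where the displacement is large relative to the assumed distance d and the equidistance assumption breaks down.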