Download ISI English Article No. 24686

Article Title

Double linear regressions for single labeled image per person face recognition

Article code: 24686 | Publication year: 2014 | Length: 12 pages (PDF)
Source

Publisher : Elsevier - Science Direct

Journal : Pattern Recognition, Volume 47, Issue 4, April 2014, Pages 1547–1558

Table of Contents

Abstract
Keywords
1. Introduction
2. A brief review of LDA and RDA
2.1. LDA
2.2. RDA
3. Subspace assumption based label propagation
Fig. 1. The learning process of the sparse representation structure
3.1. Subspace assumption based classification
3.2. Learning the sparse representation structure
3.3. Sparsity preserving regularizer
3.4. Double linear regressions based SDR
3.5. Kernel DLR
Fig. 2. (a) Categorization of the related dimensionality reduction methods by supervision mode and (b) categorization of the related dimensionality reduction methods by the geometric structure considered.
3.6. Computational complexity analysis
4. Comparison with related works
Fig. 3. (a) Some faces of the first person in the PIE database, (b) some faces of the first person in the Extended Yale B database and (c) all 14 faces of the first person in the AR database.
5. Experiments
5.1. Database description
5.2. Experimental settings
5.3. Experimental results and discussions
5.4. Further investigations of the DLR algorithm
6. Conclusions and future work
Keywords
Semi-supervised dimensionality reduction, Label propagation, Sparse representation, Linear regressions, Linear discriminant analysis, Face recognition
Article Preview

Abstract

• DLR seeks the best discriminating subspace and preserves the sparse structure.
• DLR uses label information to learn a more discriminative sparse structure.
• Sparse coefficient vector is quickly computed by class specific linear regression.
• The difficulty of selecting graph construction parameters is avoided in DLR.
• Promising experimental results on three public face datasets are presented.

Recently the underlying sparse representation structure in high dimensional data has received considerable attention in pattern recognition and computer vision. In this paper, we propose a novel semi-supervised dimensionality reduction (SDR) method, named Double Linear Regressions (DLR), to tackle the Single Labeled Image per Person (SLIP) face recognition problem. DLR simultaneously seeks the best discriminating subspace and preserves the sparse representation structure. Specifically, a Subspace Assumption based Label Propagation (SALP) method, which is accomplished using Linear Regressions (LR), is first presented to propagate the label information to the unlabeled data. Then, based on the propagated labeled dataset, a sparse representation regularization term is constructed via Linear Regressions (LR). Finally, DLR takes into account both the discriminating efficiency and the sparse representation structure by using the learned sparse representation regularization term as a regularization term of Linear Discriminant Analysis (LDA). The extensive and encouraging experimental results on three publicly available face databases (CMU PIE, Extended Yale B and AR) demonstrate the effectiveness of the proposed method.

Introduction

In many fields of scientific research such as face recognition [1], bioinformatics [2], and information retrieval [3], the data are usually presented in a very high dimensional form. This makes researchers confront the problem of “the curse of dimensionality” [4], which limits the application of many practical technologies due to the heavy computational cost in high dimensional space, and deteriorates the performance of model estimation when the number of training samples is small compared to the number of features. In practice, dimensionality reduction has been employed as an effective way to deal with “the curse of dimensionality”. In the past years, a variety of dimensionality reduction methods have been proposed [5], [6], [7], [8], [9] and [10]. According to the geometric structure considered, the existing dimensionality reduction methods can be categorized into three types: global structure based methods, local neighborhood structure based methods, and the recently proposed sparse representation structure [11] and [12] based methods. Two classical dimensionality reduction methods, Principal Component Analysis (PCA) [13] and Linear Discriminant Analysis (LDA) [14], belong to global structure based methods. In the field of face recognition, they are known as “Eigenfaces” [15] and “Fisherfaces” [16]. Two popular local neighborhood structure based methods are Locality Preserving Projections (LPP) [17] and Neighborhood Preserving Embedding (NPE) [18]. LPP and NPE are named “Laplacianfaces” [19] and “NPEfaces” [18] in face recognition. The representative sparse representation structure based methods include Sparsity Preserving Projections (SPP) [20], Sparsity Preserving Discriminant Analysis (SPDA) [21] and Fast Fisher Sparsity Preserving Projections (FFSPP) [22]. They have also been successfully applied to face recognition.
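As a concrete instance of a global structure based method, the following is a minimal NumPy sketch of PCA, the first of the two classical methods mentioned above. The data matrix, its size, and the number of components are illustrative assumptions, not values from the paper:

```python
import numpy as np

def pca(X, k):
    """Project an n x d data matrix X onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                      # center the data
    # right singular vectors of the centered data are the eigenvectors
    # of the sample covariance matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T                                 # d x k projection matrix
    return Xc @ W, W

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))                   # 100 samples, 50 features
Z, W = pca(X, 5)
print(Z.shape)                                   # (100, 5)
```

LDA differs in that it uses class labels to build between-class and within-class scatter matrices rather than the plain covariance, but the projection-matrix mechanics are analogous.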
In order to deal with the nonlinear structure in data, most of the above linear dimensionality reduction methods have been extended to their kernelized versions which perform in Reproducing Kernel Hilbert Space (RKHS) [23]. Kernel PCA (KPCA) [24] and Kernel LDA (KLDA) [25] are the nonlinear dimensionality reduction methods corresponding to PCA and LDA. Kernel LPP (KLPP) [17] and [26] and Kernel NPE (KNPE) [27] are the kernelized versions of LPP and NPE. The nonlinear version of SPDA is Kernel SPDA [21]. One of the major challenges to appearance-based face recognition is recognition from a single training image [28] and [29]. This problem is called the “one sample per person” problem: given a stored database of faces, the goal is to identify a person from the database later in time in any different and unpredictable poses, lighting, etc. from just one image per person [28]. Under many practical scenarios, such as law enforcement, driver license and passport card identification, in which there is usually only one labeled sample per person available, the classical appearance-based methods including Eigenfaces and Fisherfaces suffer a big performance drop or fail to work entirely. LDA fails to work since the within-class scatter matrix degenerates to a zero matrix when only one sample per person is available. Zhao et al. [30] suggested replacing the within-class scatter matrix with an identity matrix to make LDA work in this setting, although the performance of this Remedied LDA (ReLDA) is still not satisfying. Due to its importance and difficulty, the one sample per person problem has aroused lots of interest in the face recognition community. To attack this problem, many ad hoc techniques have been developed, including synthesizing virtual samples [31] and [32], localizing the single training image [33], probabilistic matching [34] and neural network methods [35]. More details on the single training image problem can be found in a recent survey [28].
With the fast development of the digital photography industry, it is possible to have a large set of unlabeled images. In this background, a more natural and promising way to attack the one labeled sample per person problem is semi-supervised dimensionality reduction (SDR). Semi-supervised Discriminant Analysis (SDA) [29] is one SDR method which has been successfully applied to single labeled image per person face recognition. SDA first learns the local neighborhood structure using the unlabeled data and then uses the learned local neighborhood structure to regularize LDA to obtain a discriminant function which is as smooth as possible on the data manifold. Laplacian LDA (LapLDA) [36], Semi-supervised LDA (SSLDA) [37], and Semi-supervised Maximum Margin Criterion (SSMMC) [37] are all reported semi-supervised dimensionality reduction methods which can improve the performance of their supervised counterparts like LDA and Maximum Margin Criterion (MMC) [38]. These methods consider the local neighborhood structure and can be unified under the graph embedding framework [37] and [39]. Despite the success of these SDR methods, there are still some disadvantages: (1) these SDR methods are based on the manifold assumption, which requires sufficiently many samples to characterize the data manifold [40]; (2) the adjacency graphs constructed in these methods are artificially defined, which brings the difficulty of parameter selection of neighborhood size and edge weights. To resolve these issues, Sparsity Preserving Discriminant Analysis (SPDA) [21] was presented. SPDA first learns the sparse representation structure through solving n (the number of training samples) ℓ1-norm optimization problems, and then uses the learned sparse representation structure to regularize LDA.
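The regularization pattern shared by SDA, LapLDA, and SPDA can be sketched as a generalized eigenproblem: maximize the between-class scatter while penalizing a structure-preserving term. The sketch below is a generic illustration under stated assumptions, not the paper's exact formulation; `Sb`, `St`, and `J` are hypothetical stand-ins for the between-class scatter, total scatter, and the regularizer (a graph Laplacian in SDA/LapLDA, a sparsity-preserving term in SPDA):

```python
import numpy as np

def regularized_lda(Sb, St, J, alpha, k):
    """Top-k directions maximizing w' Sb w / w' (St + alpha*J) w.

    Solves Sb w = lam (St + alpha*J) w as a standard eigenproblem;
    a tiny ridge keeps the denominator matrix invertible."""
    d = Sb.shape[0]
    M = np.linalg.solve(St + alpha * J + 1e-8 * np.eye(d), Sb)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)[:k]
    return vecs[:, order].real

# synthetic demo matrices (illustrative only)
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 2))
Sb = A @ A.T                     # rank-2 between-class scatter
St = np.eye(6)                   # total scatter
J = np.eye(6)                    # trivial regularizer for the demo
W = regularized_lda(Sb, St, J, alpha=0.1, k=2)
print(W.shape)                   # (6, 2)
```

Different choices of `J` recover the different SDR methods; plugging in a zero regularizer falls back to ordinary (ratio-trace) LDA.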
SPDA has achieved a good performance on single labeled image per person face recognition, but it still has some shortcomings: (1) it is computationally expensive, since n ℓ1-norm optimization problems need to be solved in learning the sparse representation structure, and (2) the label information is not taken advantage of in learning the sparse representation structure. To tackle the above problems, we propose a novel SDR method, named Double Linear Regressions (DLR), which simultaneously seeks the best discriminating subspace and preserves the sparse representation structure. More specifically, a Subspace Assumption based Label Propagation (SALP) method, which is accomplished using Linear Regressions (LR), is first presented to propagate the label information to the unlabeled data. Then, based on the propagated labeled dataset, a sparse representation regularization term is constructed via Linear Regressions (LR). Finally, DLR takes into account both the discriminating efficiency and the sparse representation structure by using the learned sparse representation regularization term as a regularization term of linear discriminant analysis. It is worthwhile to highlight some aspects of DLR as follows: (1) DLR is a novel semi-supervised dimensionality reduction method aiming at simultaneously seeking the best discriminating subspace and preserving the sparse representation structure. (2) DLR can obtain the sparse representation structure via n small class-specific linear regressions. Thus, it is more time efficient than SPDA. (3) In DLR, label information is first propagated to the whole training set. Then it is used in learning a more discriminative sparse representation structure. (4) Unlike SDA, there are no graph construction parameters in DLR. The difficulty of selecting these parameters is avoided. (5) Our proposed label propagation method SALP is quite general. It can be combined with other graph-based SDR methods to construct a more discriminative graph.
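The class-specific linear-regression idea underlying SALP rests on the subspace assumption: a face image lies near the span of the same person's other images, so each class fit is an ordinary least-squares problem and the reconstruction residual measures distance to that class's subspace. The sketch below illustrates that step; the function and variable names are our own, not the paper's:

```python
import numpy as np

def predict_by_class_regression(x, class_samples):
    """Assign x to the class whose samples reconstruct it best.

    class_samples: dict label -> (d x n_c) matrix whose columns are
    that class's samples. Each class fit is a small least-squares
    regression; the residual is the distance from x to the class
    subspace (the subspace assumption)."""
    best_label, best_residual = None, np.inf
    for label, Xc in class_samples.items():
        beta, *_ = np.linalg.lstsq(Xc, x, rcond=None)
        residual = np.linalg.norm(x - Xc @ beta)
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label

# toy demo: class 0 spans the first two axes, class 1 spans the third
classes = {0: np.array([[1., 0.], [0., 1.], [0., 0.]]),
           1: np.array([[0.], [0.], [1.]])}
x = np.array([1.0, 2.0, 0.0])                    # lies in class 0's subspace
print(predict_by_class_regression(x, classes))   # 0
```

Because each regression involves only one class's samples, these n fits are far cheaper than the n ℓ1-norm problems SPDA solves, which is the source of DLR's claimed speed advantage.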
The rest of the paper is organized as follows. Section 2 gives a brief review of LDA and RDA. DLR is proposed in Section 3. DLR is compared with some related works in Section 4. The experimental results and discussions are presented in Section 5. Finally, Section 6 gives some concluding remarks and future work.

Conclusion

In this paper, we proposed a novel semi-supervised dimensionality reduction method, named Double Linear Regressions (DLR), to attack the single labeled image per person face recognition problem. DLR simultaneously seeks the best discriminating subspace and preserves the sparse representation structure. A Subspace Assumption based Label Propagation (SALP) method, which is accomplished using Linear Regressions (LR), is first presented to propagate the label information to the unlabeled data. Then, based on the propagated labeled dataset, a sparse representation regularization term is constructed via Linear Regressions (LR). Finally, DLR takes into account both the discriminating efficiency and the sparse representation structure by using the learned sparse representation regularization term as a regularization term of linear discriminant analysis. The extensive experiments on three publicly available face databases demonstrate the promising performance of our proposed DLR method, from which we also find that DLR can better employ unlabeled samples than SDA and SPDA, and has high parameter stability. According to the experimental results, our proposed DLR outperforms all the other compared methods when unlabeled samples are sufficient (E2). However, when unlabeled samples are scarce (E1), the performance of DLR is not good enough on PIE and AR. This is because when unlabeled samples are too few, the subspace structure may not be well captured, which deteriorates the performance of both label propagation and learning the sparse representation structure. How to tackle this problem is one of our future focuses. One possible strategy is to incorporate the collaborative representation mechanism [59] into DLR when unlabeled samples are scarce.

Another interesting direction is to design or explore other label propagation methods that propagate labels more accurately, since we find that obtaining more reliable labels in the label propagation phase is very significant for the final performance.