Download English ISI Article No. 112807
Translated Article Title

Multiscale self-similarity and sparse representation based single image super-resolution

English Title
Multiscale self-similarity and sparse representation based single image super-resolution
Article Code | Publication Year | Pages (English Article)
112807 | 2017 | 30 pages, PDF
Source

Publisher : Elsevier - Science Direct

Journal : Neurocomputing, Volume 260, 18 October 2017, Pages 92-103

Translated Keywords
Single image super-resolution; Sparse representation; Multiscale self-similarity; Sparse coefficient alignment
English Keywords
Single image super-resolution; Sparse representation; Multiscale self-similarity; Sparse coefficient alignment;

English Abstract

Recent research has demonstrated that the performance of sparse representation based methods for single image super-resolution (SISR) reconstruction depends strongly on the accuracy of the sparse coding coefficients, and accordingly several more accurate models have been developed that exploit the nonlocal patch redundancy within the observed image. However, the capability of those models may be limited because they fail to consider the redundant information within the same scale and across multiple scales simultaneously. Thus, in this paper, an improved SISR reconstruction method is proposed, in which a compensative pair of l1-norm regularization terms is first constructed by taking advantage of multiscale self-similarity. The calculated sparse coefficients are then aligned to this pair of references in order to suppress sparse coding noise, yielding more faithful recoveries. Finally, based on the conventional iterative shrinkage-thresholding algorithm, a local-to-global and coarse-to-fine numerical scheme is established to solve the proposed model effectively. Extensive experiments on both synthetic and real images demonstrate that the proposed method delivers promising SISR performance and surpasses recently published counterparts in terms of both objective evaluation and visual perception.
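
The abstract refers to the conventional iterative shrinkage-thresholding algorithm (ISTA) and to aligning sparse coefficients with references derived from self-similar patches. The sketch below is a rough illustration only, not the authors' implementation: it applies ISTA to a simplified single-reference model, 0.5*||y - D*alpha||^2 + gamma*||alpha - beta||_1. The function names, the single reference vector `beta`, and the weight `gamma` are assumptions; the paper's compensative pair of l1 terms and its local-to-global, coarse-to-fine scheme are not reproduced here.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau*||.||_1 (element-wise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista_aligned_sparse_coding(y, D, beta, gamma, n_iter=200):
    """Minimize 0.5*||y - D@alpha||^2 + gamma*||alpha - beta||_1 with ISTA.

    y     : observed (degraded) patch, flattened, shape (m,)
    D     : patch dictionary, shape (m, k)
    beta  : reference sparse coefficients estimated from self-similar patches, shape (k,)
    gamma : weight of the alignment (coefficient-noise suppression) term
    """
    alpha = np.zeros_like(beta)
    # Step size 1/L, where L is the Lipschitz constant of the data-fidelity
    # gradient, i.e. the largest singular value of D squared.
    L = np.linalg.norm(D, ord=2) ** 2
    for _ in range(n_iter):
        grad = D.T @ (D @ alpha - y)          # gradient of 0.5*||y - D@alpha||^2
        z = alpha - grad / L                  # gradient descent step
        # Proximal step of the shifted l1 term: soft-threshold around beta
        # rather than zero, pulling the code toward the self-similarity reference.
        alpha = beta + soft_threshold(z - beta, gamma / L)
    return alpha
```

With a column-normalized dictionary and a reference `beta` obtained, for example, by averaging the codes of the most similar patches at the same and coarser scales, this loop converges to an aligned code for a single patch; the full method in the paper repeats such patch-level updates within a larger iterative reconstruction.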