Fuzzy qualitative deep compression network
|Article code||Year||English article length||Persian translation|
|154256||2017||24-page PDF||Available on request|
The English article is approximately 9,976 words.
Publisher : Elsevier - Science Direct
Journal : Neurocomputing, Volume 251, 16 August 2017, Pages 1-15
Abstract : Recently, Convolutional Neural Networks (CNNs) have become a popular choice for image classification tasks. Even so, it is almost infeasible to embed CNNs in resource-limited hardware (e.g. mobile devices) due to their extremely high memory requirements. To address this problem, several methods have been proposed to reduce the CNN memory footprint with minimal compromise in classification accuracy. In this paper, we propose a novel one-shot deep compression method based on the fuzzy quantity space to remove redundant CNN weights. Experiments on three public datasets (i.e. MNIST, CIFAR-10 and ImageNet) showed that the proposed approach is able to compress a CNN by up to 14× with minimal loss of classification accuracy.

We also present the first attempt to train an end-to-end fuzzy qualitative deep compression model on the fine-art painting classification problem. We argue that the classification of fine-art collections is a more challenging problem than object classification, because some of the artworks are neither representational nor figurative, and might even require imagination to recognize. Hence, a question arises as to whether a machine is able to capture imagination in paintings. One way to find out is to train a deep model and then visualize the low-level to high-level features it learns. Extensive experiments were conducted on the recently released public Wikiart paintings dataset, which consists of more than 80,000 paintings, and our solution achieves state-of-the-art overall performance (68%). The source code and models are available at: https://github.com/cs-chan/fuzzyDCN.
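The abstract's central idea, removing redundant CNN weights to shrink the model's memory footprint, can be illustrated with a generic magnitude-based pruning sketch. Note this is not the paper's fuzzy-quantity-space method (which is not detailed here); it is a minimal stand-in showing how zeroing small-magnitude weights yields a compression ratio, with the `prune_weights` helper and the 90% sparsity level chosen for illustration only:

```python
import numpy as np

def prune_weights(weights, sparsity=0.9):
    """Zero out the smallest-magnitude entries of a weight tensor.

    Keeps roughly the top (1 - sparsity) fraction of weights by magnitude;
    the returned boolean mask marks which weights survive.
    """
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# A stand-in for one convolutional layer's weights.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)

pruned, mask = prune_weights(w, sparsity=0.9)
kept = mask.mean()                      # fraction of weights retained
ratio = mask.size / mask.sum()          # naive compression ratio if zeros are not stored
print(f"kept {kept:.0%} of weights, ~{ratio:.0f}x compression")
```

Storing only the surviving weights (e.g. in a sparse format) is what turns the zeroed entries into an actual memory saving; the paper's 14× figure comes from its own, more sophisticated one-shot scheme.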