Download English ISI article No. 118792
English title
Multi-modality weakly labeled sentiment learning based on Explicit Emotion Signal for Chinese microblog
Article code · Publication year · Length
118792 · 2018 · 30 pages, PDF (English)
Source

Publisher: Elsevier - Science Direct

Journal: Neurocomputing, Volume 272, 10 January 2018, Pages 258-269

English keywords
Explicit Emotion Signal; Multi-modality sentiment learning; Cross media; Weakly labeled sample; Domain transfer;
Article preview

English abstract

Understanding the sentiments of users from cross-media content that contains texts and images is an important task for many social network applications. However, due to the semantic gap between cross-media features and sentiments, machine learning methods need many human-labeled samples. Furthermore, for each kind of media content, many new human-labeled samples must constantly be added because new expressions of sentiment keep appearing. Fortunately, there are some emotion signals, such as emoticons, that denote users' emotions in cross-media content. In order to use these weak labels to build a unified multi-modality sentiment learning framework, we propose an Explicit Emotion Signal (EES) based multi-modality sentiment learning approach that exploits a huge number of weakly labeled samples in sentiment learning. Our approach has three advantages. Firstly, only a few human-labeled samples are needed to reach the same performance as traditional machine-learning-based sentiment prediction approaches. Secondly, the approach is flexible and can easily combine text- and vision-based sentiment learning through deep neural networks. Thirdly, because many weakly labeled samples can be used in EES, the trained model is more robust under domain transfer. In this paper, we first investigate the correlation between sentiments and emoticons and choose emoticons as the Explicit Emotion Signals in our approach; we then build a two-stage multi-modality sentiment learning framework based on Explicit Emotion Signals. Our experimental results show that our approach not only achieves the best performance but also needs only 3% and 43% of the training samples to match the performance of the Visual Geometry Group (VGG) model on images and the Long Short-Term Memory (LSTM) model on texts, respectively.
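The core idea of the abstract — treating emoticons as Explicit Emotion Signals that weakly label posts — can be sketched as follows. This is a minimal illustration, not the paper's actual method: the emoticon-to-sentiment lexicon and the agreement rule below are hypothetical assumptions, and the paper's framework additionally trains deep models on the resulting weak labels.

```python
# Hypothetical emoticon-to-sentiment lexicon; the paper derives its own
# mapping by investigating the correlation between sentiments and emoticons.
EMOTICON_SENTIMENT = {
    "[haha]": "positive",
    "[heart]": "positive",
    "[cry]": "negative",
    "[angry]": "negative",
}

def weak_label(post: str):
    """Return a weak sentiment label inferred from emoticons, or None.

    A post is kept as a weakly labeled sample only when every emoticon
    it contains points to the same sentiment; conflicting or absent
    signals yield None and the post is discarded.
    """
    votes = [s for emo, s in EMOTICON_SENTIMENT.items() if emo in post]
    if not votes:
        return None  # no explicit emotion signal in this post
    if all(v == votes[0] for v in votes):
        return votes[0]
    return None  # conflicting signals, e.g. [haha] and [angry] together

posts = [
    "Great news today [haha]",
    "So sad about this [cry]",
    "Mixed feelings [haha] [angry]",
    "No emoticon here",
]
# Build the weakly labeled training pool, dropping unlabeled posts.
labeled = [(p, weak_label(p)) for p in posts if weak_label(p) is not None]
```

Such weakly labeled pools are cheap to collect at scale, which is what lets the two-stage framework in the paper get by with far fewer human-labeled samples.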