Download English ISI article no. 138497
Article title

A novel type of activation function in artificial neural networks: Trained activation function
Article code: 138497
Publication year: 2018
English article length: 22 pages (PDF)
Source

Publisher: Elsevier - ScienceDirect

Journal: Neural Networks, Volume 99, March 2018, Pages 148-157

Keywords
Activation function; Trained activation function; Artificial neural network; Random weight artificial neural network

English abstract

Determining the optimal activation function in artificial neural networks is an important issue because it is directly linked to the success rates obtained. Unfortunately, there is no way to determine it analytically; the optimal activation function is generally found by trial and error or by tuning. This paper presents a simpler and more effective approach to determining the optimal activation function. In this approach, which can be called the trained activation function, an activation function is trained for each particular neuron by linear regression. This training is performed on a training dataset consisting of the sums of the inputs of each neuron in the hidden layer and the desired outputs. In this way, a different activation function is generated for each neuron in the hidden layer. The approach was employed in a random weight artificial neural network (RWN) and validated on 50 benchmark datasets. The success rates achieved by RWNs using trained activation functions were higher than those achieved by RWNs using traditional activation functions. The results show that the proposed approach is a successful, simple and effective way to determine the optimal activation function, instead of trial and error or tuning, in both randomized single-layer and multilayer ANNs.
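The procedure outlined in the abstract can be sketched in NumPy. The following is a minimal, hypothetical reading of the idea, not the paper's exact algorithm: input weights and biases of a single-hidden-layer RWN are drawn at random and kept fixed; for each hidden neuron, a linear activation f_j(z) = a_j·z + c_j is fitted by least-squares regression between that neuron's pre-activation sums and the desired outputs; the output weights are then solved in closed form, as is usual in random weight networks. The toy dataset and all sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical, for illustration only)
X = rng.uniform(-1.0, 1.0, size=(200, 4))
y = np.sin(X.sum(axis=1))

n_hidden = 20
# Random, fixed input weights and biases: the "random weight" part of an RWN
W = rng.standard_normal((4, n_hidden))
b = rng.standard_normal(n_hidden)

Z = X @ W + b  # sums of inputs (pre-activations) for each hidden neuron

# Train one activation per neuron: fit f_j(z) = a_j*z + c_j by linear
# regression of the desired outputs on that neuron's pre-activation sums.
A = np.empty(n_hidden)
C = np.empty(n_hidden)
for j in range(n_hidden):
    a_j, c_j = np.polyfit(Z[:, j], y, 1)  # slope and intercept
    A[j], C[j] = a_j, c_j

H = Z * A + C  # hidden-layer output under the trained activations

# Output weights solved by least squares, as in ELM-style random weight nets
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
mse = float(np.mean((H @ beta - y) ** 2))
print("train MSE:", mse)
```

Note a caveat of this simplified sketch: with purely linear per-neuron activations the whole network collapses to an affine model of the inputs, so the paper's actual trained activations presumably take a richer form; the sketch only illustrates the per-neuron regression step itself.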