Generalized recurrent neural networks for ϵ-insensitive support vector regression
|Article code||Publication year||English article||Persian translation||Word count|
|25871||2012||8-page PDF||Available to order||2947 words|
Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)
Journal : Mathematics and Computers in Simulation, Volume 86, December 2012, Pages 2–9
In this paper, a generalized recurrent neural network is proposed for solving ϵ-insensitive support vector regression (ϵ-ISVR). The ϵ-ISVR is first formulated as a convex non-smooth programming problem, and then a generalized recurrent neural network with lower model complexity is designed for training the support vector machine. Furthermore, simulation results are given to demonstrate the effectiveness and performance of the proposed neural network.
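The non-smoothness of the ϵ-ISVR problem comes from the ϵ-insensitive loss, which ignores residuals inside a tube of width ϵ and penalizes larger errors linearly. A minimal sketch of this loss (the function name and default ϵ are illustrative assumptions, not from the paper):

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """ϵ-insensitive loss: residuals with |error| <= eps cost nothing;
    larger errors are penalized linearly. Non-differentiable at |error| = eps,
    which is why the training problem is convex but non-smooth."""
    return np.maximum(np.abs(y_true - y_pred) - eps, 0.0)

# A residual of 0.03 lies inside the eps = 0.1 tube and costs nothing,
# while a residual of 0.5 costs 0.5 - 0.1 = 0.4.
small = eps_insensitive_loss(np.array([1.0]), np.array([1.03]))
large = eps_insensitive_loss(np.array([0.0]), np.array([0.5]))
```

The kink at |error| = ϵ is what rules out plain gradient-based networks and motivates the discontinuous activation function used in the paper.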
Support vector machines (SVMs) are powerful tools for data classification and regression. In recent years, many fast algorithms for SVMs have been developed. Mangasarian proposed a finite Newton algorithm for SVM learning, and Keerthi and DeCoste introduced a modified finite Newton algorithm to speed it up for the fast solution of large-scale linear SVMs. More recently, as a software- and hardware-implementable approach, recurrent neural networks for solving linear and nonlinear optimization problems, together with their engineering applications, have been widely developed. Compared with traditional numerical optimization algorithms, these neural networks converge quickly in real-time solution. In 1986, Tank and Hopfield first proposed a recurrent neural network for solving linear programming problems. In 1988, Kennedy and Chua introduced the dynamical canonical nonlinear programming circuit (NPC) for optimization, which uses a finite penalty parameter and can generate approximate optimal solutions. Wang and Xia proposed a primal-dual neural network for solving linear assignment problems. To obtain the optimal solutions of non-smooth optimization problems, Forti et al. proposed and investigated the generalized NPC (G-NPC), which can be regarded as a natural extension of the NPC. To reduce model complexity, one-layer recurrent neural networks with lower model complexity have been constructed for solving linear and nonlinear programming problems. This paper is concerned with a generalized recurrent neural network for ϵ-insensitive support vector regression. The global convergence of the proposed recurrent neural network is guaranteed using a Lyapunov-like method. Compared with the existing neural networks for support vector regression (SVR) learning, the proposed neural network has lower model complexity, yet remains efficient for SVR learning.
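The paper's network realizes a (sub)gradient flow on the non-smooth ϵ-ISVR objective. The exact model is not reproduced here; a minimal Euler-discretized sketch for a one-dimensional linear ϵ-SVR illustrates the idea (the toy data, step size, and penalty weight are illustrative assumptions):

```python
import numpy as np

# Toy data from a noiseless line y = 2x + 0.3 (assumed, for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, 40)
y = 2.0 * X + 0.3

eps, C, lr = 0.05, 1.0, 1e-3   # tube width, penalty weight, Euler step size

# Euler-discretized subgradient flow on the primal penalized objective
#   0.5 * w**2 + C * sum(max(|w*x_i + b - y_i| - eps, 0)).
# The sign-based subgradient below plays the role of the network's
# discontinuous activation function.
w, b = 0.0, 0.0
for _ in range(20000):
    r = w * X + b - y                                # residuals
    g = np.where(np.abs(r) > eps, np.sign(r), 0.0)   # subgradient of ϵ-loss
    w -= lr * (w + C * g @ X)                        # dw/dt = -(w + C Σ g_i x_i)
    b -= lr * C * g.sum()                            # db/dt = -C Σ g_i
```

The continuous-time analogue of this iteration is the dynamical system whose equilibria the paper's Lyapunov-like analysis shows to be globally attractive; here the state settles near the generating line, with the residuals held inside the ϵ-tube up to the regularization-induced shrinkage.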
Conclusion
In this paper, based on the non-smooth analysis and gradient method, a generalized recurrent neural network with a discontinuous activation function has been proposed for support vector regression training. Simulation results show that the neural network is efficient for support vector regression training.