A support vector regression based prediction model of affective responses for product form design
Article code | Publication year | English article length |
---|---|---|
25280 | 2010 | 8-page PDF |
Publisher : Elsevier - Science Direct
Journal : Computers & Industrial Engineering, Volume 59, Issue 4, November 2010, Pages 682–689
English Abstract
In this paper, a state-of-the-art machine learning approach known as support vector regression (SVR) is introduced to develop a model that predicts consumers’ affective responses (CARs) for product form design. First, pairwise adjectives were used to describe the CARs toward product samples. Second, the product form features (PFFs) were examined systematically and stored as either continuous or discrete attributes. The adjective evaluation data of consumers were gathered from questionnaires. Finally, prediction models based on different adjectives were constructed using SVR, trained on a series of PFFs and the average CAR rating of all the respondents. The real-coded genetic algorithm (RCGA) was used to determine the optimal training parameters of SVR. The predictive performance of the SVR with RCGA (SVR–RCGA) is compared to that of SVR with 5-fold cross-validation (SVR–5FCV) and a back-propagation neural network (BPNN) with 5-fold cross-validation (BPNN–5FCV). The experimental results using the data sets on mobile phones and electronic scooters show that SVR performs better than BPNN. Moreover, the RCGA for optimizing the training parameters of SVR is more convenient for practical usage in product form design than the time-consuming CV.
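As a rough illustration of the modeling step described in the abstract, the sketch below fits an ɛ-SVR with an RBF kernel to a synthetic set of product form features and averaged rating values and reports a hold-out RMSE. It uses scikit-learn rather than the authors’ implementation, and the data, feature counts, and parameter values are assumptions for demonstration only.

```python
# Minimal sketch (not the authors' code): an SVR model mapping product form
# features (PFFs) to an average consumer affective response (CAR) rating.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Hypothetical data set: 30 product samples, each described by 6 PFFs
# (continuous dimensions and label-encoded discrete form types), with the
# averaged adjective rating on a semantic scale as the target value.
X = rng.normal(size=(30, 6))
y = 0.8 * X[:, 0] - 0.5 * X[:, 2] ** 2 + rng.normal(scale=0.1, size=30)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# epsilon-SVR with an RBF kernel; C, epsilon, and gamma would normally be tuned.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.1, gamma="scale"))
model.fit(X_train, y_train)

rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print(f"hold-out RMSE: {rmse:.3f}")
```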
English Introduction
The basic assumption for modeling consumers’ affective responses (CARs) is that there exists a cause-and-effect relationship between CARs and the product form features (PFFs); that is, specific PFFs will produce different subjective feelings (Han & Hong, 2003). Therefore, by analyzing the relationship between CARs and the PFFs in a systematic way, a prediction model can be constructed to facilitate product development. With the aid of the prediction model, a product form designed for specific consumer groups can be produced more objectively and efficiently, instead of relying only on the designers’ intuition and experience. The crux of constructing such a prediction model is how to deal with the inter-attribute correlations that exist between product attributes and how to reconcile the nonlinear properties of these attributes (Park & Han, 2004; Shimizu & Jindo, 1995).

There have been some attempts to define the relationship between the CARs and the PFFs. The most noted research is Kansei engineering (Nagamachi, 1995). The most widely adopted techniques in the product design field, such as multiple regression analysis (Park & Han, 2004) and quantification theory type I (Jindo, Hirasago, & Nagamachi, 1995), depend heavily on an assumption of linearity and, therefore, cannot deal effectively with nonlinear relationships. In addition, prior to establishing a mathematical model, data simplification and variable screening are often needed to obtain better results (Han, Yun, Kim, & Kwahk, 2000). Fuzzy regression analysis (Shimizu & Jindo, 1995) and other methods suffer from the same shortcomings (Park & Han, 2004).

To deal with the nonlinearity of the many-to-many mapping between variables, the neural network (NN) is a good candidate for building the prediction model. A few studies have illustrated the use of NNs in the product design field. For example, Hsiao and Huang (2002) demonstrated the ability of NNs to deal with nonlinear relationships between the PFFs. In later research by Hsiao and Tsai (2005), an NN was used as part of a hybrid framework for a product form search. However, NNs suffer from a number of shortcomings. An NN is considered a “black box” that requires numerous control parameters, and it is difficult to obtain a stable solution. Another drawback of NNs, shared by all types of black-box models, is that the resulting model and its parameters are difficult to interpret. In addition, NNs follow the empirical risk minimization (ERM) approach, which is commonly employed by conventional machine learning methods. In the ERM approach, a measure of the prediction error on the training set outputs, such as the root mean square error (RMSE), is minimized. Since ERM is based exclusively on the training set error, it does not guarantee that the resulting model will give good generalization performance.

Vapnik (1995) developed a new kind of learning algorithm called the support vector machine (SVM). SVM follows the principle of structural risk minimization (SRM), seeking to minimize an upper bound of the generalization error rather than the training error (the principle followed by NNs). SVM has been shown to provide better performance than traditional learning techniques (Burges, 1998). SVM’s remarkable performance on sparse and noisy data makes it a first choice in a number of real-world applications such as pattern recognition (Burges, 1998) and bioinformatics (Scholkopf, Guyon, & Weston, 2003).
SVM is also known for its elegance in solving nonlinear problems with the “kernel” technique, which automatically carries out a nonlinear mapping to a feature space. With the introduction of an ɛ-insensitive loss function, SVM can be extended to solve function estimation problems; this extension is known as support vector regression (SVR). The properties of SRM equip the SVR model with greater potential for generalizing the input–output relationship learnt during the training process. SVR has also been shown to exhibit excellent performance, which benefits from its roots in SVM (Vapnik, Golowich, & Smola, 1997). Despite being endowed with a number of attractive properties, SVR has yet to be applied widely in the field of product design. In this paper, SVR is introduced for the purpose of developing a model that effectively predicts CARs.

The remainder of the paper is organized as follows: Section 2 gives an introduction to SVR. Section 3 presents the proposed prediction model of CARs for product form design. Section 4 demonstrates the experimental results using mobile phones and electronic scooters as examples. Section 5 presents some brief conclusions. Finally, several suggestions for future research to extend this study are described in Section 6.
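To make the ɛ-insensitive loss and the kernel choice concrete, the short sketch below (not taken from the paper) spells out the loss function and shows how the polynomial and RBF kernels discussed here would be selected in scikit-learn’s SVR; the specific parameter values are placeholders.

```python
# Illustrative sketch of the epsilon-insensitive loss used by SVR: errors inside
# the epsilon tube contribute nothing, errors outside grow linearly.
import numpy as np
from sklearn.svm import SVR

def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """L_eps(y, f(x)) = max(0, |y - f(x)| - eps)."""
    return np.maximum(0.0, np.abs(np.asarray(y_true) - np.asarray(y_pred)) - eps)

# The `kernel` argument switches the implicit nonlinear mapping to feature space;
# parameter values are placeholders, not the paper's tuned settings.
svr_poly = SVR(kernel="poly", degree=3, C=100.0, epsilon=0.1)   # polynomial kernel
svr_rbf = SVR(kernel="rbf", gamma=0.01, C=100.0, epsilon=0.1)   # RBF kernel; gamma = 1 / (2 * sigma^2)
```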
English Conclusion
In this study, SVR is used to develop the prediction model of CARs. The prediction model can be constructed using the PFFs as input data and the adjective evaluation scores gathered from questionnaires as output values. The optimal training parameters of the prediction model were determined by the RCGA, which minimizes the RMSE of the 5-fold CV error on the training data. The performance of SVR–RCGA using two different kernel functions, polynomial and RBF, was compared. Using the data set of mobile phone design, for SVR–RCGA constructed with a polynomial kernel, the optimal parameter set (C, ɛ, ρ) was (2475.28, 0.142, 4.98) and the RMSE of the model was 0.181. For SVR–RCGA constructed with an RBF kernel, the optimal parameter set (C, ɛ, σ²) was (4309.78, 0.194, 416.85) and the RMSE of the model was 0.078. Therefore, the RBF kernel showed better performance for SVR–RCGA. Also, according to the experimental results using the data sets of mobile phones and electronic scooters, the RCGA is capable of determining the optimal training parameters very effectively. Compared to optimizing the training parameters using a grid search combined with 5-fold CV (SVR–5FCV), the RCGA is more computationally efficient. The resulting SVR–RCGA model also has better predictive performance than those of SVR–5FCV and BPNN–5FCV. As a consequence, SVR–RCGA with an RBF kernel is more suitable for predicting CARs for product form design.
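For readers who want to experiment with this tuning strategy, the following is a minimal sketch, assuming scikit-learn’s SVR and a hand-rolled steady-state real-coded GA in place of the authors’ RCGA. Population size, operators, parameter bounds, and data are illustrative assumptions, and scikit-learn’s gamma corresponds to 1 / (2σ²) in the paper’s notation.

```python
# Minimal real-coded GA sketch (an illustration, not the authors' RCGA) that searches
# (C, epsilon, gamma) for an RBF-kernel SVR by minimizing the 5-fold CV RMSE.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def cv_rmse(params, X, y):
    """5-fold cross-validated RMSE of an RBF-kernel SVR with the given parameters."""
    C, eps, gamma = params
    scores = cross_val_score(SVR(kernel="rbf", C=C, epsilon=eps, gamma=gamma),
                             X, y, cv=5, scoring="neg_root_mean_squared_error")
    return -scores.mean()

def rcga(X, y, bounds, pop_size=20, generations=30, mutation_scale=0.1):
    lo, hi = np.array([b[0] for b in bounds]), np.array([b[1] for b in bounds])
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    fitness = np.array([cv_rmse(ind, X, y) for ind in pop])
    for _ in range(generations):
        # Tournament selection of two parents.
        i, j = rng.choice(pop_size, 2, replace=False), rng.choice(pop_size, 2, replace=False)
        p1 = pop[i[np.argmin(fitness[i])]]
        p2 = pop[j[np.argmin(fitness[j])]]
        # Arithmetic (blend) crossover followed by Gaussian mutation, clipped to bounds.
        alpha = rng.uniform(size=len(bounds))
        child = alpha * p1 + (1 - alpha) * p2
        child += rng.normal(scale=mutation_scale * (hi - lo))
        child = np.clip(child, lo, hi)
        # Steady-state replacement of the current worst individual.
        child_fit = cv_rmse(child, X, y)
        worst = np.argmax(fitness)
        if child_fit < fitness[worst]:
            pop[worst], fitness[worst] = child, child_fit
    best = np.argmin(fitness)
    return pop[best], fitness[best]

# Hypothetical training data standing in for the PFF / averaged CAR-rating samples.
X = rng.normal(size=(40, 6))
y = X[:, 0] - 0.3 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=40)

bounds = [(1.0, 5000.0), (0.01, 0.5), (1e-4, 1.0)]  # search ranges for C, epsilon, gamma
best_params, best_rmse = rcga(X, y, bounds)
print("best (C, epsilon, gamma):", best_params, "CV RMSE:", round(best_rmse, 3))
```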