Download ISI English Article No. 34262

English Title
Simple measures and complex structures: Is it worth employing a more complex model of personality in Big Five inventories?
Article Code: 34262
Year of Publication: 2013
Length: 10 pages (English PDF)
Source

Publisher : Elsevier - Science Direct

Journal : Journal of Research in Personality, Volume 47, Issue 5, October 2013, Pages 599–608

English Keywords
Personality; Big Five structure; Confirmatory factor analysis; Exploratory Structural Equation Modeling; Construct validity; Multitrait–multimethod; NEO PI-R; 16PF

English Abstract

The poor performance of five-factor personality inventories in confirmatory factor analyses (CFAs) prompted some researchers to question their construct validity. Others doubted the CFA’s suitability and suggested applying Exploratory Structural Equation Modeling (ESEM). The question arises as to what impact the application of either method has on the construct validity of personality inventories. We addressed this question by applying ESEM and CFA to construct better-fitting, though more complex, models based on data from two questionnaires (NEO PI-R and 16PF). Generally, scores derived from either method did not differ substantially. When applying ESEM, convergent validity declined but discriminant validity improved. When applying CFA, convergent and discriminant validity decreased. We conclude that using current personality questionnaires that utilize a simple structure is appropriate.

English Introduction

Researchers who investigate normal adult personality have reached a consensus on five broad factors, often called the ‘Big Five’ (Goldberg, 1990), and on their conceptual definitions (Digman, 1990, McCrae and Costa, 1999 and Norman, 1963). These factors are known as Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness, although other terms are used as well. This general consensus has allowed for cumulative research and meta-analyses of important aspects of the construct, including the development of personality over an individual’s lifespan (Judge et al., 1999 and Terracciano et al., 2010), differences between groups (Goldberg et al., 1998 and Schmitt et al., 2008), the existence of a general factor of personality (Musek, 2007 and van der Linden et al., 2010), the prediction of external criteria (Grucza and Goldberg, 2007 and Hurtz and Donovan, 2000), and many more.

In research and practice, personality is predominantly assessed using self-report questionnaires. Many of these questionnaires contain items that contribute to one of many first-order scales that are combined to represent the Big Five factors. The internal structure of personality, i.e., the assignment of subscales to the five factors, has commonly been examined using exploratory factor analysis (EFA; Aluja et al., 2005b, Cattell and Cattell, 1995 and Costa and McCrae, 1992b). This assignment is extremely important because it forms the basis for obtaining scores for the higher-order personality factors. In general, a simple structure (Thurstone, 1947), in which each first-order scale is uniquely assigned to only one of the Big Five factors, is assumed to be appropriate. As in many other research areas in which constructs are assessed using self-report questionnaires, CFAs were eventually applied to personality data. The results of these studies were largely discouraging.
The CFA model fit indices frequently exceeded proposed cut-off values for acceptable model fits and, based on CFA standards, did not confirm the simple structure (Church and Burke, 1994, Hopwood and Donnellan, 2010, McCrae et al., 1996 and Vassend and Skrondal, 2011). Several cross loadings (i.e., links between first-order scales and factors other than the originally postulated higher-order personality factors) usually needed to be included in the model to achieve an acceptable fit. The more complex models, however, were difficult to interpret and often displayed a poorer fit in cross-validation samples (e.g., Church and Burke, 1994 and Hopwood and Donnellan, 2010). This has raised concerns about whether the currently proposed composition of the broad factors provides an adequate assessment of an individual’s personality. These higher-order scores are commonly used in research studies and in practical applications of personality instruments. Thus, confidence is required regarding the suitability of the Big Five factors as a ‘common language’ for describing personality. Adding additional cross loadings as suggested by CFA also changes the meaning of the observed scores. Consequently, one must ask how the construct validity of personality instruments is affected when subscales contribute to more than one broad factor. In the present study we address these concerns in two ways. First, we determine the ‘change of scores’, which in this examination refers to a difference in the relative position of an individual within a sample on the trait continuum, measured as the correlation between the original scores and the scores obtained after incorporating the CFA cross loadings. Second, we examine the impact of the modified models on the instruments’ construct validity.
To complement our investigation and to consider more recent trends in factor analysis, we also apply Exploratory Structural Equation Modeling (ESEM; Asparouhov & Muthén, 2009), a method that integrates CFA and EFA. ESEM is less restrictive than CFA because it does not constrain the non-target loadings to be zero. In contrast to CFA, an ESEM model can be specified solely with regard to the number of factors; further restrictions can then be added and tested using chi-square difference tests. In contrast to EFA, ESEM provides typical CFA parameters, such as standard errors and goodness-of-fit statistics, as well as the possibility of testing for measurement invariance between groups and across time (Asparouhov & Muthén, 2009). Owing to these capabilities, ESEM has been recommended for the psychometric evaluation of psychological instruments (Marsh, Liem, Martin, Morin, & Nagengast, 2011). We applied CFA and ESEM to data from 620 respondents who completed two established personality questionnaires (the NEO PI-R and the 16PF questionnaire). Using two different sets of modification criteria to determine cross loadings when conducting the CFA, we generated two more complex models for each instrument. We computed scores based on these modified CFA models using two different approaches: (a) we applied the scoring rules for the instrument provided in the respective test manual but added the additional subscales identified in the CFA, and (b) we used the factor scores obtained from the respective modified CFA model. The first approach mirrors current usage in research, in which manifest, rather than latent, Big Five scores are employed (Barrick and Mount, 1996, Grucza and Goldberg, 2007, Hurtz and Donovan, 2000 and Salgado, 2003). The second approach uses scores that correspond more directly with the CFA models. For ESEM, we used the factor scores obtained from applying the method to both instruments.
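The contrast between the two scoring approaches can be illustrated with a minimal numpy sketch. All numbers here are made up for illustration (six hypothetical subscales, two factors, one assumed cross loading); this is not the paper's actual model or data. It contrasts (a) manifest unit-weighted scale scores, with and without a cross-loading subscale, against (b) regression-method (Thurstone) factor scores, and then computes the kind of 'change of scores' correlation described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: six standardized first-order subscales, two factors.
# Subscales 0-2 load on factor A, subscales 3-5 on factor B; the CFA is
# assumed to have identified one cross loading (subscale 2 also on B).
n = 1000
F = rng.standard_normal((n, 2))                  # latent factor scores
L = np.array([[.7, 0], [.6, 0], [.5, .3],        # loading matrix with
              [0, .7], [0, .6], [0, .5]])        # one cross loading
X = F @ L.T + rng.standard_normal((n, 6)) * .5   # simulated subscale scores
Z = (X - X.mean(0)) / X.std(0)                   # standardize

# (a) Manifest scoring: original unit weights vs. weights extended by the
#     cross-loading subscale (the manual-plus-CFA approach).
score_orig = Z[:, 3:6].mean(1)                   # factor B, simple structure
score_cfa = Z[:, 2:6].mean(1)                    # factor B plus cross loading

# (b) Regression (Thurstone) factor scores: F_hat = Z R^-1 Lambda
R = np.corrcoef(Z, rowvar=False)
fs = Z @ np.linalg.solve(R, L)                   # n x 2 factor score matrix

# 'Change of scores': correlation of original with modified scores
r_manifest = np.corrcoef(score_orig, score_cfa)[0, 1]
r_factor = np.corrcoef(score_orig, fs[:, 1])[0, 1]
print(round(r_manifest, 2), round(r_factor, 2))
```

With loadings of this size, both correlations come out high, mirroring the paper's finding that scores from the simple and the more complex structures do not differ substantially.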
To assess the relative score changes, we computed correlations between scores from the original model and the scores obtained from the CFA and ESEM models. The results of this analysis support a more nuanced discussion of the discrepancy between current personality theories and the more complex model of personality suggested by the CFA. Applying ESEM offers further insight into how Big Five scores change when they are based on a more recent factor-analytical method. To determine the impact on the questionnaires’ construct validity, we applied the multitrait–multimethod (MTMM) approach, developed by Campbell and Fiske (1959), to the original model as well as to the models proposed by CFA and ESEM. A comparison of the MTMM results across the models showed the extent to which the relationships within and between the five factors of both instruments changed as one moved from a simple to a more complex structure, thus determining changes in convergent and discriminant validity. Previous studies have focused mainly on investigating the congruence between results obtained from the EFA and CFA of an instrument without examining the impact of the observed discrepancies on scale scores and construct validity beyond the internal structure (e.g., Aluja et al., 2005a, Borkenau and Ostendorf, 1990 and McCrae et al., 1996). In other studies, CFAs were applied to several instruments, but it was not determined how the relationships between the constructs were affected by the changes in the model proposed by the CFAs (e.g., Church and Burke, 1994 and Hopwood and Donnellan, 2010). In our study, we address those gaps by determining how the scores of, and the relationships between, personality scales change when the internal structure is more complex, as suggested by CFA. As a result, we extend the examination of construct validity beyond the internal structure to focus on changes in convergent and discriminant validity within and across the two instruments.
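The MTMM logic of Campbell and Fiske (1959) can be sketched in a few lines of numpy. The data below are synthetic and purely illustrative (five independent traits, two instruments labelled A and B standing in for the NEO PI-R and the 16PF); convergent validities are the same-trait, different-method correlations on the validity diagonal, while the remaining heteromethod correlations should stay low as evidence of discriminant validity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: five independent traits measured by two instruments,
# each adding method-specific noise to the same underlying trait scores.
n = 500
true = rng.standard_normal((n, 5))
inst_a = true + rng.standard_normal((n, 5)) * 0.6
inst_b = true + rng.standard_normal((n, 5)) * 0.6

# Full 10 x 10 multitrait-multimethod correlation matrix
mtmm = np.corrcoef(np.hstack([inst_a, inst_b]), rowvar=False)

# Heteromethod block: instrument A traits (rows) vs. instrument B traits
hetero = mtmm[:5, 5:]

# Convergent validities: same trait, different method (validity diagonal)
convergent = np.diag(hetero)

# Discriminant evidence: different trait, different method should be low
discriminant = np.abs(hetero[~np.eye(5, dtype=bool)])

print(round(convergent.mean(), 2), round(discriminant.mean(), 2))
```

Re-running this comparison on scores from the original, CFA-modified, and ESEM models (as the study does) shows how the validity diagonal and the off-diagonal entries shift as cross loadings enter the structure.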
The study thus follows a suggestion made, among others, by Hopwood and Donnellan (2010) that “there is a need to document that misspecifications have practical or substantive consequences beyond simply contributing to model misfit” (p. 343). Considering the complexities and difficulties of identifying the correct model in CFA on the basis of modification indices and other model assessment criteria (Fan and Sivo, 2007 and MacCallum et al., 1992), we do not aim to determine the “true” model of personality. Instead, we provide an empirical illustration, demonstrating by way of example the impact that this added complexity would have on scores and construct validity. By also applying ESEM to both instruments, we shed light on how this more recent and increasingly used method may affect the resulting factor scores and, subsequently, the instruments’ construct validity.