A new measure of psychological misconceptions: Relations with academic background, critical thinking, and acceptance of paranormal and pseudoscientific claims
Publisher : Elsevier - Science Direct
Journal : Learning and Individual Differences, Volume 36, December 2014, Pages 9–18
Many studies of psychological misconceptions have used tests with methodological and psychometric shortcomings, creating problems for interpreting individual differences related to misconceptions. To address these problems, we developed the Test of Psychological Knowledge and Misconceptions (TOPKAM), administering it to two samples of psychology students. Results from the first study (N = 162) supported the TOPKAM's internal consistency and showed that the number correct on the TOPKAM was significantly predicted by measures of paranormal belief, faith in intuition, the ability to distinguish scientific fields and practices from pseudoscientific ones, and SAT scores. Scores on a measure of critical thinking dispositions in psychology also predicted TOPKAM scores. A second study (N = 178) supported the TOPKAM's test–retest reliability at four weeks and showed that TOPKAM scores were significantly predicted by the same critical thinking dispositions measure and by scores on a test of a critical thinking skill, argument analysis.
The study of misconceptions has become an important and frequently researched topic, partly because of the hope that science education can contribute to the rejection of incorrect but popular ideas. Several studies have shown that misconceptions regarding scientific issues are prevalent (e.g., Crowe & Miura, 1995; Swami et al., 2012). Of particular interest are the many studies suggesting that students are highly susceptible to psychological misconceptions (e.g., Brown, 1983; Kowalski & Taylor, 2009; Lamal, 1979; McKeachie, 1960; Standing & Huber, 2003; Vaughan, 1977). For example, students often believe incorrectly that people with schizophrenia have split personalities and that opposites tend to attract in romantic relationships. Because misconceptions are often resistant to traditional instruction (Best, 1982; Gardner & Dalsing, 1986; McKeachie, 1960; Vaughan, 1977), they are potentially an important obstacle to effective science teaching. Yet the actual frequency of misconceptions and our understanding of them are limited because most studies assessing misconceptions have used tests with methodological and psychometric shortcomings. The purpose of the present investigation is to report on the development and initial validation of a new psychological misconceptions test designed to remedy some of these problems. As part of its development, we investigated its relationship to several measures expected to be related to individual differences in learning that might further inform us about the nature of psychological misconceptions.

Taylor and Kowalski (2004, p. 15) defined misconceptions as “beliefs that are held contrary to known evidence.” In the case of psychological misconceptions, the relevant known evidence is high-quality research that supports well-established data and theories about human behavior and mental processes.
As such, psychological misconceptions are widely held beliefs that run contrary to the well-replicated findings of psychological science. For example, a recent book discusses many misconceptions based on commonsense psychology, including such paranormal claims as extrasensory perception, the claim that the mind leaves the body during an out-of-body experience, and other false beliefs commonly associated with pseudoscience (Lilienfeld, Lynn, Ruscio, & Beyerstein, 2010). Failure to reject these incorrect ideas may be due to a lack of the knowledge, the skills, or both needed to think scientifically about such questions. An alternative hypothesis is that individuals possess thinking styles and other enduring dispositions that incline them to endorse poorly supported claims. They may lack the interest or willingness to engage in the effortful processing and open-minded thinking needed to revise their incorrect beliefs. Or they may be less willing than other individuals to rely upon a rational, scientific approach to evidence. A third hypothesis is that both critical thinking (CT) knowledge/skills and thinking styles/dispositions are related to the endorsement of misconceptions. This view is consistent with cognitive-experiential self-theory (CEST), a dual-process theory proposed by Epstein (2008; Pacini & Epstein, 1999). According to CEST, people have an intuitive-experiential system that automatically learns from experience and is largely unconscious, and a second, rational-analytic system for engaging in verbal reasoning that is conscious, deliberate, and analytic. The knowledge acquired through the intuitive-experiential system is tacit and more resistant to change than the knowledge acquired through rational-analytic thinking.
Some dual-process theories associate intuitive thinking with processing in a heuristic-driven cognitive system called “System 1” and reflective thinking with an analytic system called “System 2” (Stanovich & West, 2000; see also Evans, 2010; Evans & Stanovich, 2013; Kahneman, 2011). From the perspective of CEST, we might expect people who endorse unsubstantiated claims to be more intuitively oriented, acquiring their misconceptions through experience and relying more on their tacit knowledge. They may also be less interested in seeking out new information that could disconfirm their experience-based knowledge and less inclined to analyze and reflect upon their misconceptions. The differences between intuitive-experiential thinking and rational-analytic thinking seem to parallel the origins of misconceptions versus scientifically supported beliefs. Misconceptions typically originate from such informal knowledge sources as everyday conversation, the media, works of fiction, and rumors (Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012), and in other cases derive from misinterpretations of personal experience (Hughes, Lyddy, & Lambe, 2013). This information is seldom supported by high-quality evidence and is tacitly accepted because it seems familiar or intuitively true. In contrast, claims that achieve the status of scientific knowledge usually develop through careful analysis of systematically collected observations, passing the effortful, deliberate scrutiny of researchers. Indeed, some research shows that people who hold beliefs that lack empirical support tend to adopt an intuitive approach in their thinking. Saher and Lindeman (2005) found that people who endorsed greater belief in complementary and alternative medicine (CAM), the paranormal, and magical food- and health-related practices showed more faith in intuition.
In contrast, those with a more rational thinking style showed less belief in the paranormal and in magical food- and health-related practices, but not less belief in CAM. These findings are consistent with a dual-process explanation, but no study has examined whether such explanations also apply to psychological misconceptions. Nevertheless, a full understanding of psychological misconceptions is not possible without a reliable and valid test that is free from problematic response biases (see the next section). To this end, we report on the development and preliminary validation of a new measure called the Test of Psychological Knowledge and Misconceptions (TOPKAM), designed to avoid some of the shortcomings of previous tests. We also investigate individual differences in CT skills and dispositions, belief in pseudoscientific and unsubstantiated claims, and academic background variables potentially related to belief in psychological misconceptions.

1.2. Review of misconceptions tests and their problems

Since the seminal psychological misconceptions test of Nixon (1925), most tests have employed a true–false (T/F) response format (e.g., Brown, 1983; Gardner & Dalsing, 1986; Griggs & Ransdell, 1987; Gutman, 1979; Kuhle et al., 2009; Lamal, 1979; McKeachie, 1960; Taylor & Kowalski, 2004; Vaughan, 1977). Many studies using the T/F format have used the Test of Common Beliefs (TCB) of Vaughan (1977), or items from it, to assess introductory psychology students' psychological misconceptions (e.g., Gardner & Dalsing, 1986; Griggs & Ransdell, 1987; Gutman, 1979; Kuhle et al., 2009; Landau & Bavaria, 2003). Each of the 80 T/F items on the TCB is scored as correct when answered false. The use of a T/F response format in misconceptions tests, especially when true responses are scored as misconceptions, can create problems when interpreting scores.
For example, a yea-saying response style (acquiescence) could lead to inflated estimates of respondents' susceptibility to misconceptions, whereas nay-saying (counteracquiescence) could deflate those estimates. Conversely, negatively keyed items could induce a response set in which respondents biased toward appearing more positive or agreeable produce inaccurate estimates of knowledge. In addition, a T/F format with correct items always keyed false could make it easier to guess correctly if respondents discern the pattern of correct answers in the test. Other researchers have criticized misconception items in the T/F format on the grounds that they constrain responses to be completely true or completely false, a constraint that does not accurately capture the difference between most misconceptions and scientifically supported ideas in psychology. For example, Brown (1984) provided several examples of misconception items written in language that allowed them to be interpreted as at least partly true. Ruble (1986) argued that because some items are too ambiguous to be answered as completely true or false, qualifiers should sometimes be used. Supporting this objection, Hughes, Lyddy, and Kaplan (2013) found that the language and response format of items in a misconceptions test affected the level of endorsement of misconceptions, with ambiguously phrased items yielding higher levels of misconceptions than non-ambiguously phrased items. Moreover, the T/F format used in many misconceptions tests is inconsistent with the provisional status of knowledge in science. Specifically, the inductive and informal reasoning used to build scientific theories is defeasible, often resulting in conclusions that are only tentative and qualified. Indeed, many psychological misconceptions contain a kernel of truth (Hughes, Lyddy, & Lambe, 2013; Lilienfeld et al., 2010).
For example, although the claim that some people are exclusively “left-brained” and others “right-brained” is false, it is at least partly true that the brain's two hemispheres subserve somewhat different functions. Yet another criticism of most T/F format tests is that they do not allow respondents to indicate that they do not know an answer. To control for this limitation, Gardner and Dalsing (1986) administered a 60-item version of the TCB to 531 college students in T/F format but added a third option of “don't know/no opinion.” They found that students chose this option 12.2% of the time. After discarding these responses and calculating misconceptions only from the remaining responses, they found that this change reduced the level of misconceptions by 8% on 14 common items. Although this strategy may control for guessing, it produces total test scores that are based on an unequal number of responses to items. Moreover, judging that one does not know an answer or has no opinion about a question is not necessarily equivalent to the more continuously varying judgment of one's ability to provide a correct answer. The ability to accurately assess the veracity of one's own knowledge is better viewed as a metacognitive dimension in which respondents judge the certainty of the correctness of their answers. Another potential problem is that responding with “no opinion” about a question might indicate a lack of motivation to answer the question. This ambiguity suggests the need to separate the assessment of a knowledge dimension underlying misconceptions from the metacognitive dimension reflected by confidence or certainty in a knowledge response. One study, conducted by Landau and Bavaria (2003), has assessed confidence on a continuous scale, asking respondents to rate their confidence after answering each question using a 5-point Likert scale. 
They found that respondents were significantly more confident on incorrect items (misconceptions) than on items they got correct, consistent with the hypothesis that most people are not aware that they are endorsing misconceptions. Few studies have dealt with the problems of the T/F format when misconceptions are always associated with a true response. In one study, Brown (1983) reworded 18 of 37 false items obtained from lists of misconceptions in instructional and other materials. He found that only 19 of the 37 (both true and false) items were missed by at least 50% of the students and concluded that misconceptions may be less frequent than supposed. In another study, Kowalski and Taylor (2009) developed a true–false instrument designed to measure adherence to psychological misconceptions along with knowledge of psychology. About half of their test items were false when correct, and misconceptions were intermixed with more conventional general psychology questions. Although their new test showed clear improvements over previous T/F misconception tests, Kowalski and Taylor did not report the reliability and validity of their new instrument, and did not assess guessing or other metacognitive aspects of response. Exploring another alternative to T/F misconception tests, McCutcheon (1991) developed a 62-item, multiple-choice test with response options that presented both factual and incorrect psychological information. Although the multiple-choice response format may have lowered the probability that respondents would guess half of the items correctly, wording of the response options was sometimes inconsistent and seemed to target different aspects of a psychological construct within the same item. Support for the validity of this misconceptions test (see also McCutcheon, 1991) came when McCutcheon, Apperson, Hanson, and Wynn (1992) found that performance on the Watson-Glaser Critical Thinking Test and GPA predicted performance on this test. 
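The guessing advantage of a two-option T/F format over a four-option multiple-choice format such as McCutcheon's can be made concrete with a simple binomial calculation. This is our own illustration, not an analysis from any of the studies reviewed; the 62-item length is taken from McCutcheon (1991), and the "half the items correct" benchmark echoes the concern discussed above:

```python
from math import comb

def p_score_at_least(n_items: int, k_options: int, threshold: int) -> float:
    """Probability that pure guessing yields at least `threshold` correct
    answers on `n_items` questions with `k_options` equally likely choices."""
    p = 1 / k_options
    return sum(
        comb(n_items, r) * p**r * (1 - p) ** (n_items - r)
        for r in range(threshold, n_items + 1)
    )

# Chance of a pure guesser getting at least half of 62 items correct:
tf = p_score_at_least(62, 2, 31)  # true-false: a bit over 0.5
mc = p_score_at_least(62, 4, 31)  # four options: well under 0.0001
```

Under a T/F format, a respondent who knows nothing still scores at or above 50% correct more than half the time, whereas with four response options that outcome is vanishingly rare, which is why the multiple-choice format reduces, but does not eliminate, the guessing problem.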
Taylor and Kowalski (2004) similarly found that performance on their misconceptions test was positively correlated with six items from the Scottsdale Critical Thinking Test. The results of these studies support the hypothesis that endorsing misconceptions is associated with poorer CT skills, but they do not address other aspects of CT, such as thinking dispositions and metacognition (Bensley, 2011a; Halpern, 1998). In a more recent attempt to rectify the problems of T/F tests, Gardner and Brown (2013) developed a new test of psychological misconceptions based on the 50 Great Myths of Popular Psychology of Lilienfeld et al. (2010). They worded some misconceptions as true statements and others as false statements to examine the effect of the truth value of item wording. To take into account the fact that misconceptions are not completely false, the test employs a Likert-type scale assessing endorsement of misconceptions on a scale ranging from “completely false” to “completely true.” Furthermore, to take guessing into account, the test allows respondents to report that they did not know the answer, using the “don't know/no opinion” option of Gardner and Dalsing (1986). Although their test showed good internal consistency, problems remain with regard to interpreting “don't know/no opinion” responses.

1.3. Development of a new test of psychological misconceptions

We developed the Test of Psychological Knowledge and Misconceptions (TOPKAM) to address the limitations of earlier misconceptions tests (Bensley & Lilienfeld, 2010). To reduce potential response bias associated with the T/F format, we constructed the test in a forced-choice format in which an evidence-based response option representing factual knowledge from psychology is counterposed against an alternative corresponding to the misconception. Each item presents a common misconception paired with an evidence-based response option contradicting the misconception, based on literature reviews found in the Lilienfeld et al.
(2010) book and other sources. For example, a common misconception discussed in the Lilienfeld et al. (2010) book is that venting anger is a good way to control it. This was expressed in the TOPKAM false option as “It is better to express your anger or ‘blow off steam’ than to hold it in.” Contrary to this false option, we constructed the correct option, “It is better to control the expression of your anger,” consistent with the research (Bushman, Baumeister, & Strack, 1999). To address the fact that psychological knowledge is tentative and rarely completely true or false, the TOPKAM's general instructions ask test takers to answer questions by judging which option is “best” in each question. Likewise, question stems ask them to select the option that is “most true.” To address the problem of guessing, and the unequal number of responses comprising test scores when “don't know/no opinion” responses are eliminated, the TOPKAM treats guessing as part of a separate dimension of certainty (analyses of correlations of scores on this dimension are reported in separate manuscripts). Specifically, respondents rate the certainty of the correctness of their answer after each question. To evaluate the psychometric quality of the TOPKAM, we used two samples to assess its reliability and validity. We examined its internal consistency, test–retest reliability, and concurrent validity with measures of individual-difference variables thought to be related to misconceptions. In the first study, we examined the TOPKAM's concurrent validity by administering it with measures of knowledge of science, pseudoscience, paranormal belief, and CT dispositions. We also examined its relation to academic background variables, such as GPA, SAT scores, and number of course credits earned, all of which would ostensibly be related to performance on a knowledge-based test.
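The two-dimensional scoring scheme described above, a forced-choice knowledge score kept separate from a per-item certainty rating, can be sketched as follows. The function, item keys, and response data here are invented for illustration and are not taken from the TOPKAM itself:

```python
def score_topkam(responses, key, certainty):
    """Score a forced-choice test with a separate certainty dimension.

    Returns (number correct, mean certainty on correct items,
    mean certainty on incorrect items), so that knowledge and the
    metacognitive certainty judgment stay distinct, with no responses
    discarded and every respondent scored over the same number of items."""

    def _mean(xs):
        return sum(xs) / len(xs) if xs else float("nan")

    correct = [r == k for r, k in zip(responses, key)]
    cert_ok = [c for c, ok in zip(certainty, correct) if ok]
    cert_err = [c for c, ok in zip(certainty, correct) if not ok]
    return sum(correct), _mean(cert_ok), _mean(cert_err)

# One hypothetical respondent, five items:
# 'a' = evidence-based option, 'b' = misconception option.
key = ["a", "a", "a", "a", "a"]
answers = ["a", "b", "a", "a", "b"]
ratings = [5, 4, 3, 5, 2]  # certainty rated after each question, 1-5
n_correct, cert_correct, cert_wrong = score_topkam(answers, key, ratings)
# 3 items correct; this respondent was more certain on correct answers.
```

Keeping certainty as its own dimension is what allows comparisons like Landau and Bavaria's (2003) contrast of confidence on correct versus incorrect items without shrinking the knowledge score's denominator, as the "don't know/no opinion" approach does.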