A formative assessment of students' algebraic variable misconceptions
|Article code||Publication year||English article||Persian translation||Word count|
|35309||2014||11-page PDF||Order||Not calculated|
Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)
Journal : The Journal of Mathematical Behavior, Volume 33, March 2014, Pages 30–41
Gaining an accurate understanding of variables is one challenge many students face when learning algebra. Prior research has shown that a significant number of students hold misconceptions about variables and that these misconceptions impede learning. Yet teachers do not have access to diagnostic tools that can help them determine which misconceptions about variables their students harbor. Therefore, a formative assessment for variable misconceptions was created and administered to 437 middle- and high-school students. Analyses of the test scores showed strong reliability, predictive validity, and construct validity, as well as important developmental trends. Both teachers and researchers can use the test to identify students who hold misconceptions about variables.
Learning algebra is a “gatekeeper” to students’ future educational and career success (Adelman, 2006; RAND Mathematics Study Panel, 2003; Silver, 1997; U.S. Department of Education, 1999). An increasing number of school districts have responded recently by adding algebra to their high school graduation requirements (Achieve, 2007). Given its importance, it is disquieting that learning algebra proves so challenging. Data from the National Assessment of Educational Progress (NAEP) show that algebra achievement of U.S. students is poor, with only 6.9% of 17-year-olds scoring at or above a proficient level (National Center for Educational Statistics, 2005). One significant problem is that many students experience difficulty mastering foundational algebraic concepts, one of which is an understanding of variables (Knuth et al., 2005; Kuchemann, 1978; Philipp, 1992). Moreover, misconceptions (alternative conceptions) about variables are common among students (e.g., Kieran, 1992; Kuchemann, 1978; Rosnick, 1981; Stacey and Macgregor, 1997). Yet diagnostic assessments for variable misconceptions are not available to teachers. Therefore, the primary goal of the current study is to develop an assessment with reliable and valid items that can specifically diagnose whether students harbor misconceptions about variables. The secondary goals are to (1) determine how common misconceptions about variables are among middle- and high-school students and (2) explore developmental trends in the formation of misconceptions about variables.

To understand some of the typical misconceptions that students hold about variables, it is best to begin by charting correct understanding of variables. Proper understanding of symbols as variables includes a few key components. First, the variable must be interpreted as representing an unknown quantity. That is, a student must realize that a symbol represents a unit that does not have an ascertained value.
Second, a student must interpret the symbol as representing a varying quantity (Philipp, 1992) or a range of unspecified values (Kieran, 1992). This is known as the “multiple values” interpretation of literal symbols (Knuth et al., 2005). These first two proper interpretations have been studied by presenting seventh and eighth graders who had been exposed to curriculum about variables with the problem: “The following question is about the expression ‘2n + 3.’ What does the symbol (n) stand for?” (Knuth et al., 2005). Correct responses expressed the idea that the literal symbol (1) represents an unknown value (e.g., “the symbol is a variable, it can stand for anything”) and (2) could represent more than one value (e.g., “it could be 7, 59, or even 363.0285”). However, approximately 39% of seventh graders and almost 25% of eighth graders gave incorrect responses (e.g., “I don’t know” or “nets” or “5”). These data provide clear evidence that a sizable group of students do not correctly interpret variables. A third component of understanding variables entails awareness that some kind of relationship exists between symbols as their values change in a systematic manner (e.g., as b increases, r decreases) (Kuchemann, 1978). Said differently, a correct interpretation of variables entails knowing that related numbers that change together are “variables” (Philipp, 1992). The “which is larger” problem has been relied on to assess this understanding. For example, Kuchemann (1978) presented 3000 high-school students who had been taught about variables with the following problem: “Which is the larger, 2n or n + 2? Explain.” Only 6% of students were correct and seemingly aware of a “second order relationship”: the relation between 2n and n + 2 actually changes with n. Indeed, the difference between 2n and n + 2 increases as n increases. When n = 2, the two expressions are equal; when n = 3, 2n > n + 2. Knuth et al.
(2005) also explored this understanding, using the “which is larger” problem with middle-school students. Only about 18% of sixth graders, just over 50% of seventh graders, and 60% of eighth graders evidenced the understanding that a relationship exists between symbols because their values systematically change. However, the issue is not as simple as students lacking correct knowledge about variables. Of additional concern is that many students actually hold erroneous concepts about variables. Often students come to school with knowledge of concepts in the curriculum. If this knowledge is inconsistent with the concepts being taught, it is termed an alternative conception or misconception (Lucariello, 2009). Considerable research has documented that misconceptions in mathematics and science are quite common. The current study focuses on three of the common, major misconceptions about variables that students experience (as described in the literature) and develops an instrument that detects these three misconceptions. The first of these misconceptions was initially documented by Kuchemann (1978) during an exploration of students’ interpretations of variables. Specifically, he found some students consistently ignored variables. For example, in the problem “Add 4 onto n + 5”, 68% of students answered correctly (n + 9), while 20% of students gave the incorrect answer 9, suggesting they simply ignored the variable n altogether. A second type of misconception is seen when students treat a variable as a label for an object (McNeil et al., 2010). This was shown by Stacey and Macgregor (1997) when they presented more than 2000 middle school students with the following problem: “David is 10 cm taller than Con. Con is h cm tall. What can you write for David's height?” The correct answer is 10 + h, wherein 10 is added to the number or quantity denoted by h. Yet many students treated the variable as a label associated with the name of an object (e.g., C + 10 = D).
Based on other research findings, interviews with individual students, and coding of students’ informal or written explanations, Stacey and Macgregor (1997) interpreted this answer to reflect ‘C’ as meaning ‘Con's height’ and ‘D’ as meaning ‘David's height’. A similar erroneous concept is seen when students interpret the variable as an abbreviated word (e.g., a response of ‘Dh’, where the abbreviation stands for the words ‘David's height’). This misconception of construing a variable as a label for an object is also reflected in the classic error on the “Students and Professors” problem, which reads as follows: “Write an equation, using the variables S and P to represent the following statement. ‘At this university there are six times as many students as professors.’ Use S for the number of students and P for the number of professors.” An erroneous understanding that S is a label for an object (students), as opposed to a variable (the number of students), led 37% of a sample of students entering college to incorrectly answer the question as 6S = P (Rosnick, 1981). When asked to explain this answer, students stated that they believed the answer was 6S = P because S was a label for students. (The correct answer is S = 6P, where S stands for the number of students.) This misconceived reasoning on the “Students and Professors” problem was also prevalent among students already in college (Clement, Lochhead, & Monk, 1981). Another example of the misconception of a variable as a label for an object/entity is seen when students, given the question “In the expression t + 4, what does t represent?”, answer with “time” instead of “any number”. Finally, a third type of misconception is seen when students believe a variable is a specific unknown (Kuchemann, 1978; Stacey and Macgregor, 1997). In this case, students do not fully understand that a variable can represent multiple values; rather, they believe it can only represent one fixed value.
For example, when asked how many values p represents, students assume p can hold only one value, as opposed to many values. This contradicts the correct understanding of a variable previously discussed. Misconceptions are particularly important for teachers to know about, as misconceptions can impede learning. The process of student learning varies depending on whether students’ preinstructional knowledge of a given concept accords (or not) with correct curricular concepts (concepts in the domain). When student preinstructional knowledge is correct and consistent with correct curricular/domain knowledge, student knowledge is conceived of as “anchoring conceptions.” When preinstructional knowledge is incorrect and hence runs contrary to what is being taught, student knowledge is described as “misconceptions” (or “alternative conceptions”). In cases where students have misconceptions, learning is more challenging (see Gelman & Lucariello, 2002 for review; Hartnett & Gelman, 1998). If a student holds a misconception, the misconception interferes with or distorts the assimilation of other inputs, such as the correct concepts presented in a given curriculum. In these cases, learning is a matter of conceptual change or accommodation, wherein current knowledge must undergo substantial reorganization or replacement (Carey, 1985; Carey, 1986; Posner et al., 1982; Strike and Posner, 1985; Strike and Posner, 1992). Learning as conceptual change is more difficult than learning as conceptual growth. Conceptual growth occurs when one's anchoring conceptions (previously held knowledge that is consistent with new input) provide a base for assimilating curricular inputs, leading to the establishment of new concepts or the enrichment of current ones. One example of conceptual growth would be when preinstructional knowledge about counting principles serves as a base for learning about addition (Hartnett & Gelman, 1998).
There is nothing about verbal or nonverbal counting that is inconsistent with the idea that addition involves putting together two sets. However, as noted, sometimes preinstructional knowledge is inconsistent with new concepts and conceptual change is needed. The counting principles are inconsistent with the mathematical principles underlying the numberhood of fractions (Hartnett & Gelman, 1998). One cannot count things to generate a fraction, and one cannot use counting-based algorithms for ordering fractions. For instance, although 4 is more than 2, ¼ is not more than ½. In order to learn about fractions, conceptual change is needed, as a student would need to reorganize or replace their previous theory about counting. Because conceptual change is difficult to accomplish, researchers have suggested specific instructional strategies to help teachers bring it about in their students. While traditional methods of instruction, such as lectures, labs, discovery learning, and reading text, can be effective at achieving conceptual growth, they are generally ineffective at bringing about conceptual change (Chinn and Brewer, 1993; Kikas, 1998; Lee et al., 1993; Smith et al., 1997). Instead, conceptual change requires particular instructional strategies, such as raising student metacognition and creating cognitive conflict through experiences in which students consider their erroneous knowledge side by side with the correct concept or theory (see Lucariello, 2009; Mayer, 2008). Because overcoming misconceptions requires conceptual change, and conceptual change is not easily achieved, misconceptions tend to be entrenched in student reasoning (Brewer and Chinn, 1991; McNeil and Alibali, 2005).
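The counting-based ordering error discussed above ("4 > 2, therefore ¼ > ½") can be demonstrated directly with exact rational arithmetic; a minimal illustrative sketch:

```python
from fractions import Fraction

# Counting intuition compares the denominators (4 > 2 is true),
# but that comparison is irrelevant to the fractions themselves:
# 1/4 is in fact the smaller fraction.
print(4 > 2)                               # the counting comparison holds
print(Fraction(1, 4) < Fraction(1, 2))     # yet 1/4 < 1/2
```

This is exactly the kind of case where preinstructional counting knowledge conflicts with the new concept, so conceptual change rather than conceptual growth is required.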
The change-resistance account argues that children's misconceived base knowledge can easily become reinforced through uncorrected erroneous practice, and as a result their ability to learn more complex tasks within a given domain suffers increasingly over time (McNeil & Alibali, 2005). For all of these reasons, it is critical for teachers to know about misconceptions their students might have about variables. Yet resources and tools to assist them in uncovering such misconceptions are not readily available to teachers in the United States. Currently, tests that are freely available for teachers to use are typically of little diagnostic value. For example, required state and/or national standardized tests may assess variables, but they are summative in nature and their purpose does not include informing individual instruction (Educational Testing Service, 2012). There are a handful of general diagnostic mathematics tests available for teachers to purchase (e.g., The Stanford Diagnostic Mathematics Tests, Group Mathematics Assessment and Diagnostic Evaluations, KeyMath Diagnostic Assessments, The Diagnostic Test of High School Math, and The Diagnostic Test of Pre-Algebra Math). While some of these tests include a few items about variables, they do not specifically test for, diagnose, or report on the nuanced and distinct types of variable misconceptions. The algebra version of the Chelsea Diagnostic Mathematics Tests (see Hart, Brown, Kerslake, Küchemann & Ruddock, 1985) appears to be the only preexisting test that includes at least one item for each of the common misconceptions discussed in the current study. However, even in this case, the scoring and reporting are not done in a way that helps teachers identify which item(s) map to which variable misconception. The goal of the current study is to develop a test with reliable and valid items that can be used by teachers to diagnose the misconceptions about variables that students may have.
The information on student thinking gained from such a test could be used to guide appropriate instruction and thereby facilitate student learning as conceptual change.
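The two signature problems discussed in this introduction, "which is larger, 2n or n + 2?" and the "Students and Professors" equation, can both be checked numerically. A minimal Python sketch (the function name and sample counts are my own, for illustration only):

```python
def compare(n):
    """Compare the expressions 2n and n + 2 for a concrete value of n."""
    if 2 * n > n + 2:
        return "2n > n+2"
    if 2 * n == n + 2:
        return "2n = n+2"
    return "2n < n+2"

# The relation itself changes with n (the "second order relationship"):
# below n = 2, n + 2 is larger; at n = 2 they are equal; above n = 2,
# 2n is larger -- so neither expression is "the larger" for all n.
for n in [0, 1, 2, 3, 10]:
    print(n, compare(n))

# Students-and-Professors check with 10 professors and 60 students:
# the correct equation S = 6P holds, while the reversal 6S = P does not.
S, P = 60, 10
print(S == 6 * P)    # True: S stands for the NUMBER of students
print(6 * S == P)    # False: the error of reading S as a label
```

Tabulating the expressions this way makes the changing relation concrete, which is the very understanding Kuchemann's "which is larger" item was designed to probe.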
Conclusion (English)
To assess the internal consistency of the diagnostic test items, Cronbach's alphas were calculated based on both students’ correct and misconception responses on the diagnostic test. Ideally Cronbach's alpha should be above 0.7, although alphas of 0.6 or higher are often acceptable during test construction, especially on tests that do not have a large number of items (Cortina, 1993; DeVellis, 2003). For correct responding, Cronbach's alpha was 0.77, indicating that this set of items measured a single unidimensional latent construct, algebraic understanding, to a high degree. For misconception responding on the diagnostic test, Cronbach's alpha was lower, 0.46, which makes sense given that the test was specifically designed to detect distinct misconception types. As mentioned above, students in Pool 1 (n = 217) also completed an alternate form of the diagnostic test, the isomorph problem set. As another way of assessing the reliability of the scores on the diagnostic test, the correlation between responses on the diagnostic test and the isomorph test was ascertained. The correlations between the responses on these two tests were 0.85 for the proportion of correct responses, 0.71 for the proportion of incorrect responses, and 0.49 for misconception responses. It is important to note that the correlation for the proportion of incorrect responses may be higher than that for misconception responses because the proportion of incorrect responses captures choosing either of the two incorrect options on the diagnostic test and either of the two incorrect options on the isomorph test, while the proportion of misconception responses captures choosing the one misconception option on the diagnostic test and the one misconception option on the isomorph test. The mean incorrect answer rate was 0.18 (SD = 0.18) on the diagnostic test and 0.19 (SD = 0.17) on the isomorph test.
The mean misconception answer rate on the diagnostic test was 0.22 (SD = 0.13) and 0.20 (SD = 0.14) on the isomorph test. Because the correlations between the diagnostic and isomorph tests were quite high, Cronbach's alphas were calculated for correct and misconception responding for the diagnostic test items and the isomorph test items together – thus there was a total of 18 items in the scale. The reliability for correct responding increased to 0.88 and for misconception responding, the reliability increased to 0.66. The results of these analyses suggest that the diagnostic test items are reliable.
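For readers who want to reproduce this kind of internal-consistency analysis, Cronbach's alpha can be computed from a student-by-item score matrix with the standard formula. A minimal sketch (the toy data are invented for illustration and are not the study's data):

```python
import statistics

def cronbach_alpha(scores):
    """Cronbach's alpha for a score matrix (rows = students, columns = items).

    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores)
    """
    k = len(scores[0])                                # number of items
    items = list(zip(*scores))                        # transpose: per-item columns
    item_vars = sum(statistics.variance(col) for col in items)
    total_var = statistics.variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy data: four students, three dichotomously scored items (1 = correct).
toy = [[1, 1, 1],
       [1, 1, 0],
       [0, 0, 0],
       [1, 0, 0]]
print(round(cronbach_alpha(toy), 2))   # 0.75 on this toy matrix
```

The same function applied to the diagnostic and isomorph items together would pool all 18 items into a single scale, which is how the combined alphas of 0.88 (correct responding) and 0.66 (misconception responding) were obtained in the study.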