VoxEU Column – Education

Failing in mathematics

This column shows that randomised computer-aided instruction in mathematics increased student achievement, with larger effects for students in large, heterogeneous classes. It also finds that the costs of maintaining a computer-aided instruction lab are comparable to those of reducing class sizes.

Success in the global economy requires a degree of sophistication and facility in mathematics and the sciences. Yet at a time when mathematics achievement is critically important both for individuals and for the US economy as a whole, students' proficiency levels remain dramatically low.1 On the most recent NAEP mathematics assessment, 39% of 12th graders failed to score above the basic achievement level – which is especially sobering given the strong connection between success in mathematics and success in college and the workplace. Adelman (2006) finds that students who complete Algebra II, relative to those who stop at Algebra I, are more than five times as likely to earn a bachelor's degree, with attendant effects on future income. Unfortunately, research consistently shows that students in the US graduate from high school unprepared for college or work.2 This means they are also unprepared to compete in a world where quantitative skills are increasingly important.

Teachers matter

Contributing to students' poor performance is the difficulty school districts face in attracting and retaining high-quality math teachers. Studies suggest that teacher quality has a significant impact on student achievement in mathematics (see, e.g., Braswell et al. 2001 or Boyd et al. 2007), yet a November 2008 Education Trust study found that 22% of all mathematics courses in secondary schools are taught by teachers with neither a state certification nor an academic major in mathematics or a math-related subject.

Unquestionably this is partly explained by finances, as public schools cannot compete with non-education private sector salaries (Murnane and Steele 2007). One consequence is that unqualified teachers are particularly prevalent in high-poverty and high-minority schools: 40.5% of math classes in high-poverty secondary schools are taught by "out-of-field" teachers,3 compared with 16.9% in more affluent schools. And almost a third of math classes in secondary schools serving mostly African-American and Latino students lack qualified teachers – nearly double the percentage in schools with few minority students.

Using computers to help teachers

Many school districts struggling to find creative, effective, and feasible approaches to enhance mathematics achievement and compensate for the number of out-of-field teachers are turning to advances in computer-aided instruction. Although computer accessibility has increased dramatically over the past decade, evidence on the effect of technology on student achievement has been mixed at best (see, e.g., Wang, Wang, and Ye 2002 and Wenglinsky 1998 for contrasting findings). Determining the potential benefits of computer-aided instruction programs could have significant implications – both in addressing the overwhelming need to improve stagnantly low levels of mathematics achievement and in identifying practical, cost-effective solutions for school districts.

Can computer-aided instruction programs help?

Establishing empirically whether computer-aided instruction programs improve student performance in math is the obvious first step in formulating policy. While findings on this question conflict, the majority of current research lacks randomised control designs that account for factors – such as individual teacher effects and student ability – that might be correlated with both classroom computer use and student outcomes. More explicitly, this research is potentially affected by two sources of bias.

  • Principals may assign students they think would benefit from computerised instruction to the computer labs, resulting in upward biased estimates; and
  • If teachers who are assigned to the lab are those who are more willing to use computerised instruction, the estimated effects could be capturing the effectiveness of the teacher rather than the effectiveness of the computer program.

Second, meaningful policy discussion requires understanding the underlying reasons why a reform works. On this point, even among the studies that include a randomised evaluation of computer-aided instruction programs, few examine the mechanisms explaining why such technology either helps or hinders achievement. For example, Ragosta, Holland, and Jamison (1982) and Banerjee et al. (2007) both found beneficial effects of computer-aided instruction programs in math, but neither offers evidence on why these effects occur. While several hypotheses explaining the benefits of computer-aided instruction programs exist – including increased student engagement and motivation, and providing students with a greater level of individualised instruction – there is no supporting empirical evidence.

Barrow, Markman, and Rouse (2008) examine the effect of a popular pre-algebra/algebra computer-aided instruction program, I Can Learn©, in three urban US school districts suffering similar problems of underachievement and teacher recruitment. Each school district agreed to the implementation of a within-school random assignment design at the classroom level, thereby avoiding the sources of student and teacher bias previously described. Additionally, since I Can Learn© lessons are designed so that each student progresses through the material at her own pace and the teacher's primary role is to provide targeted help when needed, the program is well suited for testing the individualised instruction hypothesis mentioned above.

The OLS estimates of the 'intent-to-treat' effect of random assignment to a computer-aided instruction class on a student's score on an independently designed post-treatment test (controlling for a vector of student characteristics) suggest that computer-aided instruction increased student achievement by at least 0.17 of a within-sample standard deviation. After controlling for teacher fixed effects and limiting the sample to students who also took a pretest at the beginning of the academic year, the effect of computer-aided instruction is larger – slightly over 0.28 of a within-sample standard deviation.

While the 'intent-to-treat' effect is representative of the gains a policymaker can realistically expect to observe with the program, it does not necessarily represent the effect of the program for students who actually complete it. Randomisation can yield 'contamination' – whereby students in the control group complete some lessons in the computer lab. One can, however, use random assignment to the computer lab as an instrumental variable for actual participation and estimate instrumental variables (IV) models to address the contamination. These estimates suggest that the 'treatment-on-the-treated' effect is larger, with post-test scores for students who used the computer lab roughly 0.25 to 0.42 of a standard deviation higher (without and with teacher fixed effects, respectively).
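The logic of scaling the intent-to-treat effect up to a treatment-on-the-treated effect can be sketched on simulated data (all numbers below are illustrative, not figures from the study). With one binary instrument and one binary treatment, the IV estimate reduces to the Wald ratio: the ITT divided by the first-stage difference in participation rates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Random assignment to the computer lab (the instrument)
z = rng.integers(0, 2, n)

# Actual lab participation: most assigned students comply, but some
# control-group students also complete lessons ("contamination")
d = np.where(z == 1, rng.random(n) < 0.9, rng.random(n) < 0.2).astype(float)

# Post-test score with a hypothetical true effect of 0.35 SD for participants
ability = rng.normal(0, 1, n)
y = 0.35 * d + 0.5 * ability + rng.normal(0, 0.5, n)

# Intent-to-treat: difference in mean outcomes by assignment
itt = y[z == 1].mean() - y[z == 0].mean()

# Treatment-on-the-treated via the Wald/IV estimator:
# ITT scaled by the first-stage difference in participation rates
first_stage = d[z == 1].mean() - d[z == 0].mean()
tot = itt / first_stage

print(f"ITT = {itt:.3f}")  # attenuated by non-compliance
print(f"TOT = {tot:.3f}")  # recovers roughly the 0.35 used to simulate
```

Because roughly 70% more of the assigned group actually participates, the ITT lands well below the participant-level effect, and dividing by the first stage recovers it – the same reason the study's treatment-on-the-treated estimates exceed its intent-to-treat estimates.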

These findings are consistent with Ragosta, Holland, and Jamison (1982), Banerjee et al. (2007), and Wang, Wang, and Ye (2002) in suggesting that computer-aided instruction programs can have a significant impact on mathematics achievement. However, effective policy implementation requires more than suggestive evidentiary support; policymakers must also have some understanding of how and under what circumstances a proposed reform works. When Barrow, Markman, and Rouse (2008) estimate models that include a classroom characteristic of interest – such as average attendance in the prior year, class size, or heterogeneity in mathematics achievement – the data show greater effects for students in large, heterogeneous classes with poor attendance rates. Since these classroom characteristics would normally be disruptive and suggest a potential advantage to more individualised instruction, the findings support the theory that one of the primary benefits of a computer-aided instruction program is an increase in the amount of quality, individual instruction time each student receives.

What about costs?

One of the many obstacles to reforming the education system is the combination of limited financial resources and limited conclusive research on which policies raise achievement. While school districts are always restricted by limited funds, current economic conditions and the potential for additional budget cuts highlight the importance of thoroughly examining the costs involved in implementing any proposed reform. In the case of computer-aided instruction programs, taking all expenses into consideration, the annual cost per lab is nearly $53,000. But given the evidence that computer-aided instruction aids achievement by increasing the amount of individual instruction time a student receives, it is possible that computer-aided instruction could serve as a substitute for reducing class sizes. When Barrow, Markman, and Rouse (2008) benchmark the cost of computer-aided instruction against the compensation costs associated with increasing the teaching staff, they conclude the two are equally cost-effective. This is a significant result, as it suggests an alternative, and likely easier, means of increasing instruction time for urban and rural districts that struggle to hire highly qualified mathematics teachers.
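A back-of-envelope version of that benchmarking exercise can make the comparison concrete. Only the roughly $53,000 annual lab cost comes from the column; the student counts, class sizes, and teacher compensation below are illustrative assumptions, not figures from the study:

```python
import math

lab_cost = 53_000      # annual cost of one computer-aided lab (from the column)
lab_students = 120     # hypothetical: students served by one lab per year

students = 480         # hypothetical cohort size
old_size, new_size = 30, 24   # hypothetical class-size reduction
teacher_comp = 55_000  # hypothetical fully loaded annual teacher compensation

# Extra teachers needed to cut class size from 30 to 24 for this cohort
extra_teachers = math.ceil(students / new_size) - math.ceil(students / old_size)
reduction_cost = extra_teachers * teacher_comp

lab_per_pupil = lab_cost / lab_students
reduction_per_pupil = reduction_cost / students

print(f"lab:             ${lab_per_pupil:.2f} per pupil")
print(f"smaller classes: ${reduction_per_pupil:.2f} per pupil")
```

Under these illustrative inputs the two per-pupil costs land in the same range, which is the kind of equivalence the authors report; with different assumptions about compensation or lab throughput, the ranking can of course flip.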


Evidence suggests that computer-aided instruction programs have the potential to significantly enhance mathematics achievement in middle and high schools – particularly for students in large, heterogeneous classes – by increasing the amount of individualised instruction available. And while there are costs associated with maintaining a computer-aided instruction lab, these appear equivalent to the costs of reducing class sizes. In light of the poor performance on mathematics assessments, the challenges in attracting high-quality teachers, and the predictive value of math achievement for future success, addressing stagnant outcomes in mathematics is essential. The evidence that computer-aided instruction programs are an effective and practical means of improving performance consequently provides a strong case for additional rigorous evaluation and policy attention.


References

Adelman, Clifford. 2006. "The Toolbox Revisited: Paths to Degree Completion from High School Through College." Washington, D.C.: US Department of Education.
Banerjee, Abhijit, Shawn Cole, Esther Duflo, and Leigh Linden. 2007. "Remedying Education: Evidence from Two Randomized Experiments in India." Quarterly Journal of Economics, 122(3): 1235-1264.
Barrow, Lisa, Lisa Markman, and Cecilia Elena Rouse. 2008. "Technology's Edge: The Educational Benefits of Computer-Aided Instruction." National Bureau of Economic Research Working Paper 14240.
Boyd, Donald, Daniel Goldhaber, Hamilton Lankford, and James Wyckoff. 2007. "The Effect of Certification and Preparation on Teacher Quality." The Future of Children, 17(1): 45-68.
Braswell, James S., Anthony D. Lutkus, Wendy S. Grigg, et al. 2001. The Nation's Report Card: Mathematics 2000. Washington, D.C.: National Center for Education Statistics.
Grigg, Wendy, Patricia L. Donahue, and Gloria Dion. 2007. The Nation's Report Card: 12th-Grade Reading and Mathematics 2005. Washington, D.C.: National Center for Education Statistics.
Grogger, Jeffrey. 1996. "Does School Quality Explain the Recent Black/White Wage Trend?" Journal of Labor Economics, 14(2): 231-253.
Ingersoll, Richard M. 2008. Core Problems. Washington, D.C.: The Education Trust.
Lee, Jihyun, Wendy S. Grigg, and Gloria S. Dion. 2007. The Nation's Report Card: Mathematics 2007. Washington, D.C.: US Department of Education.
Murnane, Richard J. and Jennifer L. Steele. 2007. "What is the Problem? The Challenge of Providing Effective Teachers for All Children." The Future of Children, 17(1): 15-43.
Murnane, Richard J., John B. Willet and Frank Levy. 1995. "The Growing Importance of Cognitive Skills in Wage Determination." Review of Economics and Statistics 77(2): 251-266.
Parsad, Basmat and Laurie Lewis. 2003. Remedial Education at Degree-Granting Postsecondary Institutions in Fall 2000. Washington, D.C.: National Center for Education Statistics.
Ragosta, Marjorie, Paul W. Holland, and Dean T. Jamison. 1982. "Computer-Assisted Instruction and Compensatory Education: The ETS/LAUSD Study Final Report." Education Testing Service Project Report 19.
Wang, Xiaoping, Tingyu Wang, and Renmin Ye. 2002. "Usage of Instructional Materials in High Schools: Analyses of NELS Data." Paper presented at the Annual Meeting of American Educational Research Association, New Orleans, LA.
Wenglinsky, Harold. 1998. "Does it Compute? The Relationship Between Educational Technology and Student Achievement in Mathematics." ERIC Document Reproduction Service No. ED425191.

Footnotes

1 See Grogger (1996); Murnane, Willet and Levy (1995); and Lee, Grigg, and Dion (2007).
2 Using NCES data, Parsad and Lewis (2003) found that in 2000, 22% of incoming freshman at all institutions enrolled in remedial math courses.
3 Term coined by Ingersoll (2008).

