The Perfect Score: Evaluating the ACT
Devin Pope, assistant professor of behavioral science and Robert King Steel Faculty Fellow, questions the merit of using the ACT exam as a predictive college admissions tool.
The ACT exam has been used as a tool for evaluating academic potential within the college admissions process for decades. But recent research by Pope, published by the National Bureau of Economic Research, questions the validity of two of the exam's four subtests in predicting positive college outcomes for applicants.
In their paper, “Improving College Performance and Retention the Easy Way: Unpacking the ACT Exam,” Pope, along with coauthors Eric Bettinger and Brent Evans of Stanford, finds that “English and Mathematics are highly predictive of positive college outcomes while the other two subtests, Science and Reading, provide little or no additional predictive power.”
In the past decade, the popularity of the ACT exam has risen steadily among both test-takers and college admissions boards. Last year alone, more than 1.5 million students took the test, making the impacts of these findings all the more timely. In its coverage of the study’s results, the Washington Post reported, “If you are among the growing number of students who have taken or plan to take the ACT college entrance test, pay attention.”
The study suggests that by using the individual subscores of the ACT (rather than the composite score) colleges could improve their student retention rate, a major issue within higher education. In fact, the authors cite research documenting this problem: “After six years, 35 percent of students who started a postsecondary program in the fall of 2003 had not received a degree and were no longer enrolled in any institution of higher education.” While ACT officials question the findings of the study, the authors posit that top schools could reduce dropout rates by as much as 5 to 7 percent by weighing individual ACT subscores.
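The intuition behind the subscore argument can be sketched statistically: if a college outcome depends mainly on English and Mathematics, a composite score dilutes the informative subtests with uninformative ones, so a model given the four subscores separately predicts better than one given only their average. The following is a minimal illustration with entirely synthetic data; the coefficients, score distributions, and outcome model are invented for demonstration and are not taken from the paper.

```python
import numpy as np

# Synthetic example: four ACT subscores, with a simulated college outcome
# (e.g., first-year GPA) that depends only on English and Mathematics,
# mirroring the qualitative finding. All numbers here are hypothetical.
rng = np.random.default_rng(0)
n = 5000

english = rng.normal(21, 5, n)
math = rng.normal(21, 5, n)
reading = rng.normal(21, 5, n)   # contributes nothing to the outcome
science = rng.normal(21, 5, n)   # contributes nothing to the outcome

gpa = 0.05 * english + 0.05 * math + rng.normal(0, 0.4, n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Composite score: simple average of the four subtests.
composite = (english + math + reading + science) / 4

r2_composite = r_squared(composite.reshape(-1, 1), gpa)
r2_subscores = r_squared(np.column_stack([english, math, reading, science]), gpa)

print(f"R^2 using composite score: {r2_composite:.3f}")
print(f"R^2 using four subscores:  {r2_subscores:.3f}")
```

In this setup the subscore regression explains noticeably more variance than the composite regression, because the composite forces equal weight onto Reading and Science, which here carry no signal. The real analysis in the paper is far richer, but the mechanism is the same.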
So what compels colleges to continue the outmoded and potentially detrimental practice of analyzing composite scores? It may be that admissions officers are simply unaware of the differences in predictive power across the individual ACT subtests and have trusted ACT Inc. to get it right. Alternatively, colleges may be slow to address criticism of standardized tests because artificially high applicant test scores (based on the composite score) improve their rankings in U.S. News & World Report. High rankings increase applicant pools, which bring in more students—and more financial support through tuition dollars—regardless of the academic trajectories of students once they’re admitted.
Bettinger, Evans, and Pope take a hopeful view toward the college admissions process, however. While they acknowledge that the higher education system has complex problems, they also argue that complex problems don’t always demand complex solutions. “A better understanding of how the ACT predicts future performance could lead to easily implementable, low-cost solutions that can yield potentially large benefits.”—E.O., M.S.