Testimony of Lisa Guisbond
Policy Analyst for the National Center for Fair & Open Testing (FairTest)
Rhode Island House Committee on Health, Education and Welfare
State of Rhode Island General Assembly
Lisa Guisbond is a Vice President at Citizens for Public Schools in Boston, MA, and a Policy Analyst at FairTest. She previously served as an appointee to the MCAS Subcommittee of the Massachusetts Readiness Project, worked as an outreach coordinator at MIT, and has been an editor and writer for various publications.
February 26, 2014
Since its creation in 1985 by leaders of major civil rights, education reform and student advocacy organizations, the National Center for Fair & Open Testing, Inc. (FairTest) has studied the impact of high-stakes standardized testing on educational equity and quality.
While standardized exams have their purpose, they have mistakenly become the centerpiece of school reform, with dire consequences for our students and schools. We agree with the former U.S. Secretary of Labor, Professor Robert Reich, who said recently, “We’re turning our schools into test-taking factories. We’re teaching children how to take standardized tests rather than how to think. The irony is we’re doing this at the very time when the economy is becoming less standardized than ever. Computers and software are taking over all routine, standardized tasks. The challenges of the future require the ability to solve and identify new problems, think creatively outside standard boxes, and work collaboratively with others. An obsessive focus on standardized tests can make our children less prepared for this future rather than better prepared.”
High school graduation exams have contributed to this obsessive focus on testing. These exams are usually proposed as a way to improve educational quality and close gaps in achievement. Yet more than two decades of evidence demonstrates that they accomplish neither of these goals. In fact, such requirements most damage the very groups proponents claim they will help. Recent results for Rhode Island students on the New England Common Assessment Program, or NECAP, suggest that requiring students to pass these tests to graduate high school would be crippling for many students and the state as a whole.
It would be one thing if there were evidence that requiring students to pass such assessments to graduate from high school resulted in students getting the instruction or interventions they need. But this is not the case. Because of the overwhelming evidence that exit exams do more harm than good and do not improve the quality of education for underserved student populations, we support Senate Bills 2185 and 2059, which would end the use of standardized tests as graduation requirements, as would House Bill 7256 through 2020. In addition, we support Senate Bill 2135, which would establish a commission to study the Common Core standards and assessments and their appropriateness as a graduation requirement.
Tens of thousands of U.S. students are denied diplomas each year – regardless of how well they have done in school – because they did not pass a standardized state test. Under such policies, after 12 years of playing by the rules, working hard and completing all other graduation requirements, a student’s future can hinge on just one or two points on a single standardized exam.
Misguided exit-exam mandates have also increased dropout rates, especially among students of color and low-income students, and have focused classroom teaching on test preparation rather than 21st century skills. The full record in states like Massachusetts, Texas and California shows that high-stakes tests have failed to fulfill their promise of improved quality and equity for public school students. That’s why Rhode Island civil rights and disability advocates, teachers, administrators, public school parents, and others have expressed serious concerns about Rhode Island’s new high-stakes testing policy.
Those who support using NECAP as a graduation requirement point to an increase in the numbers of students scoring high enough to meet the graduation threshold. Researchers find this is a common pattern with new high-stakes exams, as teachers and students become familiar with new tests and spend large amounts of time preparing for them, often at the expense of other important aspects of education (Koretz, 2005). A recent article quoted a Warwick, RI, principal describing the way this worked in his school. “Gerry Habershaw, the principal of Warwick Veterans Memorial High School, said there is only one way to raise test scores: teaching to the test. ‘For the past nine years, our teachers have been doing test preparation questions in class,’ he said. ‘Math teachers do the problem of the day. We’ve got to get students used to how [the NECAP] asks questions’” (Borg, 2014).
Still, more than a quarter of Rhode Island high school seniors remain at risk of not graduating. As the Rhode Island American Civil Liberties Union, the Providence Student Union and others have pointed out, the results indicate that the NECAP graduation test will harm the groups of students that education leaders say they most want to help. Rhode Island’s results follow a pattern established in other exit exam states of disproportionate impact on vulnerable student subgroups. Rhode Island’s recent NECAP results put 34% of low-income, 37% of African-American and Latino, 61% of English language learners and 56% of students with disabilities at risk of being denied high school diplomas (GoLocalProv, 2014).
The problems exit exams are meant to solve are certainly real. Rhode Island, like most states, has gaps in educational opportunity, quality and outcomes. Unfortunately, exit exams don’t solve these problems. Even states lauded for “successful” implementation of graduation tests, such as my home state of Massachusetts, concede that unacceptably large opportunity and achievement gaps persist. The former Massachusetts secretary of education, Paul Reville, reflected on the state’s 20 years of education reform in a recent commentary. “We were going to eliminate the correlation between zip codes and educational achievement and attainment. I’m sorry to say that, two decades later, it is clear that we’ve failed to meet that challenge. There is still an iron-law correlation in the commonwealth between socioeconomic status and academic achievement” (2013).
Former New Bedford Mayor Scott Lang has been a critic of the Massachusetts MCAS graduation test, having seen its devastating impact in his community. He says that graduation tests take students who stay in school and complete all of their high school requirements and, because of a test score, lump them with high school dropouts. Though they did not drop out, many of these students will be unable to find gainful employment or opportunity in their lifetimes. In a commentary for Commonwealth Magazine, Lang wrote, “In New Bedford and in other cities, this means that the social costs posed by individuals without high school diplomas will grow every year, decreasing our tax base and creating an increased tax burden for our residents” (2008).
Lang estimated that a household headed by a high school dropout costs society approximately $22,449 more per year in benefits and aid, compared with a household headed by a high school graduate. With about 3,000 students denied diplomas per year since MCAS became a graduation requirement in 2003, he says we face a cost to society in the billions of dollars.
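Lang’s “billions of dollars” figure can be sanity-checked with simple arithmetic. The sketch below uses only the numbers in the testimony (about 3,000 students denied diplomas per year since 2003, and roughly $22,449 in extra annual costs per household). Treating each denied diploma as one dropout-headed household, and counting costs only for the years 2003 through 2013, are simplifying assumptions made here for illustration:

```python
# Rough illustration of Lang's estimate, using the figures cited
# in the testimony. Each cohort of students denied diplomas keeps
# incurring the extra annual cost every year after the denial.

ANNUAL_EXTRA_COST = 22_449   # extra dollars per household per year
COHORT_SIZE = 3_000          # students denied diplomas per year

total = 0
# Cohorts of 2003 through 2013: the 2003 cohort has accrued 11 years
# of extra costs, the 2013 cohort just 1 year.
for years_accrued in range(1, 12):
    total += COHORT_SIZE * ANNUAL_EXTRA_COST * years_accrued

print(f"Cumulative cost, 2003-2013: ${total:,}")
```

Even under these deliberately conservative assumptions, the cumulative total comes to several billion dollars, consistent with Lang’s characterization.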
The National Research Council (NRC) of the National Academy of Sciences looked at a vast array of accumulated evidence on test-based policies, including state graduation tests and policies that give teachers bonuses if their students’ scores go up (Hout & Elliott, 2011). The report concluded that test-based incentives increase teaching to the test (as Warwick Principal Habershaw described) and produce an inflated and inaccurate picture of what students know. It found that educators facing sanctions tend to focus on actions that improve test scores, such as teaching test-taking strategies or drilling students closest to meeting proficiency cutoffs, rather than improving learning. And finally, it concluded that high school graduation tests in particular have done nothing to lift student achievement but have raised the dropout rate an average of 2%.
The most thorough independent national research also confirms a link between graduation tests and higher dropout rates. Dee and Jacob found that the tougher the tests, the more the dropout rate increased (Dee & Jacob, 2006). Warren, et al. (2006) found that graduation tests have caused the national dropout rate to increase by 40,000 students per year. (Another study by Warren, et al. found exit exams raise dropout rates but do not improve college-going, employment or wages.) California’s dropout rate spiked in 2006, the first year students had to pass the state’s exit exam to graduate, with 24,000 seniors dropping out, more than twice as many as four years earlier (Williams, 2007). Texas introduced exit exams in 1992. Fifteen years later, Texas used test results to deny diplomas to a record 40,200 students in the Class of 2007 (Radcliffe and Mellon, 2007). Last year, in the wake of a major backlash from parents, teachers and school boards representing more than 85% of the state’s students, Texas legislators voted to dramatically scale back their high-stakes graduation exams, though clearly more needs to be done.
In Massachusetts, dropout rates went up sharply after the state MCAS test became a graduation requirement. More recently, concerted, multifaceted efforts by local districts and nonprofits have reversed the upward trend. However, black and Hispanic dropout rates are still three to more than four times the white rate. In 2013, the annual dropout rate for white students was 1.3%; for black students, it was three times that, at 3.9%. The Hispanic rate was 5.4%, or more than four times that of whites. Further, 11th and 12th graders who had not passed MCAS were more than 13 times more likely to drop out of school than those who had passed (MA DOE, 2013).
Students with disabilities have been hit particularly hard and make up a steadily growing portion of Massachusetts students who don’t graduate because of the MCAS graduation test. As noted by Kruger and McIvor (2013):
In 2002-03, high school seniors in special education were five times more likely to fail the MCAS requirements than their classmates in general education. In 2011-12, high school seniors in special education were 15 times more likely to fail the MCAS requirements than their classmates in general education. The MCAS graduation requirement has become an unintentional mechanism for preventing many students in special education from obtaining a high school diploma. Whereas students in special education comprised only 16 percent of all high school seniors in 2012, they nonetheless were 75 percent of the high school seniors not passing the state-mandated, MCAS-related graduation requirements.
Real progress has been elusive because high-stakes testing undermines rather than improves education. Untested subjects are ignored, while instruction in tested subjects narrows to test coaching. Since these tests are mostly multiple-choice, students focus on rote learning to identify correct answers instead of learning to think and apply their knowledge (Koretz, 2005).
Effective educators know they must look beyond standardized test scores to a variety of measures of student learning to improve their instruction and help students succeed. It’s something like driving a car. Safe drivers use the windshield and the rear- and side-view mirrors, occasionally checking the speedometer and other gauges on the dashboard. The best educators know it’s absolutely essential to use the “windshield,” that is, look at the work students do in class every day. By watching students tackle math problems and reading their essays and research papers, teachers can see how students approach problems and why they succeed or get tripped up. Then they can use that information right away. They can give feedback, shift their practices appropriately and steer students in a more successful direction.
Test scores add some useful information, like the speedometer, which needs to be checked periodically to avoid accidents or being ticketed for speeding. But they are not the most important or most helpful measures. A driver who looks at the speedometer and nothing else is going to crash or mow down innocent pedestrians in no time. The problem is, graduation exams and other high-stakes testing policies pressure teachers to focus on the speedometer and ignore other important measures.
Here’s how a teacher in a charter school in Boston, Massachusetts described her classroom. “Practicing the test is a ritual we repeat four times a year. That means four times each year the school shuts down for the first half of Monday and Tuesday as students silently grind through faux high stakes tests. Monday math. Tuesday reading comprehension. Wednesday the tests are scored. Thursday and Friday we teachers spend hours out of the classroom, examining tons of data generated by the tests and developing elaborate, individualized plans to increase scores for the real deal in March. That leaves very little time for actual learning or exchanging ideas” (Bloom, 2012).
In addition to problems with disparate impact, tests have “measurement error,” which means some children will fail even though they know the subject (Rogosa, 2001). Being able to take the test more than once helps, but does not solve this problem. There is also the well-documented problem of test anxiety: An accomplished student may freeze, not do well on the test, and be denied a diploma (Hembree, 1988). For this reason, among others, the professional Standards for Educational and Psychological Testing state that a major decision about a student should not be made on the basis of a single test score (AERA, 2000).
No one wants to see youth leave school without the skills and knowledge needed for success. Exam supporters say students shouldn’t get meaningless diplomas if they can’t pass the tests. But a student’s overall transcript, not a test score, is what makes a diploma truly meaningful and gives the most accurate picture of a student’s readiness for life after high school.
In fact, a major study released last week confirmed this with regard to standardized college admissions exams. Researchers analyzed the records of 123,000 students at 33 test-optional institutions over eight years (Hiss, 2014). Among their conclusions was that high school grades are much stronger predictors of undergraduate performance than are standardized test scores. Moreover, reliance on standardized test scores limits the pool of applicants who would be successful in college. In other words, it narrows rather than widens the gates of opportunity to a diverse population of students. With more than 800 colleges already adopting test-optional or flexible admissions policies, these findings are likely to strengthen the trend toward colleges deemphasizing standardized test scores in admissions decisions.
The study’s principal investigator, William Hiss, former dean of admissions at test-optional Bates College, commented on the study’s release: “Human intelligence is so multifaceted, so complex, so varied, that no standardized testing system can be expected to capture it.” It is striking that, as Rhode Island, Massachusetts and many other states consider adopting the next generation of high-stakes standardized tests as a way to ensure that students are “college and career-ready” when they leave high school, colleges themselves are increasingly abandoning reliance on standardized test scores to tell them who is ready for college.
The truth is that race and class performance gaps reflect more on what happens outside the classroom than inside. An analysis of high school test scores in Connecticut found socioeconomic factors alone account for about 85% of the variation in test scores in four subjects (Heffley, 2007). This mirrors years of studies that find family background predicts test scores best. If anything, this situation is getting worse. A recent paper by Stanford University Prof. Sean Reardon (2011) found the achievement gap based on economic status has grown 40% and is now double the size of the gap between blacks and whites. On this basis, the use of the NECAP test as a graduation requirement will have a huge discriminatory impact.
Unfortunately, rather than acknowledge the limitations and ineffectiveness of high-stakes testing, our federal and state education policy makers are doubling down on their test-driven strategies. But moving to new PARCC tests based on Common Core standards will not address the problems caused by high stakes (FairTest, 2013). The PARCC tests are too long and developmentally inappropriate for young children. They do not adequately assess higher order thinking and skills. The limited inclusion of “performance tasks” does not overcome pressure to focus on rote learning to prepare for multiple-choice and short answer questions. Because they too will be high-stakes, they will continue to cause narrowed curriculum, teaching to the test, and student disengagement and pushouts. Rhode Island, like the U.S. as a whole, needs to change course.
The good news is there are far better ways to assess students and evaluate schools. The nation’s most important example of an alternative, authentic approach to assessment is a short car trip away from here, in New York City. The New York Performance Standards Consortium uses a performance-based assessment approach, tied to project-based learning, which has been highly successful (2013). While the Consortium schools in New York City mirror the city’s student demographics, they graduate a higher percentage of students (much higher for English language learners and students with disabilities); their graduates go to college at higher rates; and those who enter college remain in their third year at rates that well exceed the national average (which includes all the children from wealthier families). Their teacher turnover rate is far lower, and they have far lower rates of student suspension.
This consortium should become a model – as the Providence Student Union has called for. With a similar system, Rhode Island could establish ways to ensure reasonable comparability so that ‘proficient’ in one system is ‘proficient’ in another. Methods for this task have been established. For example, Britain has multiple exam systems for entrance to university. Every year the tests must be calibrated because each college has applicants from different systems.
Thank you for considering my testimony. FairTest would be pleased to work with you and Rhode Island educators, parents and citizens to craft a different approach to graduation, one that would rely on local determinations of adequate achievement but that would establish methods to ensure the quality of the local determinations. We can be reached by phone at 617-477-9792 or email@example.com.
AERA. 2000. AERA Position Statement on High-Stakes Testing in Pre-K – 12 Education. American Educational Research Association. http://www.aera.net/policyandprograms/?id=378
Bloom, N. 2012, February 15. “We Need Problem Solvers, Not Test Takers.” [Web log post.] http://colabradio.mit.edu/we-need-problem-solvers-not-test-takers/
Borg, L. Feb. 14, 2014. “Schools show gains in math, reading.” Providence Journal Bulletin.
Dee, T.S. & Jacob, B.A. 2006. Do high school exit exams influence educational attainment or labor market performance? Social Science Research Network, April. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=900985.
FairTest. 2013. Common Core Assessment Myths and Realities. http://fairtest.org/common-core-assessments-factsheet
GoLocalProv. Feb. 14, 2014. “ACLU, Advocacy Groups Battle RIDE Over RI NECAP Scores.” http://www.golocalprov.com/news/new-aclu-advocacy-groups-battled-ride-over-ri-necap-scores/
Heffley, E. 2007. What do CAPT scores really tell us? The Connecticut Economy, Summer: 14-16.
Hembree, R. 1988. Correlates, causes, effects, and treatment of test anxiety. Review of Educational Research, 58(1).
Hiss, W. 2014. Defining Promise: Optional Standardized Testing Policies in American College and University Admissions. http://www.nacacnet.org/research/research-data/nacac-research/Documents/DefiningPromise.pdf
Hout, M. & Elliott, S. (Eds.). 2011. Incentives and Test-Based Accountability in Education. Committee on Incentives and Test-Based Accountability in Public Education; National Research Council. http://www.nap.edu/catalog.php?record_id=12521
Koretz, D. 2005. Alignment, High Stakes, and the Inflation of Test Scores. CRESST/Harvard Graduate School of Education. http://cse.ucla.edu/products/reports/r655.pdf
Kruger, L. & McIvor, T. 2013. Graduation Requirement Disproportionately Harms Students in Special Education. Unpublished manuscript.
Lang, S. June 2008. “Ed reform must move beyond MCAS.” Commonwealth Magazine. http://www.commonwealthmagazine.org/Voices/Argument-and-Counterpoint/2008/Education/MCAS-should-not-be-a-graduation-requirement.aspx
Massachusetts Department of Education. 2013. High School Dropouts 2012-13 Massachusetts Public Schools. http://www.doe.mass.edu/infoservices/reports/dropout/2012-2013/
Radcliffe, J. & Mellon, E. May 12, 2007. “TAKS tests cost 40,000 Texas seniors chance to graduate.” Houston Chronicle.
Reardon, S. 2011. “The Widening Academic Achievement Gap between the Rich and the Poor: New Evidence and Possible Explanations.” In Richard Murnane & Greg Duncan (Eds.), Whither Opportunity? Rising Inequality and the Uncertain Life Chances of Low-Income Children. New York: Russell Sage Foundation.
Reville, P. June 2013. “Seize the Moment to Design Schools that Close Gaps.” Education Week. http://www.edweek.org/ew/articles/2013/06/05/33reville_ep.h32.html