By Marsha Sutton
I was wondering how to present yet another story on student test scores. Thanks to the recent showing of the movie “Race to Nowhere” at Del Mar Hills Academy and the questions asked afterward, I found my opening.
One complaint from an audience member about No Child Left Behind and the resultant testing frenzy was greeted with nods of sympathy, and anger toward the perpetrators of what has become an all-consuming obsession with testing.
Although California students were tested at various intervals before, the drive to test-test-test ramped up in earnest with the passage of the No Child Left Behind legislation under former President George W. Bush. Initially a worthwhile idea, NCLB reinforced efforts by states to test students on knowledge of standards-based curriculum and to examine data revealing how well students from various racial, cultural and societal demographic backgrounds were performing.
After all, how can we measure how well our students and teachers are doing unless we test them? But then the law of unintended consequences kicked in, and testing mania replaced common sense, sacrificing real learning in favor of rote memorization and substituting a shallow understanding of excess material for deeper knowledge that could translate into critical thinking skills.
But one positive, and quite significant, result of the testing madness is that educators and the public now pay more attention to the performance of groups of students who were all but forgotten before.
Examining the achievement of subgroups is a relatively new concept, and a critical one. Without the ability to see how traditionally low-performing demographic subgroups like Hispanics, English language learners and low-income students are faring, these students’ scores would be lost in the school’s composite score.
Ken Noah, superintendent of the San Dieguito Union High School District, said at a student achievement workshop last month that without the disaggregated data that separates test scores by subgroups, it would be far too easy to ignore the performance of individual groups of students.
Those scores would “get buried in the overall high performance of the total,” he said, noting that the system “doesn’t let you off the hook.”
For example, Torrey Pines High School achieved an Academic Performance Index number of 870 this year – a high score that placed it among the top three comprehensive high schools in the county. Without the ability to look at the API numbers for separate subgroups, however, satisfied educators could easily believe that everything’s going swimmingly.
I’m picking on Torrey here, but the point applies to all schools. Lacking data on the academic performance of specific subgroups of students, educators would be unable to recognize, let alone narrow, the achievement gap that exists between white and Asian students on the one hand and African-Americans, Hispanics, English learners, low-income students and students with disabilities on the other.
For Torrey Pines, this year’s overall API was 870. But for its subgroups, the breakdown was as follows:
Students with disabilities: 601
Quite a difference. Yet if you only see the overall school’s number, you’d be blind to the needs of a significant portion of the student population.
This isn’t to suggest that Torrey Pines isn’t doing remarkable work with its subgroups – quite the opposite in fact. As large as the gap may seem, the district has never worked harder on finding ways to address individual student needs in order to narrow the disparity.
“In every single measurable area and every single category and every single subgroup, gains were made, and that’s incredible,” said SDUHSD associate superintendent Rick Schmitt, speaking about the achievement of students district-wide at the October workshop.
Implementation of new programs and procedures for assessment in the district has allowed staff and teachers to zero in on which students need what kinds of support, said David Jaffe, SDUHSD’s executive director of curriculum and assessment.
The focus on assessment data allows teachers to measure success by student, by classroom and school-wide, Schmitt said. With this data, teachers can focus on individual students for intervention, remediation and support – and, through collaboration with colleagues, they can learn what techniques are more effective.
Principals’ meetings now begin with a discussion about each student, “and that’s new,” Schmitt said.
The ability to review tests gives teachers clues as to what material is not being absorbed by students and which standards need to be taught in greater depth, Jaffe said. These assessments can also reveal which teachers might be providing more effective instruction. The purpose for this is not to punish the less-effective teachers but to share best practices and help all teachers gain insight into techniques and strategies that work better.
Resistance from teachers to the district’s data-driven approach initially focused on two main objections, Jaffe said: whether the data would be used to evaluate teachers, and whether teachers would still be able to practice their art and teach in their own style. He said almost all teachers have had their fears assuaged and now embrace the assessment measures after seeing the benefits.
One byproduct of the subgroup reports is that they can reveal puzzling irregularities in student performance from school to school.
Canyon Crest Academy, the district’s other high school in its southern portion, earned an overall API of 894 this year and had statistically significant numbers of students in only three subgroups. Its 2010 API numbers were:
Students with disabilities: 769
The 168-point disparity between students with disabilities at Torrey Pines and Canyon Crest is striking but explicable: TPHS has a program for severely emotionally disturbed students while CCA does not, Jaffe said. And there are fewer low-income students, English learners and Hispanics at CCA because the majority of those students seem to prefer TPHS over CCA, despite the district’s efforts to attract a more diverse student body to Canyon Crest, Jaffe said.
At the San Diego County Office of Education’s eighth annual Achievement Gap Task Force news conference last month, SDCOE superintendent Randy Ward applauded the commitment all 42 school districts in the county have made to closing the achievement gap in their schools.
Although Bill Kowba, San Diego Unified School District superintendent and chair of the county task force, said there remain “unacceptable gaps,” there were improvements county-wide in English/language arts and mathematics for most subgroups. But the gains, although positive, still indicate that not enough students are proficient.
For eighth-grade English/language arts, for example, the percent proficient or advanced on the 2009 and 2010 California Standards Tests broke down as follows:
Students with disabilities: 21% (2009), 28% (2010)
It’s hard to say whether these figures are depressing or encouraging. Growth? Yes. But success? Clearly, no.
On the positive side, Kowba showed that the gap was nearly eliminated in the pass rate for the California High School Exit Exam and said that many low-performing schools had made enough gains this year to exit the dreaded Program Improvement status.
So we examine, re-examine, fund new data analysis systems, sort and select the data in new and different ways, compare and contrast, and drill down to individual students. The ostensible reason – to address the needs of struggling groups of students – is sound. But what have we sacrificed in the process?
That modest amount of testing, meant to determine how well kids are learning, has turned into a monster fixation that has spawned dramatic growth in data processing departments and kept academic statisticians busy for life.
Eliminating the achievement gap is a societal priority. The way to narrow the gap is to examine the achievement of groups of often overlooked students and address their specific needs. Yet we can’t evaluate scores and judge progress unless we test. And students are clearly over-tested, teachers object to being judged on the results, and districts place far too much importance on the outcomes. Hence we find ourselves in this bewildering circular predicament.
Too much testing is bad, but so is the achievement gap. How we scale back on testing without losing the ability to help real students in need is the million-dollar question.
Marsha Sutton can be reached at