Make Teaching Easier
Teachers are asked to use data to inform instruction. But that’s much harder than it’s made out to be.
The school I taught at prided itself on its commitment to using data to inform instruction. As part of this commitment, I would meet with other seventh-grade English teachers after each assessment that our students took. Because the tests were online, the software would grade each multiple-choice question instantaneously, giving us teachers access to a wealth of data that we would use to guide our instruction. Using this information, we could group students together based on their ability levels, target individual students with extra help, and reteach concepts that our entire classes struggled with during the first go-round.
This worked perfectly—in theory. In practice, it was one step short of a total disaster.
As I see it, there are two main reasons why data-informed instruction fell flat in my school. The first was that the curriculum we used, from Houghton Mifflin Harcourt, or HMH, relied on outdated and non-research-based teaching methods. Readers may remember me discussing the “science of reading” movement that has sprung up in recent years, but if not, here are the basics: The skill known as reading is a combination of being able to decipher the words on a page and drawing on one’s background knowledge to make connections between concepts. In this view, reading comprehension is only partly a skill—that’s the phonics part—while the rest of it is a gradual buildup of background knowledge.
HMH did not treat reading as a gradual buildup of background knowledge. Instead, it relied on a faulty assumption that reading is a series of discrete skills such as recognizing the theme, identifying the main idea of a nonfiction passage, and comparing different types of literature with similar themes. It’s not that HMH was a particularly bad actor in this regard. Actually, HMH built its curriculum to comply with the Massachusetts state standards for English Language Arts. This meant that each unit, and therefore each assessment, was built around the standards outlined by the Commonwealth of Massachusetts. Each assessment would contain a dozen or so multiple-choice questions and one or two open-response questions. Each question would be tied to a particular standard, and each assessment would only test two to three standards. In theory, if a teacher did a poor job teaching a specific skill, their students would struggle with the test questions that were supposed to assess that skill. Meanwhile, for skills that students had mastered, the assessment reports would show that students did well on the test questions that assessed those skills.
But there was almost never any recognizable pattern. On every assessment I can remember, a majority of students would miss one question tied to a particular standard, yet a majority would correctly answer a different question testing that same standard. If reading comprehension were a discrete skill, this would be impossible—or, at least, very unlikely. Yet it happened on every single assessment.
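The pattern (or lack of one) is easy to check for. Here is a minimal sketch of that kind of consistency check; the question IDs, standards, and correct-answer rates below are invented for illustration, not drawn from any real assessment report.

```python
# A sketch of the consistency check described above. All data is invented
# for illustration; a real gradebook export would have more columns, but
# the logic is the same.

# Each record: (question_id, standard, fraction of students who answered correctly)
results = [
    ("Q1", "RL.7.2", 0.35),  # theme question most students missed
    ("Q2", "RL.7.2", 0.82),  # same standard, but most students got this one right
    ("Q3", "RI.7.1", 0.71),
    ("Q4", "RI.7.1", 0.66),
]

# Group correct-answer rates by standard
by_standard = {}
for _, standard, pct in results:
    by_standard.setdefault(standard, []).append(pct)

# Flag standards where question-level results diverge sharply --
# exactly the pattern that made our data meetings useless.
for standard, rates in by_standard.items():
    spread = max(rates) - min(rates)
    if spread > 0.25:
        print(f"{standard}: inconsistent {rates} -- spread {spread:.2f}")
    else:
        print(f"{standard}: consistent {rates}")
```

If reading comprehension were a discrete, teachable skill, the spread within each standard should be small; what we saw instead looked like the first standard above, week after week.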
As a result, our regular meetings to discuss the data were uniformly useless. We would all ponder why students were struggling with a standard on one question but doing well on the same standard on another. These meetings were supposed to leave us with concrete next steps and lesson plans to reinforce the skills students hadn’t mastered. Instead, we invariably left more confused and unsure of our instructional mandate than we were before.
This brings me to the second reason why we weren’t able to use data to inform our instruction like we should have been. Teachers are not trained to use data. Schools and districts talk a big game about having teachers recognize patterns and make decisions based on the information. But in reality, using data is much more complicated than it’s often made out to be—and teachers don’t really know much beyond the surface level.
Let me take a step back and look at the bigger picture of teachers in general. Teachers, especially since the pandemic, are often viewed as heroes. As a result, they are often imbued with superhuman qualities that they don’t really possess. For instance, many people assume that teachers are smarter than the average cookie. This is a faulty assumption. In the late 1990s, 30 percent of teachers scored in the bottom third of SAT scores. This has improved over time, but even as recently as 2015, 52 percent of all SAT and ACT takers scored higher than the average new teacher. In other words: Teachers may not be the dullest tools in the shed, but they’re not the sharpest, either.
I saw this first-hand when I was teaching. Several of my colleagues struggled to pass the teacher licensure exams, with some scoring just above the minimum required score and others having to retake the exam several times. If teachers are barely passing the basic licensure exam, how can we expect them to recognize patterns in data and use them to inform their own instruction?
I never earned my master’s degree in education, so I can’t confirm this, but my hunch is that most of these teacher preparation programs focus too little on data to sufficiently prepare teachers to use data in their instruction. Colleagues of mine who were taking classes to get their degrees spent most of their time designing lessons. This is all well and good, but if you expect teachers to use data, shouldn’t you put them through the wringer with statistics courses?
But this wouldn’t really be much of an issue if administrators were sufficiently trained in using data. If academic administrators knew how to use data, you could imagine a scenario where they would go over your class’s data with you and design interventions to help you reinforce the concepts that needed reinforcing. But it’s not clear that administrators know how to use data any better than teachers do.
As a result of all this, teachers are often flying blind when it comes to teaching students based on what works. Data-informed instruction is a promising way of boosting students’ achievement, but neither teachers nor administrators really seem to know how to put it into practice. And yet, both teachers and administrators seem to think they know how to use data—hence our regular meetings where we reviewed data.
This is a pretty damning indictment of how we conceive of our education system.
I have a colleague who hammers home a consistent message to people he talks to: We have to design an education system to fit the three million-plus teachers who are currently serving American students. Education reformers have a habit of dreaming up lofty ideas of restructuring the whole system without recognizing the obstacles that get in the way. Instead, we should try making improvements that will help our current fleet of teachers—imperfect and underwhelming as they often are—do their own jobs better.
That same colleague advocates curriculum reform as a potential solution. Teachers spend hours each week scouring the internet to gather instructional materials to supplement the lackluster curricula that they’re delivering to their students. Yet the quality of these materials is usually quite suspect. There’s no reason why teachers should be doing this. For one thing, they’re gathering a bunch of materials that don’t really help their students learn. In addition, they’re spending hours doing something that the curriculum designers should have done in the first place. This takes time away from doing more valuable things like grading and developing relationships with students and their families.
Another part of curriculum reform that’s promising—and that is particularly relevant to this newsletter—is providing coherent, actionable data reports to teachers. Because assessments are usually online, teachers have access to more data than ever before, including how long students spent on each question, whether they went back into the text to check their answers, and which questions students struggled with the most. But as I laid out earlier, the curricula are often designed so poorly that the data don’t mean anything, and the reports aren’t that helpful for teachers. Curriculum designers should spend far more time developing useful software and clear explanations of the data for teachers, so that we’re not relying on teachers with poor data literacy to make instructional decisions.
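To make this concrete, here is a sketch of the kind of plain-English report line a curriculum vendor could generate automatically instead of handing teachers raw percentages. Every field name, threshold, and recommendation below is invented for illustration; no real product works exactly this way.

```python
# A sketch of an automatically generated, teacher-facing report line.
# All thresholds and advice strings are invented for illustration.

def summarize_question(question_id, standard, pct_correct, median_seconds):
    """Turn raw item statistics into a sentence a teacher can act on."""
    if pct_correct < 0.5 and median_seconds < 20:
        advice = "students may be guessing; reteach before reassessing"
    elif pct_correct < 0.5:
        advice = "students engaged but struggled; review this passage together"
    else:
        advice = "no action needed"
    return (f"{question_id} ({standard}): {pct_correct:.0%} correct, "
            f"median {median_seconds}s -- {advice}")

print(summarize_question("Q7", "RL.7.2", 0.42, 15))
print(summarize_question("Q8", "RL.7.2", 0.88, 45))
```

The point isn’t this particular logic—it’s that the interpretive step happens in software written by people who understand the data, rather than in an after-school meeting of teachers who were never trained for it.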
One obstacle here is that teachers, like most other self-respecting professionals, don’t like to view themselves as cogs in a wheel. They may resent being told by curriculum companies how to adapt their instruction based on the data, assuming they can do it better themselves. I’m not sure there’s a way around this other than to emphasize to teachers that their time is best spent on the things teachers do best, including classroom management and building relationships with students and their families. This would be a minor shift in how we conceive of the teaching job, but it would still likely ruffle some feathers.
There is no silver bullet to education reform. Especially in the U.S., where our education system is spread out over 50 states plus D.C., new ways of teaching or structuring education have to jump over enormous hurdles. This is why focusing on the level of the teacher holds the most promise. We should do what we can to improve the productivity of each teacher—and this means that we shouldn’t rely on teachers to interpret the tea leaves of their students’ assessment data and make informed decisions about how to shift their instruction. That’s something that really should be left to the people who truly know what they’re doing.
An informative article!! I do wonder how much of a challenge it could be for 'older,' more experienced teachers? How many are willing to admit they need help? I am not in the field, so I could be totally wrong, and all those teachers are not just "doing time" but are anxious to find more productive ways of teaching. Just talking as an "old man": in the late '60s and early '70s, when I taught, we had nothing close to what you are talking about. Mostly I felt I was on my own, with some help from that more experienced teacher in the next room. But nothing formal. Computers? What were those? As for formal testing as you described it, I had questions at the end of each chapter and, especially as I look back, they were of little use to me in developing a comprehensive intervention plan. Mostly what many of us did was work at developing healthy relationships with some of the students and with parents who cared enough to come in and talk with us. And many of those kids already had motivation to learn. So in my way of thinking, education has significantly improved, but, as you share, I can see we still have a long way to go.
Love the idea of data. It’s really all that should be relied upon. Without adherence to data, politics quickly creep in, and we all know too well where that takes us. Right where we are. Busted.