What data skills should K-12 leaders expect from their planning teams?

After reading dozens of district LCAPs this year, I’m appalled and worried. What appalls me most is not their length or the poor writing that prevails. (If you doubt me, take your own district’s LCAP in hand and just identify every sentence that’s expressed in the passive voice.) What appalls me most is the prevalence of sloppy logic and weak quantitative skills. Both are fundamental to solid plans; when either is missing, a plan is like a ship with a hole in its hull, likely to sink.

Let me offer a few examples. The illogic of a plan becomes apparent to me when I see the same remedy prescribed for every ailment. Are too many 3rd graders lagging in reading? The answer: professional development for teachers in grades K, 1 and 2. Are too many 7th graders struggling with algebra concepts? The answer: professional development. Are English learners not advancing to fluent English proficient status fast enough? The answer: professional development. In medicine, no single remedy cures every ailment. The era of the Wild West medicine show, when scam artists sold cure-alls for rheumatism, arthritis and the common cold, should have marked the end of pseudo-science. Unless someone can convince me that professional development has magical powers, I remain skeptical that it can be a tenable answer to everything. (If you think I’m just a doubting Thomas, let me point you to education researchers who wonder if PD is a tenable answer to anything.)

How about categorization errors? At a moment when we have mountains of data on individual students, they are still too often considered to be at risk for this or that based on the color of their skin or their ethnic origins. Assigning the attributes of a group to every one of its members is a logic error. Just as illogical is assigning to every member of a group the average attributes of the whole. (In the sciences, this type of error is called the ecological fallacy.) For example, insurance companies used to quote us car insurance rates based on our zip codes. The premise: those who live in one zip code share driving habits. Now those quotes reflect our individualized behavior. Why can’t education make this transition?
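To see why this is a fallacy, consider a minimal sketch in Python. The groups, scores and cut score below are all invented for illustration; the point is only that labeling every member with the group’s average misclassifies individuals on both sides of it.

```python
# A minimal sketch, using invented scale scores, of the ecological
# fallacy: assigning a group's average attribute to every member.

group_scores = {
    "Group A": [610, 640, 700, 720, 580],  # hypothetical individual scores
    "Group B": [650, 660, 670, 680, 690],
}

CUT_SCORE = 655  # hypothetical "at risk" threshold

for group, scores in group_scores.items():
    mean = sum(scores) / len(scores)
    # The fallacy: label every member with the group's average.
    group_label = "at risk" if mean < CUT_SCORE else "not at risk"
    # The sound alternative: judge each student on their own score.
    individual_labels = [
        "at risk" if s < CUT_SCORE else "not at risk" for s in scores
    ]
    print(f"{group}: mean={mean:.0f}, group label: {group_label}")
    print(f"  individual labels: {individual_labels}")
```

Run it and Group A is labeled “at risk” on average even though two of its students clear the threshold, while Group B’s “not at risk” average hides a student who falls below it.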

How about binary classification errors? The impulse to classify students as either “A” or “B” (at-risk or not-at-risk) abounds in districts’ and schools’ plans. Students’ mastery of math is all too often described as “meeting standard” (or not), rather than interpreted in scale score points with an allowance for the imprecision of the assessment itself. How easy it is to consider someone or some group to be “this” or “that.” The tempting simplicity of binary categories should not dominate LCAPs. We should expect measurement of a scalar sort, especially when forecasting improvements. Plans should assert that a certain investment of money and time is likely to produce a degree of improvement over a certain period. Yet plans are full of unqualified, binary predictions of X percent of students crossing a line called “meeting standard.”
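What would scalar reporting look like in practice? Here is one sketch in Python. The cut score and standard error of measurement (SEM) below are invented for illustration, not drawn from any actual assessment; the idea is simply that a score near the cut should not be forced into a binary verdict.

```python
# A minimal sketch, with invented numbers, of reporting scale scores
# with an allowance for measurement imprecision rather than a bare
# binary "meets standard" label.

CUT_SCORE = 2432  # hypothetical "meets standard" cut point
SEM = 25          # hypothetical standard error of measurement

def report(observed: int) -> str:
    # A rough plausible range of +/- 1 SEM around the observed score.
    low, high = observed - SEM, observed + SEM
    if low > CUT_SCORE:
        verdict = "above the cut even after allowing for error"
    elif high < CUT_SCORE:
        verdict = "below the cut even after allowing for error"
    else:
        verdict = "too close to the cut for a binary label to be trustworthy"
    return f"score {observed} (plausible range {low}-{high}): {verdict}"

for score in (2380, 2430, 2490):
    print(report(score))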

How did things get this sloppy? This is a topic for a book, not a blog. But I think one contributing factor to the prevalence of reasoning errors and mismeasurement is that superintendents don’t know the skills and knowledge that planners should possess. So they hire people with eyes wide shut, and then give them planning responsibilities without knowing whether they have the measurement skills needed. If leadership doesn’t set a standard for knowledge and professional competencies in the zone of quantitative reasoning, then good plans are written only rarely. They remain exceptions.

The good news is that these skills and competencies are anchored in a useful set of guides, the “Master Standards for Data Use,” written by 27 senior educators in a multi-state consortium. Their work was funded by the Institute of Education Sciences, and they published their findings in 2015 and 2016. I think these are the key that superintendents should reach for if they want to know the knowledge, skills and professional behaviors that their people should master. In addition, the authors of this important work have defined role-specific versions of the “Master Standards for Data Use”: one for data leads, one for teachers, and another for school and district leaders. Not everyone, after all, needs to know everything.

When district leaders do their next search for assessment directors, accountability directors and data leads, I urge them to incorporate these standards into the requirements for each position. I urge them to apply skills tests to verify that candidates can bring those skills to the job at a professional level. And I urge superintendents themselves to learn some new data moves and raise their own skills. Take Jim Popham’s warnings about assessment illiteracy seriously. Here’s what he wrote in the preface of his book, The ABCs of Educational Testing: Demystifying the Tools That Shape Our Schools:

“School administrators today who know little about educational testing are almost certain to make major measurement-related mistakes themselves or, in the absence of sufficient measurement acumen, are likely to allow others to make measurement-linked mistakes. This is not an era wherein assessment ignorance on the part of a school administrator can be countenanced.”  (Source available courtesy of Corwin.)

Let me declare my self-interest in this cause. I lead a team that works to raise the assessment savvy, data literacy and analytic reasoning skills of school and district leaders. So yes, I am both passionate and self-interested.

No longer must superintendents rely upon résumé claims alone when hiring key staff and leaders charged with interpreting quantitative evidence. Now they can turn to the “Master Standards for Data Use” to see what they should expect from their teams, and from themselves.
