This chapter probes the challenges of measuring gaps in achievement and gaps in opportunities to learn. It begins with a review of our tendency to think too simplistically about cause and effect; the reasoning that people bring to evidence of differences is often what leads them to misinterpret gap measures. Sound gap analysis begins with clarifying one's questions. Scenarios drawn from real-world gap questions include analyses of suspension rates, gaps in test scores, gaps in opportunities to learn, and the detection of bias in teacher-assigned grades. Models of high-quality gap analysis from the Stanford Educational Opportunity Project show the power of presenting results in context and of emphasizing the rate at which students learn rather than their test scores.
This chapter is about the measurement of differences and the ways that people understand and communicate those measurements. This admittedly modest aspect of the larger topic has received less attention than it deserves. Oddly, everyone says they’re concerned about these gaps, but almost no one outside of the social sciences uses numbers to describe them. Consider that to be a warning sign.
When you're working to turn data into evidence to support an argument, consider yourself both an architect and a builder. The construction field has a term for this: design-build. You are designing how to assemble the data, and you are building it into a work of well-structured evidence that will stand up to criticism and persuade people. Three factors make gap analysis a hazard zone. First, many people build their evidence with poor-quality elements. The data they use don't mean what they think they mean. They disregard imprecision and confuse noise with the signal they're seeking. Second, they don't allow for the possibility that someone who views their evidence from another vantage point might reach an entirely different conclusion; as builders, they've viewed their creation only from their own side. Strong evidence earns that strength by being viewed and critiqued from many angles. Third, the logic that links one observation to another may be flawed. Causality may be presumed where only correlation exists. Bricks of data may have been joined with a faulty batch of mortar, or arrayed improperly. This is a hard-hat job.
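To make the signal-versus-noise point concrete, here is a minimal sketch in Python. The scores and group names are invented purely for illustration; the sketch shows one common way an analyst might check whether an observed gap between two groups is larger than the imprecision of the measurement.

```python
import math

# Hypothetical test scores for two groups of students.
# These numbers are invented for illustration only.
group_a = [652, 641, 660, 655, 648, 663, 650, 644, 658, 651]
group_b = [648, 655, 642, 650, 657, 641, 647, 652, 644, 656]

def mean(xs):
    return sum(xs) / len(xs)

def sample_variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def gap_standard_error(a, b):
    # Standard error of the difference between two independent means.
    return math.sqrt(sample_variance(a) / len(a) + sample_variance(b) / len(b))

gap = mean(group_a) - mean(group_b)
se = gap_standard_error(group_a, group_b)

print(f"Observed gap: {gap:.1f} points")
print(f"Standard error of the gap: {se:.1f} points")
# A rough 95% interval for the true gap. If it comfortably includes
# zero, the observed difference may be noise rather than signal.
print(f"Approximate 95% interval: {gap - 1.96 * se:.1f} to {gap + 1.96 * se:.1f}")
```

With these toy numbers, a three-point gap comes with a standard error of almost three points, so the interval spans zero: at samples this small, the observed difference is indistinguishable from noise. That is exactly the imprecision a careless builder of evidence disregards.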
For viewers of gap analysis, the work is also a risky proposition. Regard evidence about gaps with care. Get some distance, bring your binoculars, and examine the evidence from afar. Look at it from several angles. Then step closer and look for signs of skilled craftsmanship. Just as a well-built house reveals the skill of the builder and architect, a poorly built structure will reveal its flaws if you look at it closely: corners that aren't true 90 degrees and doors that don't fit squarely in their frames.
In the examples that follow, we'll show you evidence that's flawed and evidence that's well built. We'll share questions of varying quality, as well as evidence that at times doesn't really address the question at hand. Put on your skeptic's thinking cap and toughen up your emotional armor. This is a conversation where moral and ethical issues, questions of fairness and equity, are front and center. Social justice questions and gap analyses are often intertwined. That makes a reasoned, logical approach to the measurement of gaps all the more important, even if it's more difficult…
Epstein, David, and ProPublica, "When Evidence Says No, but Doctors Say Yes," The Atlantic (February 22, 2017).
Hattie, John, Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement, Routledge (2009).
Institute of Medicine, Initial National Priorities for Comparative Effectiveness Research, National Academies Press (2009), DOI: 10.17226/12648, ISBN 9780309138369.
Pearl, Judea, and Dana Mackenzie, The Book of Why: The New Science of Cause and Effect, Basic Books (2018), 418 pages.
Pogrow, Stanley, "How Effect Size (Practical Significance) Misleads Clinical Practice: The Case for Switching to Practical Benefit to Assess Applied Research Findings," The American Statistician (March 2019), pages 223-234, DOI: 10.1080/00031305.2018.1549101.
Pogrow, Stanley, Authentic Quantitative Analysis for Education Leadership Decision-Making and EdD Dissertations: A Practical, Intuitive and Intelligible Approach (second edition), International Council of Professors of Educational Leadership (2017), 323 pages.
Rich, Motoko, Amanda Cox, and Matthew Bloch, "Money, Race and Success: How Your School District Compares," New York Times (April 29, 2016).
Miller, Jane E., The Chicago Guide to Writing about Numbers (second edition), University of Chicago Press (2015), 360 pages.
Wasserstein, Ronald L., Allen L. Schirm, and Nicole A. Lazar, "Moving to a World Beyond 'p < 0.05'," The American Statistician (March 2019), pages 1-19, DOI: 10.1080/00031305.2019.1583913.