In December 2015, the cheekily named “Study of Maternal and Child Kissing (SMACK) Work Group” published a study titled “Maternal kisses are not effective in alleviating minor childhood injuries (boo-boos): a randomized, controlled and blinded study” in the Journal of Evaluation in Clinical Practice. The study’s authors reported that in their rigorously designed experiment, which included two different control groups and a robust sample size of “943 maternal-toddler pairs recruited from the community,” maternal kissing of boo-boos increased the “Toddler Discomfort Index (TDI)” at “5 minutes post-injury.” The study concluded that the “practice of maternal kissing of boo-boos is not supported by the evidence” and recommended “a moratorium on the practice.”
Although the journal is real, the study is (of course) a spoof - a mocking jab at the cool, data-driven objectivity of empirical studies taken to an extreme. The piece, intended to draw attention to pitfalls in the pursuit of Evidence-Based Medicine, prompted a flurry of responses, including articles in Nature, Discover and Business Insider. The tone of these reactions was frequently indignant, with writers taking umbrage at the thought that satire should appear in a supposedly respectable and influential peer-reviewed clinical journal.
The lesson to be learned from both the article and the storm of commentary it provoked is a powerful one - namely, that our obsession with quantitative data, with objective, properly controlled studies, with reducing complex situations and systems to discrete, verifiable measurements, frequently misses the entire point. Treating a patient, or educating a student, is about relationships between people, occurring in the presence of academic content and within a dynamic system. Any “data” that fail to capture the nuances of these people and systems, or that attempt to reduce people and systems to easily evaluated categories of measurement, are at best useless and at worst harmfully misleading.
As an academic research scientist turned educator, I wholeheartedly embrace the importance of evidence. We need evidence to evaluate whether our pedagogy is sound, whether our students are learning, and whether our schools are thriving. But we should also take care to ensure that the evidence we rely upon is derived from asking the right questions, thereby providing meaningful data that actually tell us something about the quality of a student’s experience.
In my first weeks at GLP, I have been immersed in survey design. Our clients need to gather information from various stakeholders to inform strategic planning, and online surveys are a convenient mechanism for collecting this data. But of course the experiences of students, parents, faculty and other members of a school’s community are entirely subjective, context-dependent and relational. Thus, to make sure that our data is objectively meaningful and useful, we have to engage with the highly subjective and personal. We need evidence, in the form of concrete and fully elaborated examples, to support and explain the data. So how do we accomplish this goal? We take initial survey information and dig deeper. Focus groups, empathy interviews and shadowing allow us to unpack language, hear individual stories, uncover hidden insights and probe the experience deeply.
So what does it mean to make data-driven decisions? It means you consider context, ask relevant questions in precise language, and talk to stakeholders to ensure that you understand what their responses actually mean. If you don’t, the data you end up with is probably meaningless, and you’ll start making crazy, untenable recommendations, like suggesting that mothers stop kissing their children’s boo-boos “because the data say so.”