Using Educational Research to Inform Our Daily Practice: Caveats and Opportunities

“Educational research is the beginning of a conversation, not the end of one.”

I’ve been saying that phrase quite a lot these days, both in my daily work at the University of Mississippi and in the workshops I have been facilitating at other campuses.

Although I’m not often given to pithy maxims, this one seems to resonate with folks. Here is what I mean:

  1. Educational research (e.g., research on the effectiveness of teaching strategies) can only ever provide us with a limited perspective on one aspect of a key question. No paper is *ever* going to have all the answers to our teaching questions or even a single, complete answer to our inquiries about teaching practices simply because there are too many variables when it comes to the classroom. It would be unreasonable to expect any single paper or even a cluster of papers to provide this for us.
  2. No study is perfect. Even good papers can have methodological oversights. Even great papers will be limited in scope (see #1).
  3. No matter how much we would like to have neatly packaged answers about pedagogical conundrums, the truth of the matter is that such solutions simply don’t exist. Even the debate about lecturing vs. using active learning in the classroom is far more nuanced than we often admit.
  4. Do numbers 1, 2, and 3 mean that we should discount the value of educational research for informing our teaching practices? Absolutely not!

If you are a faculty member or graduate student for whom educational research is the primary means of scholarly communication, promotion, tenure, and professional advancement, then by all means dissect methods, stats, p-values, and much more. This work is very valuable, and I don’t want to discount it. I enjoy talking to my fellow medievalists about which manuscript of The Canterbury Tales is more representative of Chaucer’s vision for the project. At the same time, it’s important to acknowledge that all of this is inside baseball. When I think about what I want colleagues and students outside of Medieval Studies to know about The Canterbury Tales, there are far more relevant questions I want them to consider.

If, on the other hand, we are thinking about an instructor of a college-level course in her or his discipline who is seeking out more effective ways of teaching, I would say that our concerns are a bit different. What does this colleague need to see in a study to gain confidence, not that an effect is statistically significant, but enough confidence to give the strategy a try in the classroom, to get feedback from students, to refine and try again? Must the bar for educational research be so high that it prohibits experimentation in our courses? I don’t think it needs to be, but conversations in the higher ed press and on social media sometimes set it at a place that makes the work feel unapproachable, or perhaps even unusable, for those who are doing the work of teaching day in and day out. I think that’s a missed opportunity on many levels, and it can lead to an over-reliance on intuition and assumption rather than evidence when choosing pedagogical strategies.

This is not to say that each and every study rises to the level of confidence I just described. What I’m proposing here is that one use of educational research is to provide us with evidence for testing something out in our classrooms, and to make this determination we will still need to interrogate the studies, just with a slightly different eye.

These issues came into sharp focus two weeks ago when a study was published in Proceedings of the National Academy of Sciences arguing that a) students learned less in a class session taught entirely through lecture than they did in a session taught through active, engaged pedagogies, and b) they felt like they learned more in the lecture session. Immediately the lines in the rhetorical sand were drawn. Eric Mazur, Harvard professor and developer of the pedagogical strategy called Peer Instruction, said “This work unambiguously debunks the illusion of learning from lectures.” Others joined this chorus. Of course that’s not true. No single study could *unambiguously debunk* anything.

On the other side of the conversation, some researchers looked very closely at the methodology and the statistical analysis in the paper. Clarissa Sorensen-Unruh wrote a very thoughtful blog post about some of the issues she found at this level of analysis. I learned a lot from her and from others parsing the study with this degree of detail, but at the same time I can’t help but think that we see more of the trees than the forest when we do this. What is there for individual teachers to take away from a study when we move in this direction? If every paper received this level of scrutiny, the conversation would either proceed slowly or not at all. To be clear, serious mistakes should rule a study out of our consideration, but, beyond that, how do we decide whether something is worth trying?

What do we make of all this? If we think of educational research as the beginning of a conversation rather than the end of one, then we’ll ask whether we are convinced that a given strategy might be something we could try out in our courses. Then we’ll look at our own results, compare them to other research that comes out, and continue the conversation from there. For the study I’ve just been describing, I’m intrigued enough to test out a similar approach and to suggest that others consider trying it too. We need to take what we find valuable and give it a go. Regardless of how well a strategy fared in a study, it may work (or not) in our own courses, and that is something worth exploring.

The same, by the way, is true of larger research programs, like the kind that led to models such as Carol Dweck’s mindsets and Angela Duckworth’s grit. These frameworks were all the rage in educational circles for years, but recent meta-analyses have called into question the efficacy of growth mindsets and the predictive power of grit. Both threads of research, the original work and the meta-analyses, are important. Do we throw grit and mindsets out wholesale because of this new work? No! But we approach them with caution, take the pieces of them that may be valuable for our students, and evaluate them for ourselves.

As someone who has benefited greatly from the many outstanding research studies on teaching and learning in higher education, I hope that this post will be taken in the spirit of debate. We need more discussions of teaching in higher ed, not fewer, and I hope educational research can play a major role in these conversations.

*I’m grateful to Sarah Rose Cavanaugh, whose Twitter thread on the PNAS study spurred me on to write this post.*