“I think I’ve had a bit of a breakthrough. I think I’ve just had my own lightbulb moment! I realise this is what you and Barry said to me about 6 months ago, but I’ve finally understood what you meant.”
I was incredibly lucky to be awarded an RSC CERG teacher-researcher fellowship recently. I will be further exploring an area of chemical education that I find particularly interesting: how we can support students to understand tricky concepts.
One thing that’s apparent from my reading, and from my personal experience, is that sometimes these conceptual breakthroughs are hard-fought. Often the journey to mastery is a liminal one, with steps backwards as well as forwards. And certainly students need time to leave a concept for a bit, mull it over, revisit it and, possibly, see it in a different light.
Mentoring and qualitative methods
When I found out I’d been awarded the fellowship, I said that the thing I was most excited about was the mentoring aspect. Once again this week, I was reminded just how valuable this is.
I’m pretty comfortable with quantitative methods, even if I’m not an expert in them. They’re within my comfort zone. I do like an Excel spreadsheet, and I understand the need for comparisons and controls and statistical analysis.
But right from the start of this project, Suzanne and Barry talked about interviews and case studies, and how useful they might be. I took it all in, made sure I included interview permissions in my letters to students, thought about the logistics of it… then left it and carried on with everything else.
The ‘everything else’ was the bit I understood: quizzes to diagnose misconceptions, worksheets to help frame students’ thinking, and analysis of both these. And I have some interesting numbers, some of which I’ve talked about in my MICER poster.
So when I spoke to Suzanne this week, I told her that things were going fine. I’d had some ups and downs. Certain things hadn’t gone to plan, just as you’d expect with any research project, but overall I knew what I was doing, and where things were going.
Except for one aspect: the interviews.
I didn’t really know who to interview or what to ask.
Interviews: who, what, why and how
We were due to chat later that week, but in the meantime Suzanne sent me this article to read. Around the same time, I was discussing Johnstone’s triangle with David Read, and whether it’s really only useful for teachers, rather than students (as someone suggested in a comment on his article; see footnote).
“I guess that’s what I’m trying to find out”, I said.
Because the whole point of this study is that I *think* that Johnstone’s triangle might be a useful thinking framework for students, and I *hope* that by helping them to explicitly make the links between levels, it’ll help them understand Chemistry better. But I don’t know this! Stuart Kime talks about the importance of ‘equipoise’ (for example in the EEF’s DIY Evaluation Guide). After all, if you *know* what the outcome of a study is going to be, there’s no point in doing it!
A few passages stood out for me when I read the paper from Suzanne:
“while quantitative research focuses predominantly on the impact of an intervention and generally answers questions like ‘‘did it work?’’ and ‘‘what was the outcome?’’, qualitative research focuses on understanding the intervention or phenomenon and exploring questions like ‘‘why was this effective or not?’’ and ‘‘how is this helpful for learning?’’ The intent of qualitative research is to contribute to understanding.”
So your question is slightly different: you’re still at the why and how stage, rather than the how much stage.
“Quantitative research requires standardization of procedures and random selection of participants to remove the potential influence of external variables and ensure generalizability of results. In contrast, subject selection in qualitative research is purposeful; participants are selected who can best inform the research questions and enhance understanding of the phenomenon under study.”
So this is less about control groups, and more about richness of responses and participants.
“Quantitative research requires statistical calculation of sample size a priori to ensure sufficient power to confirm that the outcome can indeed be attributed to the intervention. In qualitative research, however, the sample size is not generally predetermined. The number of participants depends upon the number required to inform fully all important elements of the phenomenon being studied. That is, the sample size is sufficient when additional interviews or focus groups do not result in identification of new concepts, an end point called data saturation …[for example] as data are analyzed, the researchers may note that only positive experiences and views are being reported. At this time, a decision could be made to identify and recruit residents who perceived the experience as less positive.”
What am I trying to find out?
So I really need to start from the questions I’m trying to answer: is the worksheet usable? Is it suitable for students? Is it helpful? Did it do what I hoped it would do? Why didn’t some people like it? Which aspects didn’t they like? How could we change this?
But then also… can I observe them using it to make links they wouldn’t otherwise have made?
And suddenly, I understood why I was doing the interviews. It’s not about control groups and effect sizes. It’s not about Excel spreadsheets and bar charts. It’s not about confidence values and comparisons. It’s about finding out what I want to find out, and working out what is worth pursuing, and if so, how.
Both Suzanne and Barry had basically told me all this back in the autumn, when I’d started this project. And I thought I understood what they were saying. But as with all the best concepts, it took me a while to really get to grips with what they meant. And I had to leave it for a while first, then come back to it and look at it differently.
So now I think I know what I’m doing. I have an idea of what I want to ask students in the interviews. Next step: email and arrange times to do them!
I really sympathise with the point of view expressed in the comment at the end of the article that “Johnston’s triangle and theory is for teachers, not students. Working memory limitations and undeveloped thinking in concepts hinder students capability to take in account two or more sides of the triangle, and some other relevant aspects too.” But I’m hoping that my KS5 students have a sufficient foundation in the topic with which I’m using it. I guess if it does turn out to be a useful tool, the next step will be to look at something that’s less familiar to them.