Saturday 15 April 2017

The importance of evidence-informed practice

I wanted to title this post the importance of evidence-informed practice, but unfortunately I cannot put bold words in the title. There has been much discussion of this idea on edu-Twitter recently, some of which I have involved myself in, and so I thought I would take the time to flesh out my points more fully in a blog post.

One of the quotes that created a bit of controversy around this issue was used at the Chartered College of Teaching conference in Sheffield, in a session delivered by John Tomsett, Headteacher of Huntington School in York and author of the "This much I know..." blog and book series. The quote was taken from Sir Kevan Collins, CEO of the Education Endowment Foundation:

"If you're not using evidence, you must be using prejudice."

This quote caused quite a bit of disagreement, with some people very much in favour of the sentiment, and some taking great exception to the provocative language used.

I had an interesting discussion on Twitter about this quote, with my interlocutor seeming to hold the viewpoint that, because all children are different, any attempt to quantify our work with them is best avoided. Their argument goes that the perfect evidence-based model for classroom practice is an unobtainable dream, and so the effort to create one is wasted. To me, the point of evidence-informed practice is not to try to create the perfect evidence-based model, but rather to ensure teachers can learn from the tried and tested approaches of their peers: to stop them falling into traps that people have fallen into before, and to allow them to judge the likelihood of success of different possible paths. To bring another famous quote into the mix, "If I have seen further it is by standing on the shoulders of Giants." (Isaac Newton). In the same vein, we don't want every new teacher to have to reinvent the wheel; we want them to be able to learn from those who have faced similar challenges and found solutions (or at least eliminated possible solutions).

One of the accusations that has been levelled at educational researchers is that they are 'experimenting on kids'. This is one of my least favourite arguments against evidence-informed practice, as its proponents must either be ignorant of how researchers operate or be feigning ignorance in order to make a point that isn't worth making. At some level everything we try in the classroom carries a risk of failure; even the best practitioners don't get 100% understanding from every child in every lesson. The big point here, though, is that no one goes into the classroom with anything other than an expectation that what they are going to do will work, and this goes for researchers as much as any other professional, and is true in fields other than education. It would seem that some of the critics of evidence-based practice see researchers as a bunch of whacked-out lunatics wanting to try their crazy, crackpot theories out on unsuspecting pupils. In fact most researchers are following up on promising research that has already been undertaken, and so in theory their ideas should have a greater chance of success than those of a teacher whose view of the classroom is not informed by evidence. Even when researchers are trying totally new approaches, these are tried from a strong background and with a reasonable expectation of success. It is precisely the opposite of the view that some seem to hold; in fact it is those who don't engage with educational research who are more likely to come up with some crackpot idea and then not worry so much about its success.

One of the situations I posed on Twitter was that of a teacher new to a school, and therefore taking on new classes. Let us further suppose that said teacher is teaching in a very different setting to the one they are used to; perhaps a change of phase, a change of school style (grammar to comprehensive may well become more prevalent), or even just a change of area (leafy suburb to inner-city, say). Now this teacher has two choices in order to prepare for their first day in their new classroom. Their first choice is to read something relevant and useful about the situation they are entering. They could talk to teachers in their network who have experience of their situation, including in the school where they are going to be working. They could inform themselves about the likely challenges, the likely differences, and the ways that people have handled similar transitions successfully in the past, and then use this to make judgements about how they are going to manage this change. Alternatively they could not, either sticking blindly to their old practice or making up something completely random. I know which one I would call professional behaviour.

When faced with this situation, the person with whom I was having the conversation sidestepped the choice and suggested that all would be well because the teacher has a teaching qualification. Of course this ignores what a teaching qualification aims to do; the whole point of a teaching qualification is to lay down patterns for this sort of professional practice. This is one of the big reasons I was very much against the removal of HEIs from teacher training. The idea of teacher training is to provide both practical experience, through school placement, and the skills to select and access suitable research and evidence from beyond your own experience, to fill the gaps in your own practice. A teaching qualification has to be the starting point of a journey into evidence-informed practice, not the end point. No one emerges from the ITT year as anything approaching the effective teacher they have the potential to become, and the only way they will do so is by engaging with the successful practice of other teachers and using it to develop and strengthen their own practice and experience.

One other criticism levelled at those engaging with research and using it as the backbone of their practice is that the outcomes measured in order to test the success of the research are very often the results of high-stakes tests, and that these may not be the most appropriate measures of success. I have some sympathy with this point of view; I can see, for example, why people would baulk at the idea that the impact of using Philosophy for Children can and should be measured by pupils' combined KS2 maths and English scores, which is what is happening in the EEF-funded trial. However, if we bring it back a notch, we should ask ourselves what we are trying to achieve from the intervention. Ultimately I could argue that the purpose of any intervention in school is to try to make pupils more effective at being pupils, i.e. being able to study and learn from their efforts. Whether the intervention is designed to address gaps in subject knowledge, to tackle problems with learning behaviours or to improve development of a 'soft skill', the eventual intent is the same: that these pupils will be able to take what they have learned and use it to be more successful pupils in the future.

Now I am not going to stand up and say that the way we currently measure outcomes from education is an effective way of doing so, but what I will say is that however we choose to measure outcomes from education, any intervention designed to improve access to education has to be measured in terms of those outcomes. I am also not necessarily going to stand here and say that every single thing that goes on in schools should be about securing measurable outcomes for education (and I know many educators who would make that argument), but then I would argue that those things should not be attracting their funding from education sources. If an intervention is expected to benefit another aspect of a pupil's life, but it is not reasonable to expect a knock-on effect on their education (and when you think about it like that, it becomes increasingly difficult to think up sensible examples of interventions that might fit that bill), then it needs to be funded through the Health budget, or the Work and Pensions budget, or through whichever area the intervention is expected to impact positively.

Schools are messy places, subject to a near-infinite number of variables, very few of which can be controlled. It is virtually impossible to ensure that any improvement in results is due to one specific intervention; often several factors are at play. Does this mean, however, that we shouldn't experiment in the classroom, provided we have a reasonable expectation of success? Does this mean that we shouldn't attempt to quantify any success that we have that could, at least in part, be attributed to the change we made? Does this mean that we shouldn't share the details of this process, so that others can adopt and adapt as necessary, and then in turn share their experiences? To me this is precisely how a professional body of knowledge is built up, and so if teachers are going to lay claim to the status of 'professionals' then engagement with this body of knowledge has to be a given (provided they are well supported to do so). If you have the support to access this evidence and simply refuse to do so, then I would argue you certainly are using prejudice; either prejudice against the idea of research impacting your practice at all, or prejudice against the teachers and pupils who formed the research from which you might develop. Prejudice has no place in a professional setting, and no teacher should ever allow their prejudices to stand in the way of the success of the pupils in their care.

2 comments:

  1. Great post, Peter. I'm still grappling with what "engaging with research" actually means for a busy teacher. This is helpful. This bit, I question:
    "Their first choice is to read something relevant and useful about the situation they entering, They could talk to teachers in their network that have experience in their situation, including in the school they are going to be working. They could inform themselves about the likely challenges, the likely differences, and the ways that people have handled similar transitions successfully in the past and then use this to make judgements about how they are going to manage this change."
    I would suggest that most teachers wouldn't know where to start with this. How do we find something relevant and useful in the first place? And even if we do, how do we then actually apply it? Change in practice is a gradual thing; it's a big risk to throw out what you know has worked previously.
    I find I learn most from reading blogs such as yours; reading actual research is usually overwhelming. But am I placing too much trust in sources I know little about?

    Replies
    1. Hi Mark, thanks for the great comments. My big hope is that the Chartered College will make the sort of research I am talking about accessible to teachers, so that it isn't so daunting. I keep my fingers crossed that in time this is the way the profession goes. To be fair, I wouldn't suggest teachers completely throw out what they know has worked previously, but if big changes in setting are happening, it would be naive to believe that practice doesn't need to change - if teachers are going to call themselves professionals, it is my belief that they need to embrace this realisation and take steps to manage it. Long term, if enough teachers are plugged into research and blogs then they can signpost things for new members of staff - I am always suggesting reading to my PGCE student and other members of my team. In terms of putting too much trust in sources you know little about, I think, as always, you have to judge whether you think ideas have a reasonable chance of success in your situation, and then try them. Whether they then work for you or not, feed that information back into the profession so that we learn and build together.
