OK, I was wrong last time: this is definitely the most dangerous blog post I have written, if by dangerous I mean fraught with the capacity to be wrong and to inconvenience a lot of people. So I will preface by saying I am very sorry if you base anything on this post and it turns out to be wrong; we are all just guessing here, really, and guesses can go wrong. Still, if it helps people clarify their own thinking, or supports people who wouldn't otherwise have a way of meeting the demands of their senior teams or other stakeholders, then I suppose it is worth a little egg on the face if it turns out wrong. So here is the story of our Christmas Year 11 mocks and grade boundaries:

We sat mock exams just before the Christmas holidays, which meant that by about a week after we returned from Christmas I had pretty much all of the results from our 285 pupils (of whom about 250 actually sat all 3 papers). By this time I also had the results of one other school with about 180 pupils in Year 11, about 160 of whom had sat all 3 papers, and another much smaller school that was only going to sit two papers. I used a similar process to the one I used at the end of Year 10: I applied a scaling formula to the Foundation paper to make it directly comparable with Higher, which has worked well in the past, and then applied the proportions and other boundary-setting details that have been well publicised by the exam boards and great people like Mel at @Just_Maths. This led to this set of boundaries, which we applied to our pupils:
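As a rough illustration, the proportion-based part of that process amounts to ranking all candidates and reading off the score achieved by a given proportion of the cohort. The sketch below is a minimal illustration only; the linear scaling factor, the proportion and the scores are all made-up assumptions, not the values actually used here:

```python
# Minimal sketch of proportion-based boundary setting.
# The scaling factor, proportion and scores below are illustrative
# assumptions only, not the actual values used in this post.

def scale_foundation(score, factor=0.7):
    """Map a Foundation raw score onto the Higher scale (assumed linear)."""
    return score * factor

def boundary_for_proportion(scores, proportion):
    """Lowest raw score achieved by the top `proportion` of candidates."""
    ranked = sorted(scores, reverse=True)
    cutoff = max(int(len(ranked) * proportion) - 1, 0)
    return ranked[cutoff]

# Made-up Higher tier totals out of 240 across the 3 papers
higher_scores = [201, 188, 176, 170, 163, 150, 141, 133, 120, 112]

# If (hypothetically) the top 20% of Higher candidates were to get a
# grade 9, the boundary is the lowest score inside that top 20%.
print(boundary_for_proportion(higher_scores, 0.2))  # prints 188
```

In reality the awarding bodies do this with the full national cohort and KS2 prior attainment data; with a school-sized sample the read-off boundary can move by several marks either way.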

I wasn't completely enamoured with these - I knew, for example, that the 9 and 8 were lower than where I expect them to be in the summer, and in general I thought that maybe all of the Higher boundaries were a little low (although as you go down the grades I expect them to be closer to the real end values in the summer of 2017). I did like the Foundation ones; they seemed to sit well with what I was expecting. Given that pupils still have 5 months before they sit the real thing, though, I thought these were acceptable for now. At the time I couldn't make them public, as our pupils were not given their grades back until their mock results day today.

Literally two days after we had inputted the mock grades, AQA released the population statistics for the cohort. I was pleased to see that our Higher pupils had scored above average compared to the population, and our Foundation pupils had scored below it. I took this to mean that our tiering choices were about right, although, as any good statistician knows, making judgements based on averages alone is a dangerous thing to do, and we did have to look at the pupils at the lowest end of the Higher paper scores as we had a large range of values.

Although we had already set boundaries, I work with a group of 5 other schools, many of which were doing their mock exams after Christmas and so would be needing boundaries - the original plan was to collect all of their results and set the boundaries from the combined data (which would have given us a cohort of over 1000, and so at least some hope of being reasonable). With the support of some excellent colleagues who will remain nameless, I managed to get hold of some data about the population rankings attributed to certain scores for Higher and Foundation. This allowed for the setting of grades 7 and 9 (and therefore also 8) at Higher, based on last year's proportions and the tailored approach outlined in the Ofqual documentation, as well as the grade 1 at Foundation. The grade 4 proved more problematic, as there was no detail about how the Higher and Foundation rankings compared to each other (I am reliably informed that it is impossible to do this accurately without the prior attainment data from KS2, although my scaling formula does seem to produce quite similar results).

I was able to get hold (from a source who will definitely remain nameless) of the proportions of C grades that were awarded to 16 year olds last year for the separate tiers, and based on this I was able to map out the separate values for grade 4 on the Higher and Foundation tiers. This also allowed the setting of the 5 and 6 on the Higher tier, and the 3 and 2 on the Foundation tier. Although it is still up for consultation (I believe), I also awarded the 3 on Higher using the approach that has been used in previous years for setting the E grade boundary on Higher: halving the difference between the grade 4 and grade 5 boundaries, and then subtracting this from the grade 4 boundary. The trickiest one was actually the 5 boundary on Foundation, as there is no real guidance over this one; in the live exam I believe this will be set based on comparison of pupils' scripts and prior attainment (although if anyone knows more about this I would be happy to be corrected). In the end I did have to do a bit of educated guesswork, comparing back between my own papers, and ended up with boundaries for the whole AQA cohort that look like this:
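For what it's worth, that E-grade style rule for the 3 is just arithmetic. The boundary marks in the sketch below are made up purely for illustration:

```python
# Grade 3 boundary on Higher via the old E-grade rule: half the gap
# between the 4 and 5 boundaries, subtracted from the 4 boundary.
# The boundary marks below are made up for illustration only.

def grade_3_boundary(grade_4, grade_5):
    """Return grade_4 minus half of the (grade_5 - grade_4) gap."""
    return grade_4 - (grade_5 - grade_4) / 2

# With a hypothetical 4 boundary of 100 and 5 boundary of 124, the gap
# is 24 marks, so the 3 lands 12 marks below the 4.
print(grade_3_boundary(100, 124))  # prints 88.0
```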

I was quite pleased with the similarity of these to our own boundaries, although it would appear my scaling formula is a little harsh on the Foundation pupils for mock exams (it does work quite well for real exams, though). At this point, though, I should pass on some major health warnings and notices:

- These boundaries are NOT endorsed by AQA, and they will rightly maintain that it is impossible to set grades or boundaries for exams without prior KS2 pupil data. Although this does use data available on the AQA portal, it is only my interpretation of it.
- There are two big assumptions used to make these boundaries, which are unlikely to bear out completely in reality. In particular, there is an assumption that the proportions highlighted in the Ofqual document will pretty much repeat from last year to this year; i.e. that the Year 11 cohorts of 2016 and 2017 are pretty similar. In reality we are told that Year 11 2017 have slightly higher prior attainment than those in 2016 (although the published data does say that the two are not directly comparable). The other major assumption is that the proportions of grade 4 at Higher and Foundation will roughly match the proportions of grade Cs awarded at Higher and Foundation last year. This assumption is almost certainly untrue; we are already hearing that schools are entering significantly more pupils at Foundation tier (myself included, compared to the proportion I used to enter in my previous schools), which is likely to raise the quality of candidates at both Foundation and Higher tier. If this is the case for the current mock data it would have the effect of lowering the Foundation boundaries (although they seem to fit too nicely for me to believe they will go lower - just a gut feeling though) and raising the Higher boundaries (which seems likely in reality).
- We mustn't forget that a lot can happen in the next 5 months, and I would expect most of the cohort to improve their scores; I would still expect the 9, 8 and 7 to be noticeably higher than these values in the summer, although I don't think the 4 boundary will shift up by as much as some people might think. In reality these boundaries are useful in the very specific circumstance that a pupil has completed all 3 papers from AQA practice set 3, and has done so after about a year and a bit to a year and a half of GCSE course study.

So that is our story, up until about 2 or 3 hours ago. If it helps people then great; if you disagree then fine; if you use it and it turns out wrong, well you were warned...

Comment: Can you elaborate on why the shift of students from H to F will lower the grade boundaries and not increase them?

Reply: The current boundary of 103 for the 4 was based on the same proportion of pupils getting a 4 on Foundation this year as got a C on Foundation last year. If we assume a greater proportion of pupils have achieved a 4 on Foundation compared to a C last year, this necessarily means that the 4 boundary is lower than 103. The same logic extends to the summer exams.

Comment: Am I correct in reading 58% on the Foundation paper for a Level 5? You are working on 240 marks across the 3 papers?

Reply: I am, and if I tell you that only the top 10% of Foundation candidates scored that or above, it seems reasonable to award the top 10% of candidates a grade 5.

Comment: Hi Peter, can you tell me what proportion of 16 year olds got a grade C on each tier? Thanks.