Thursday 24 November 2016

New GCSE Grade Boundaries - my thoughts

I am going to start this blog by making one point clear: it is impossible to accurately grade pupils on the new GCSE for Maths. Completely impossible. Anyone who tells a pupil that they have achieved a particular grade is at best making an educated guess and at worst making something up. If there is any way you can avoid giving pupils grades, making predictions of pupils' eventual grades or even talking about future grades with any stakeholder, then you should take the opportunity and avoid it like a highly contagious illness.

That said, many schools are not giving departments and heads of maths the opportunity to avoid it. There are plenty of schools out there requiring staff to predict grades for pupils (some as early as Year 7!), or to provide current working grades. Even when schools don't require this, Year 11 pupils looking at the next stage are being asked for predicted grades in English and Maths by colleges and other post-16 providers. I have been in touch with many new department heads who are struggling to answer the demands of schools, parents and pupils with regard to the new GCSE grading, and so this post is designed to give some support and guidance for anyone who finds themselves in this unenviable position.

You will hear people say that you cannot grade at all for the new GCSE, and I can see where they are coming from (see paragraph 1!). I do believe that it is possible to make some educated guesses about what the landscape is going to look like - we do have a reasonable amount of information to work on, and one thing mathematicians are good at is building models for situations with many variables. We just have to be clear about our modelling assumptions and how they affect the accuracy of the predictions from the model. Let's start with the information Ofqual have provided:


This is probably the most viewed guide that teachers and schools have with regard to the new grading. The key line in this is actually 'Students will not lose out as a result of the changes'. That means that if you have a kid in front of you who is a nailed-on C for the old GCSE, they are at least a 4 on the new. Similarly for A and 7, and G and 1. Of course this doesn't help with the borderline kids, but it is somewhere to start. The most up-to-date postcard also has this information:

What this means is that if you are assessing pupils (mock exams or similar), once you have set the 4 and 7 boundaries, you can set the 5 and 6 boundaries arithmetically. Although it doesn't say it here, I am reliably informed (he says, waiting to be shot down!) that the same is true for grades 2 and 3; they should be set equally spaced between 1 and 4. The upper grades can also be calculated, using the tailored approach for grade 9. The tailored approach can be summarised as:

Percentage of those achieving at least grade 7 who should be awarded grade 9 = 7% + 0.5 * (percentage of candidates awarded grade 7 or above).

By my calculations on last year's figures, this will mean nationally about 15% of the pupils awarded 7+ will be in the 9 bracket, which will end up being about 2.4% of the total cohort (based on 15.9% A* and A in 2016 translating to a broadly similar proportion for 7+). Of course if your cohort is very different to national, then it shouldn't be massively far out to apply the tailored approach to your A and A* figure from last year (if you have one - I don't, as this is the first year for GCSE), once you have adjusted for differences in the starting points of the cohort. This means we can have a reasonable stab at a grade 9 boundary for any mock exam we set. The grade 8 boundary should then be set halfway between 7 and 9.
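To make the arithmetic above concrete, here is a quick sketch of the tailored approach calculation in Python. The 15.9% A*/A figure for 2016 is from the paragraph above; treating it as the proportion at 7+ is a modelling assumption, not a known value.

```python
# Sketch of the Ofqual 'tailored approach' calculation described above.
# Assumes the 2016 A*/A proportion (15.9%) carries over to grade 7+.

def grade9_share_of_7plus(pct_at_7plus):
    """Percentage of the grade-7+ group awarded grade 9:
    7% + 0.5 * (percentage of candidates awarded grade 7 or above)."""
    return 7 + 0.5 * pct_at_7plus

pct_7plus = 15.9                            # % of the whole cohort at 7+
share = grade9_share_of_7plus(pct_7plus)    # roughly 15% of the 7+ group
cohort_pct_9 = share / 100 * pct_7plus      # roughly 2.4% of the whole cohort
print(f"{share:.1f}% of the 7+ group, {cohort_pct_9:.1f}% of the cohort")
```

This reproduces the "about 15%" and "about 2.4%" figures quoted above.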

Using this approach it should be reasonable to generate some grade boundaries for a mock exam by looking at kids who would definitely have secured a C, A or G on the old GCSE exams, using their scores to set the grade 4, 7 and 1 boundaries respectively, and then calculating the 9 and the others using the calculations Ofqual provides.
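As a sketch, the whole construction can be written out in a few lines of Python. The anchor marks for grades 1, 4 and 7 (from pupils judged secure G/C/A on the old GCSE) and the grade 9 mark are invented purely for illustration - they are not real boundaries:

```python
# Boundary construction as described above: anchors at 1, 4, 7 and 9,
# with 2/3, 5/6 and 8 interpolated between them. Marks are invented.

def fill_in_boundaries(g1, g4, g7, g9):
    b = {1: g1, 4: g4, 7: g7, 9: g9}
    step_low = (g4 - g1) / 3           # 2 and 3 equally spaced between 1 and 4
    b[2], b[3] = g1 + step_low, g1 + 2 * step_low
    step_mid = (g7 - g4) / 3           # 5 and 6 equally spaced between 4 and 7
    b[5], b[6] = g4 + step_mid, g4 + 2 * step_mid
    b[8] = (g7 + g9) / 2               # 8 set halfway between 7 and 9
    return {grade: round(mark) for grade, mark in sorted(b.items())}

print(fill_in_boundaries(g1=40, g4=94, g7=148, g9=216))
# {1: 40, 2: 58, 3: 76, 4: 94, 5: 112, 6: 130, 7: 148, 8: 182, 9: 216}
```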

Another approach that we (and several other groups) have employed has been to combine papers with other schools all doing the same board. This has allowed us to use proportional awards to set the 1, 4 and 7 boundaries statistically rather than through moderation - although it is still a bit unclear precisely what proportion will be used for the 4. This is the approach that the PiXL club, among others, also used, although depending on your point of view with varying degrees of success.

This is all well and good for individual schools and cohorts, and for setting retrospective boundaries when cohorts have already done mock exams, but what can we predict about the final exams? The true answer is very little, but perhaps not absolutely nothing. Using what we know it is possible to make some predictions about the likely distribution of the grade boundaries going forward, but with a very large margin for error built in, primarily because of the very different style of the assessment, which is very hard to quantify. We do know, though, that the balance of difficulty will shift in both sets of papers, so that 50% of the Higher tier paper will be aimed at grades 7 and above, and similarly 50% of the Foundation tier will be aimed at 4+ - an increase of 10 to 20 percentage points on the proportion currently aimed at the top two grades in each paper.

We also know about the shift of material, so that the Foundation tier will assess some material that is currently only Higher, and some of the material currently on Higher will no longer be assessed on Higher. Factoring all of this in, we can make adjustments to current boundaries to produce educated predictions of new boundaries. I will start by looking at the AQA boundaries for last year:


These are last year's boundaries converted to percentages, and from these we can make some sensible adjustments. Given that there is now no D grade material on the Higher paper, it makes sense that the award of 3 (there is still a discretionary award of 3, similar to the current E award on the Higher) will come down towards where the E is now - around the 8 to 10% mark. The grade 4 will then have to come down as well, to reflect the fact that all the D grade material is gone. With the D currently at 17.7%, it is reasonable to predict that the 4 will fall somewhere in the range of 15% to 25%. The B grade at 53.1% will also come down to nearer the current C grade - this won't translate automatically into a 5 or 6, but given that B falls between 5 and 6, the 5 is likely to come in in the high 20s or low to mid 30s, with 6 likely to fall in the mid to high 30s to low 40s.

It is almost certain that the grade 7 boundary will have to come down from the 71% that the current A grade sits at. When you consider the loss of the D grade material, which nearly all A grade+ pupils will be scoring well on, along with the increase in the amount of material at A grade/grade 7+, one can justify quite a dramatic drop in the grade 7 boundary - with 50% of the paper at grade 7+ it is not outside the realms of possibility that the boundary for 7 will actually be below 50%. In reality something in the early to mid 50s is probably the most likely area for the 7 boundary, and almost certainly less than 60%. The 8 and 9 are probably the hardest to predict, because of the 9 calculation, and because 8 will be based on 9 and 7 together. It would be hard to see the grade 9 boundary being less than the current A*, as this would defeat the whole reason for adding the extra grade at the top of the system. Currently 5.7% gain A*, so if 9 is going to halve this figure or better, then a boundary somewhere between 90% and 96% would seem a fair prediction. If this is the case then the 7, 8 and 9 are going to be quite widely spaced, which is expected if they are to distinguish candidates at the top end. If we take all of this into account, and apply it to a total of 240 marks, we get boundaries somewhere around the ones below for the Higher tier:


I can see these being accurate to within 10 to 15 marks at a maximum, and significantly closer in some cases (points for me if I get any of them spot on!).
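As a rough illustration of the conversion from percentages to marks, here is the arithmetic on a 240-mark Higher tier. The percentage figures below are mid-points of the ranges discussed above that I have picked for illustration; they are not the predicted boundaries themselves:

```python
# Converting illustrative percentage estimates into marks out of 240.
# The percentages are mid-points of the ranges discussed in the text,
# not the actual predicted boundaries.

TOTAL_MARKS = 240
pct_estimates = {3: 9, 4: 20, 5: 32, 6: 40, 7: 54, 8: 74, 9: 93}

mark_boundaries = {g: round(p / 100 * TOTAL_MARKS) for g, p in pct_estimates.items()}
print(mark_boundaries)
```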

Turning our attention to Foundation, we can do a similar 'analysis'. There is no reason that the grade 1 boundary should have to change much from the current G grade (unless of course pupils really struggle to access the paper!) and so pupils are still likely to need in excess of 20% to be awarded a grade on Foundation (or perhaps a short way below).

The most interesting here is the grade 4 boundary, with similar arguments to the 7 on Higher. There is reason to believe that this will have to come down significantly, with the addition of extra, more demanding content in Foundation and the balance of the paper shifting to include more material at grades 4 and 5. A figure close to the current D grade percentage of around 55% seems rational, and it could even dip below 50% (I suspect that it won't, as the balance of pupils sitting the Foundation paper instead of the Higher is likely to change, so that more pupils who would score higher marks sit the Foundation tier than do currently). Given this, the grade 3 and grade 2 boundaries are calculable as equally spaced between the two.

The grade 5 boundary at Foundation is probably the hardest to predict with any certainty, as it is likely to rely heavily on comparable outcomes with the Higher tier to set - if the 5 boundary for Higher has to be calculated, then pupils awarded 5 on Foundation will need to be checked to make sure they are demonstrating similar understanding to those awarded 5 on Higher. I suspect it is likely to be above the current 66% for a C on Foundation, and have gone in at the low 70s. Based on this, my best guess for Foundation, with similar accuracy at all except grade 5, looks like this:


A similar 'analysis' of the Edexcel boundaries yielded these results:


A big assumption here is that pupils continue to score better on Edexcel than on AQA, which by all accounts is not a good assumption to make. The tests from Ofqual suggested that pupils answered the AQA papers better than the Edexcel ones, so this second set of boundaries may well be less accurate than the others. Ultimately though, if you have nothing else you can use, and you absolutely must talk about grades with SLT, parents etc., then this is the absolute best guess I can come up with; of course it remains to be seen how good a guess it is, so use these at your own peril, as they come with precisely zero guarantees!

Sunday 6 November 2016

Methods of last resort 2 - Order of Operations

Teaching the correct order of operations is possibly one of the most debated topics for maths teachers. In my #mathsconf8 session I was asked 'what is my problem with BIDMAS' and proceeded to outline times when this acronym is redundant (e.g. 4 x 3 ÷ 2) or even downright wrong (4 - 5 + 6 would mistakenly be given as -7 rather than the correct answer of 5). Various diagrams have been mooted as the solution to this, and there are several examples below:

 
I have several issues with these diagrams, which can be summarised as:

(a) They aren't specific enough for all of the possible functions that can be applied to numbers (even those that include square roots don't involve higher roots, and there is no mention of sin, cos, tan, log etc.)

(b) BRACKETS ARE NOT AN OPERATION (please forgive the shouting). This may seem like semantics but for me it is an important distinction - brackets are used either to alter or to clarify the intended order of operations, but are not an operation in themselves (just a note on 'clarify': an example of this is 12 ÷ (3 x 4), which needs the brackets for clarity, as without them the answer would be 16 and not 1). If we are going to teach pupils to understand the maths they are doing then we need to be communicating understanding like this, and not allowing pupils to mistakenly believe that brackets are an operation themselves.
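Both pitfalls can be checked numerically; Python, like the mathematical convention, evaluates operators of equal precedence left to right:

```python
# Numerical checks of the examples above: same-precedence operators
# are evaluated left to right, not in rigid BIDMAS letter order.

print(4 - 5 + 6)      # 5: not the -7 a rigid reading of BIDMAS suggests
print(12 / 3 * 4)     # 16.0: without brackets, divide then multiply
print(12 / (3 * 4))   # 1.0: the brackets clarify the intended grouping
```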

But this post is not about teaching the correct order of operations (although that segue has outlined my thoughts on it quite nicely); this is about when you wouldn't want pupils to use the correct order of operations in the first place. The example I used in my #mathsconf8 session was:

673 x 405 - 672 x 405

Any mathematician is definitely not applying the correct order of operations in this situation; they are quickly writing down that this is just 405. With the advent of 'teaching for mastery' gaining ground in mathematics education, pupils are being increasingly exposed to questions like this when looking at distributive laws, or factorisation, but I am yet to see it, or anything like it, thrown into a lesson on Order of Operations as a non-example. There is good evidence out there now to back up the idea that non-examples are important in communicating a concept, and so if we are trying to communicate the correct order of operations we should be highlighting cases like this as cases where applying the correct order of operations is not wrong, but is just wildly inefficient compared to use of the distributive laws (in this case the formal statement would be something like 673 x 405 - 672 x 405 = 405 x (673 - 672) = 405 x 1 = 405).
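For anyone who wants to check the non-example both ways:

```python
# The non-example above, checked both ways: grinding through the correct
# order of operations versus spotting the distributive law.

direct = 673 * 405 - 672 * 405     # two long multiplications, then subtract
factored = 405 * (673 - 672)       # distributive law: 405 * 1
print(direct, factored)            # both 405
```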

Some other examples of times when correct order of operations are an inefficient way to solve problems (particularly without a calculator) are:
  • 12 x 345 ÷ 6
  • 18² ÷ 9²
  • √128 ÷ √32 (although this one does require some real mathematical understanding)
  • 372 + 845 - 369
I would be exploring all of these questions prior to teaching the correct order of operations, and then including questions like them in the deliberate practice on the correct order of operations, to ensure that pupils recognise when not to apply it alongside when it is absolutely necessary.