
LAST week, students and teachers squeezed a hard-fought and rapid U-turn out of the Conservative government over the now-abandoned algorithm used to predict exam results in England.
The reversal of previous decisions in the A-level results fiasco in England followed earlier U-turns by both the SQA under the SNP in Scotland, and the Welsh Labour government.
In an already agonising and difficult year, the government had used an opaque algorithm that was not available for public scrutiny to predict grades for students who were unable to sit their exams due to the coronavirus outbreak.
The word “algorithm” here means the use of statistics to predict and determine outcomes, rather than merely to monitor and analyse distributions in order to understand them better. As the old saying about “lies, damned lies and statistics” suggests, algorithms, the systematised application of statistics, can be used to justify any political will imaginable.
In England, nearly 40 per cent of students received results that were a grade below their predicted grades, harming their future career prospects and causing many to miss university offers. The government released a report detailing the inner workings of the algorithm, showing how grades were assigned.
The algorithm was based on the principle that the predicted grades awarded by teachers are often too optimistic and that, while teachers often assign grades that correctly rank the pupils from highest to lowest marks, the marks predicted are on average too high.
As a result, the algorithm should downweight the predicted grades based on the historical results of students in the same subject at the same school or college.
Your mark awarded by the algorithm is a mixture of your teacher-predicted grade and an adjustment based on historical results. The relative weighting between these two things is determined by the number of people taking your subject at your college, with fewer people meaning that the outcome is skewed more heavily towards your predicted grades rather than the historical results.
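A rough sketch of how such a blend might work is below. The thresholds and the linear taper are illustrative assumptions for the purpose of the example, not Ofqual’s published values.

```python
def blended_grade_score(teacher_score, historical_score, cohort_size,
                        small_n=5, large_n=15):
    """Blend a teacher-predicted score with a historically derived one.

    The smaller the cohort, the more weight goes to the teacher's
    prediction. small_n, large_n and the linear taper between them
    are illustrative assumptions, not Ofqual's actual parameters.
    """
    if cohort_size <= small_n:
        weight = 0.0        # tiny cohort: trust the teacher entirely
    elif cohort_size >= large_n:
        weight = 1.0        # large cohort: rely on historical results
    else:
        # linear taper between the two regimes
        weight = (cohort_size - small_n) / (large_n - small_n)
    return (1 - weight) * teacher_score + weight * historical_score
```

For example, with a teacher-predicted score of 80 and a historical score of 60, a class of three keeps the full 80, while a class of 20 is pulled all the way down to 60.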
The algorithm disproportionately benefited students at private schools, with the proportion of students awarded an A or A* rising by 4.7 percentage points, compared to a rise of 0.6 percentage points for state sixth forms. The technical explanation behind this remains unclear, although private sixth forms tend to have smaller class sizes and probably therefore retained more of their optimistic teacher-predicted grades.
The A-level algorithm takes grade distributions in both A-levels and GCSEs of the same pupils for previous years, and adjusts this year’s predicted A-levels by comparing the GCSE results of this year’s cohort with those of previous students. The GCSE algorithm did something similar using earlier exams (ie SATs), which of course are not sat in most private schools.
To give an example of how the weighting works, imagine a class at a school where 2 per cent of students got an A or higher in 2019 and 3 per cent of the 2019 students are predicted to have got an A or higher based on GCSE results. This year, 6 per cent of the current class were predicted to get an A or higher based on their GCSE results.
The algorithm then takes the difference between the predicted results this year and the predicted results in 2019 (6 per cent minus 3 per cent, making 3 per cent) and adds it onto the actual results from 2019 (3 per cent plus 2 per cent equals 5 per cent) to decide that a “historically weighted” 5 per cent of the current class will be awarded an A or higher.
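The arithmetic of that worked example can be written out directly; every number here comes from the example above:

```python
# Reproducing the worked example: all figures are percentages of the class.
prev_actual_a = 2.0      # % of the 2019 class who actually got A or higher
prev_predicted_a = 3.0   # % of the 2019 class predicted A+ from their GCSEs
curr_predicted_a = 6.0   # % of this year's class predicted A+ from GCSEs

# This year's cohort looks 3 points stronger on GCSEs than 2019's...
improvement = curr_predicted_a - prev_predicted_a
# ...so the algorithm shifts the historical outcome up by that amount.
weighted_share = prev_actual_a + improvement
print(weighted_share)  # 5.0 per cent of the class awarded A or higher
```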
In reality, this step of the algorithm depends heavily on how much past data is available for your subject at your college where people’s GCSE results can be matched to their A-level outcomes. If there is little of this “matched” past data, then the grades awarded sit much closer to the historical outcomes, so even if you got far better GCSE results than your predecessors, the distribution of your class’s A-level results will end up close to theirs.
Once the percentage of students receiving each grade has been fixed, the algorithm assigns marks to individual students according to their rank order in the class.
There is some final shuffling of the grade boundaries to ensure that the distribution of marks across each grade in each subject matches last year’s across the whole country, and students are then assigned a final grade.
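To illustrate the rank-based step, here is a minimal sketch that assumes the class’s rank order and the per-grade shares have already been fixed; the function, its interface and its rounding rule are illustrative, not Ofqual’s actual procedure.

```python
def assign_grades(ranked_students, grade_shares):
    """Assign grades down a class's rank order.

    ranked_students: students ordered from strongest to weakest.
    grade_shares: dict mapping grade -> fraction of the class, listed
    from the highest grade downwards (dicts keep insertion order).
    Both the interface and the rounding are illustrative assumptions.
    """
    n = len(ranked_students)
    grades, start, cumulative = {}, 0, 0.0
    for grade, share in grade_shares.items():
        cumulative += share
        end = round(cumulative * n)  # cumulative cut-off avoids drift
        for student in ranked_students[start:end]:
            grades[student] = grade
        start = end
    return grades
```

In a class of ten with shares of 20 per cent A, 50 per cent B and 30 per cent C, the top two students get an A, the next five a B and the last three a C, regardless of how any of them would actually have performed in the exam.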
From the released report, it seems the amount of matched data varies widely by subject and by college. This put students at a disadvantage where it tethered them to past results through something that is completely outside of their control.
It would be interesting to see if there is any pattern in where the matched data tends to be missing. More widely, the ruling class are adept at evading data collection. This is a principle at work wherever data collection is being used for algorithms in contemporary life. As we are increasingly surveilled, we must at least know who is avoiding the system.
In many instances, data can be used as a weapon, as in this case: data collected to assess the performance of the education system as a whole was weaponised against individual students.
The results clearly show that the algorithm penalised the most disadvantaged students most. But its real horror was the explicit acceptance that students are defined by where they are taught, rather than their individual merit in exams they are nevertheless normally forced to sit.
This is the key issue: the ridiculousness of being algorithmically assigned a mark for an exam you never took. The fact that Ofqual were assigned — and carried out — such a bizarre task tells us how comfortable the education system is with the fact that disadvantage in our society begins at birth.
By the time you come to make your first attempt to distinguish yourself, no-one expects that your individual merit, in a supposedly meritocratic society, is even a conceivable reality. The meritocrats don’t believe their own lies. The system was willing to sacrifice individual students to more accurately reproduce the status quo.
The incredible activism which forced this U-turn, particularly of Scottish students protesting at their SQA qualifications, is an inspiration. Anyone who is being pressured by hellish algorithms should take hope from the episode, whether at work, in their medical treatment, or in access to social benefits. The radical demand for Teacher Assessed Grades to be respected ought to prompt us to question the inequality of the entire exam system.
It is true in most years, as we can tell by statistics used as an analytic tool, that many working-class students have lower grades than the privately educated, and that is outrageous. When exam results are used to determine our futures, we cannot accept this inequality.
Education is a vital right, and an important asset to all children. It must no longer be used as an opportunity to uphold the lie that the rich are smarter and more capable than the poor.


