FOLLOWING last week’s car-crash handling of this year’s A-level results, this week the government belatedly saw sense, performing a screeching handbrake turn before GCSE students could also find themselves downgraded by Ofqual’s opaque, questionable algorithm.

Not getting the exam grades you had been expecting is a horrible experience. I still remember the day I went into school to collect my A-level results, only to discover that my entire English class had been given lower grades than predicted. In that case, it boiled down to an argument between the school and the exam board over how a revised curriculum should have been taught. We, the students, were the collateral damage, and it felt terribly unfair. I was fortunate that my downgrading didn’t affect my university place, but some of my friends lost out.

So my sympathy goes out to those students who didn’t get the grades they deserved, then did, only to discover that their university place had already been given away. If there is a silver lining for these students, it is that next year, when the present pandemic is (hopefully) finally over, may be a better time to go to university.

As algorithms go, Ofqual’s was rather a crude one. Ignoring teachers’ knowledge and their suggested grades, it weighted results instead towards the past performance of the student’s school. So if someone at a school got a U in a subject last year, someone in this year’s cohort was going to get a U too, irrespective of whether they deserved it. The model marked down students in more disadvantaged areas and helped those in more affluent ones. It also disproportionately marked down larger cohorts, such as sixth-form colleges, while boosting grades in schools and subjects with smaller class sizes, where teachers’ predictions carried more weight.
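To see why this approach was so crude, here is a deliberately simplified sketch of the idea at its core: ranking this year's students and forcing them into last year's grade distribution. This is an illustration of the distribution-matching principle described above, not Ofqual's actual published model; the function and data names are invented for the example.

```python
def allocate_grades(ranked_students, historical_grades):
    """Map this year's cohort, ranked best-first by their school,
    onto the school's grade distribution from a previous year
    (also listed best-first). A simplified illustration only."""
    n, m = len(ranked_students), len(historical_grades)
    allocated = {}
    for i, student in enumerate(ranked_students):
        # Each student's rank position is mapped proportionally
        # onto last year's distribution: if last year's worst
        # result was a U, someone this year gets a U.
        j = i * m // n
        allocated[student] = historical_grades[j]
    return allocated

# Hypothetical cohort at one school: whatever these four students
# individually achieved, they inherit last year's spread wholesale.
cohort = ["Asha", "Ben", "Chloe", "Dev"]
last_year = ["A", "B", "C", "U"]
print(allocate_grades(cohort, last_year))
# → {'Asha': 'A', 'Ben': 'B', 'Chloe': 'C', 'Dev': 'U'}
```

Notice that the individual students' work never enters the calculation: only their rank and their school's history do, which is exactly why an able student at a historically weak school was doomed to a poor grade.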

The exam fiasco is just the latest example of how algorithms can get things horribly wrong. Over recent years, algorithms have played an increasingly influential role in the way the modern world functions. From job recruitment to credit card applications, computers are filtering out and rejecting people according to often questionable sets of presumptions. Frequently, these algorithms boil down to a trade-off between efficiency and fairness. Frequently, too, algorithms reflect the people who wrote their rules, with their prejudices and biases baked into the systems.

The truth is that even the most sophisticated algorithms are often flawed. Look at the modelling behind the financial crisis of 2008. Or the pollsters’ failure to foresee the result of the 2016 Brexit referendum. People are often odder, more unpredictable and more surprising than algorithms allow for. When it comes to students, their teachers know that all too well. Which is why, from the off, their professional judgment should have been trusted over a wonky computer program.