
UK’s Failed Attempt to Grade Students by an Algorithm
Artificial Intelligence

Last Updated on September 4, 2020 by Editorial Team

Author(s): Yannique Hecht

Source: JESHOOTScom on Pixabay

Why engineering alone isn’t enough to fix broken social systems.

After Covid-19 prevented schools from operating and holding exams as usual, the UK's Department for Education attempted to grade students' A-level and GCSE exams with an algorithm developed by the exams regulator Ofqual. Britain's A-levels largely determine students' chances of attending higher education and thus have life-long consequences. The algorithm predicted students' grades based on their individual performance in earlier mock exams, which were only loosely comparable to the real thing, as well as on their school's performance relative to other schools in the previous year.

Many critics labeled the approach inaccurate and unfair: it downgraded results at scale and favored private schools. In fact, over 40% of students received lower grades than their teachers had predicted, compared to only 2% whose scores improved (Heaven, 2020). Moreover, the downgraded students came disproportionately from poorer, non-white communities. After a public backlash, the government was forced to abandon its plans just two days before the final release of grades.

The Ofqual Algorithm

As per Section 8 of Ofqual's technical report (p. 83), the algorithm:

  1. Looks at historic grades in the subject at the school
  2. Understands how prior attainment maps to final results across England
  3. Predicts the achievement of previous students based on this mapping
  4. Predicts the achievement of current students in the same way
  5. Works out the proportion of students that can be matched to their prior attainment
  6. Creates a target set of grades
  7. Assigns rough grades to students based on their rank
  8. Assigns marks to students based on their rough grade
  9. Works out national grade boundaries and final grades

For more details, check out Jeni Tennison’s technical walkthrough here.
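
To make the core of this procedure concrete, here is a minimal, hypothetical Python sketch of the rank-based idea behind steps 6–8. It is not Ofqual's actual implementation, and all names and data are illustrative: the school's historic grade distribution becomes this year's target distribution, and current students are slotted into it by their teacher-assigned rank.

    import numpy as np

    # Grade ladder, lowest to highest
    GRADES = ["U", "E", "D", "C", "B", "A", "A*"]

    def standardize_school(historic_grades, ranked_students):
        """Hypothetical sketch of rank-based standardization.

        historic_grades: grades awarded at this school in previous years
        ranked_students: student IDs ordered best-first by teacher rank
        Forces this year's distribution to match the school's historic
        one, which is the core of the criticism: a strong student at a
        historically weak school gets pulled down.
        """
        n = len(ranked_students)
        # Historic share of each grade at this school
        shares = np.array([historic_grades.count(g) for g in GRADES], dtype=float)
        shares /= shares.sum()
        # Turn the shares into whole-number grade counts for this cohort
        counts = np.floor(shares * n).astype(int)
        counts[np.argmax(shares)] += n - counts.sum()  # rounding remainder
        # Hand out grades from the top of the ladder down the rank order
        results, i = {}, 0
        for grade, count in zip(GRADES[::-1], counts[::-1]):
            for student in ranked_students[i:i + count]:
                results[student] = grade
            i += count
        return results

    # Example: a school that historically awarded mostly Cs
    print(standardize_school(["A", "B", "C", "C", "C", "D"],
                             ["amara", "ben", "chloe", "dev", "ema", "finn"]))
    # {'amara': 'A', 'ben': 'B', 'chloe': 'C', 'dev': 'C', 'ema': 'C', 'finn': 'D'}

The sketch makes the central objection visible: however able an individual student, their ceiling is set by how their school performed in the past.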

Differentiating Between Engineering & Social Problems

This contested grading model is not only the latest episode in a series of overenthusiastic applications of scientific management in the British public sector (Bagehot, 2020) but also highlights AI's wide-ranging social, technological, economic, political, legal, and ethical implications. In this case, beyond questions of demographics, social mobility, inequality, and bias, the interplay of engineering and social problems deserves particular attention.

Engineering Problems

On the engineering side, two questions remain.

First, why implement prematurely and nationwide in a domain with life-long consequences? An algorithm that is only marginally better than subjective human evaluation is still an improvement and produces a net social benefit. However, developing and scaling the technology through incremental trial and error seems more sensible than a single high-stakes rollout.

Second, why completely replace grading and examination instead of focusing on augmenting and scaling teachers' grading capabilities? Opportunities for human input or override might have improved both the results and stakeholder acceptance.
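
As a minimal, hypothetical sketch of such an augmentation gate (the grade ladder, threshold, and function names are assumptions, not anything Ofqual proposed): the model's grade is accepted automatically only while it stays close to the teacher's estimate, and larger disagreements are escalated to a human examiner instead of being applied silently.

    GRADES = ["U", "E", "D", "C", "B", "A", "A*"]  # lowest to highest

    def resolve_grade(model_grade, teacher_grade, max_gap=1):
        """Hypothetical human-in-the-loop gate: auto-accept the model's
        grade only within `max_gap` steps of the teacher's estimate;
        larger disagreements go to an examiner instead."""
        gap = abs(GRADES.index(model_grade) - GRADES.index(teacher_grade))
        if gap <= max_gap:
            return model_grade, "auto-accepted"
        return teacher_grade, "flagged for examiner review"

    print(resolve_grade("B", "A"))  # ('B', 'auto-accepted')
    print(resolve_grade("D", "A"))  # ('A', 'flagged for examiner review')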

Social Problems

With regard to social problems, inequality, represented in this case by the gap between state and private schools, cannot be addressed with an algorithm alone (Hao, 2020). Algorithms are prone to inheriting the flaws of the very systems they are designed to fix and, if not managed proactively and effectively, can give rise to self-fulfilling prophecies. Public awareness, scrutiny, and transparency are critical first steps toward eliminating bias, but they are far from a guarantee.
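
In practice, such scrutiny can start with something as plain as a disparity audit. A hedged sketch, assuming a hypothetical results table whose column names are illustrative, compares downgrade rates across school types:

    import pandas as pd

    def downgrade_rate_by_school_type(df):
        """Hypothetical audit: share of students whose final grade fell
        below the teacher's prediction, split by school type. Assumes
        numeric rank columns where higher means a better grade."""
        downgraded = df["final_grade_rank"] < df["teacher_grade_rank"]
        return downgraded.groupby(df["school_type"]).mean()

    df = pd.DataFrame({
        "school_type": ["state", "state", "private", "private"],
        "teacher_grade_rank": [5, 4, 5, 4],
        "final_grade_rank": [3, 4, 5, 4],
    })
    print(downgrade_rate_by_school_type(df))  # state: 0.5, private: 0.0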

The UK’s grading debacle shows that…

If you don’t confront the social issues involved, no amount of technology is going to improve a situation. We can’t solve social problems with engineering solutions.

— Tse, Esposito, Goh, 2019

This principle holds true well beyond grading and extends to every domain where individuals are involved and where we apply artificial intelligence to cluster, classify, or predict, such as law enforcement, immigration policy, recruiting, admissions, or performance measurement.

After all, algorithms alone can’t fix broken social systems.

About the author:
Yannique Hecht works at the intersection of strategy, customer insights, data, and innovation. While his career has spanned the aviation, travel, finance, and technology industries, he is passionate about management. Yannique specializes in developing strategies for commercializing AI & machine learning products.

Follow me on Medium or LinkedIn.

References:



Published via Towards AI
