UK’s Failed Attempt to Grade Students by an Algorithm
Author(s): Yannique Hecht
Why engineering alone isn’t enough to fix broken social systems.
After Covid-19 prevented schools from operating and holding exams as usual, the UK Department for Education attempted to determine students’ A-level and GCSE grades with a statistical algorithm developed by Ofqual, the exams regulator. Britain’s A-levels largely determine students’ chances of attending higher education and thus have life-long consequences. The algorithm predicted students’ grades based on their individual performance in earlier mock exams, which are only loosely comparable to the final exams, as well as on their school’s performance relative to other schools in the previous year.
Many critics labeled this approach inaccurate and unfair, as it resulted in significant downgrading and favored private schools. In fact, over 40% of students received lower grades than their teachers had predicted, compared to only 2% whose scores improved (Heaven, 2020). Moreover, the majority of downgraded students came from poorer, non-white communities. After a public backlash, the government was forced to abandon its plans just two days before the final release of grades.
The Ofqual Algorithm
As set out in Section 8 of Ofqual’s technical report (p. 83), the algorithm:
- Looks at historic grades in the subject at the school
- Understands how prior attainment maps to final results across England
- Predicts the achievement of previous students based on this mapping
- Predicts the achievement of current students in the same way
- Works out the proportion of students that can be matched to their prior attainment
- Creates a target set of grades
- Assigns rough grades to students based on their rank
- Assigns marks to students based on their rough grade
- Works out national grade boundaries and final grades
For more details, check out Jeni Tennison’s technical walkthrough (Tennison, 2020).
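To make the mechanics concrete, here is a deliberately simplified sketch in Python of the rank-based idea at the heart of these steps: take a school’s historical grade distribution in a subject and squeeze the current, teacher-ranked cohort into roughly the same distribution. The function names, the reduced grade scale, and the omission of the prior-attainment adjustment and small-cohort exceptions are my own illustrative assumptions; Ofqual’s actual model is considerably more involved (see Tennison, 2020).

```python
# Hypothetical, simplified sketch of rank-based grade standardisation.
# Not Ofqual's implementation: the real model also adjusts for prior
# attainment and treats small cohorts differently (Tennison, 2020).

from math import floor

# A-level grades from best to worst (simplified scale).
GRADES = ["A*", "A", "B", "C", "D", "E", "U"]

def target_distribution(historical_grades):
    """Estimate each grade's share from the school's historical results."""
    total = len(historical_grades)
    return {g: historical_grades.count(g) / total for g in GRADES}

def assign_grades(ranked_students, historical_grades):
    """Assign grades to students (ordered best to worst by teacher rank)
    so the cohort roughly matches the school's historical distribution."""
    shares = target_distribution(historical_grades)
    n = len(ranked_students)
    results, position = {}, 0
    for grade in GRADES:
        # How many students should receive this grade in a cohort of n.
        count = floor(shares[grade] * n + 0.5)
        for student in ranked_students[position:position + count]:
            results[student] = grade
        position += count
    # Any students left over due to rounding receive the lowest grade.
    for student in ranked_students[position:]:
        results[student] = GRADES[-1]
    return results

# Example: last year's cohort mostly earned Bs and Cs, so this year's
# ranked cohort is squeezed into a similar distribution.
history = ["A", "B", "B", "C", "C", "C", "D", "E"]
cohort = ["s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8"]  # best to worst
print(assign_grades(cohort, history))
```

Even this toy version makes the fairness problem visible: however strong an individual student is, they can only receive a grade that their school’s history “allows”, which is precisely why results tracked school demographics so closely.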
Differentiating Between Engineering & Social Problems
This contentious grading model is not only the latest episode in a series of overenthusiastic applications of scientific management in the British public sector (Bagehot, 2020); it also highlights AI’s wide-ranging social, technological, economic, political, legal, and ethical implications. In this case, besides questions of demographics, social mobility, inequality, and bias, the interplay between engineering and social problems deserves particular attention.
Engineering Problems
On the engineering side, two questions remain.
First, why implement an immature system nationwide in a domain with life-long consequences? An algorithm that is only marginally better than subjective human evaluation is still an improvement and produces a net social benefit; however, developing and scaling the technology through incremental trial and error seems more sensible than a premature, nationwide rollout.
Second, why replace grading and examination entirely instead of focusing on augmenting and scaling teachers’ grading capabilities? Opportunities for human input or override might have improved both the results and stakeholder acceptance.
Social Problems
With regard to social problems, inequality, represented in this case by the gap between state and private schools, cannot be addressed by an algorithm alone (Hao, 2020). Algorithms tend to inherit the flaws of the very systems they are designed to fix and, if not managed proactively and effectively, can give rise to self-fulfilling prophecies. Public awareness, scrutiny, and transparency are critical first steps toward eliminating bias, but they are far from a guarantee.
The UK’s grading debacle shows that…
If you don’t confront the social issues involved, no amount of technology is going to improve a situation. We can’t solve social problems with engineering solutions.
— Tse, Esposito, Goh, 2019
This principle holds true well beyond grading and extends to all domains where individuals are involved and where we apply artificial intelligence to cluster, classify, or predict, such as law enforcement, immigration policy, recruiting, admissions, or performance measurement.
After all, algorithms alone can’t fix broken social systems.
About the author:
Yannique Hecht works at the intersection of strategy, customer insights, data, and innovation. While his career has spanned the aviation, travel, finance, and technology industries, he is passionate about management. Yannique specializes in developing strategies for commercializing AI & machine learning products.
Follow me on Medium or LinkedIn.
References:
- Bagehot, W. (2020, August 20). How the British government rules by algorithm. The Economist. Retrieved September 01, 2020, from https://www.economist.com/britain/2020/08/20/how-the-british-government-rules-by-algorithm
- Hao, K. (2020, August 21). The UK exam debacle reminds us that algorithms can’t fix broken systems. MIT Technology Review. Retrieved September 01, 2020, from https://www.technologyreview.com/2020/08/20/1007502/uk-exam-algorithm-cant-fix-broken-system/
- Heaven, W. D. (2020, August 05). The UK is dropping an immigration algorithm that critics say is racist. MIT Technology Review. Retrieved September 01, 2020, from https://www.technologyreview.com/2020/08/05/1006034/the-uk-is-dropping-an-immigration-algorithm-that-critics-say-is-racist/
- Tennison, J. (2020, August 16). How does Ofqual’s grading algorithm work? RPubs. Retrieved September 01, 2020, from https://rpubs.com/JeniT/ofqual-algorithm
- Tse, T. C., Esposito, M., & Goh, D. (2019). The AI Republic: Building the nexus between humans and intelligent automation. S.l.: Lioncrest Publishing.