Algorithmic bias and fairness in education: a (very brief) primer

Carlo Perrotta

 

Algorithmic bias, understood as the tendency of computational systems to reflect the biases that contaminate the data on which they are trained, has been documented extensively. In 2015, a landmark incident saw Google’s image recognition algorithm misrecognize black faces as gorillas. Since then, other high-profile instances of bias have been registered, including Tay, the Microsoft-developed AI bot that learned antisemitic language from Twitter interactions, and Amazon’s recruitment model, which was found to discriminate against women.

In education, algorithmic bias became visible in 2020 when Ofqual, the UK regulator of qualifications and exams, developed a ‘grades standardisation algorithm’ for the exams sat by all students at ages 16 and 18 – the General Certificate of Secondary Education (GCSE) and the A Level exam. As widely reported in the mainstream media (and on this blog), the algorithm displayed ‘bias’ by downgrading the performance of state-funded schools and upgrading that of private or independent institutions. Meanwhile, e-proctoring companies, thrust into the spotlight following the mass school closures of 2020 and 2021, came under intense scrutiny for their privacy shortcomings but also for their potential bias against marginalised groups: students of colour, students wearing face coverings, and neurodiverse students.

Algorithmic bias in education, according to learning analytics

Two recent academic reviews from learning analytics researchers (here and here) provide a valuable overview of algorithmic bias and fairness in education. This is not a comprehensive summary, but a (subjective) selection of key highlights from both:

  1. A great deal of research in educational technology suggests a ‘widening gap’ scenario, in which digital innovations are biased in the sense that they align with exclusionary normative criteria that compound structural inequalities.
  2. There are two broad instances of algorithmic injustice: direct discrimination against individuals and groups, and the reproduction of pre-existing inequalities that results from problematic training data, for example when standardized tests – notoriously correlated with socio-economic status – are used to develop predictive models.
  3. There is a distinction between ‘traditional’ bias, which refers to systematic errors in model performance (so-called statistical bias), and a more critical understanding that emphasizes differential harms for marginalized groups. This chimes with previous attempts to develop taxonomies, like this one, which distinguishes between a ‘neutral’ connotation of bias – a deviation from a statistical norm – and a moral deviation from an established or emerging ethical criterion. In both cases we are dealing with a categorization of error, but the technical or cultural-historical framing in which this error is placed makes a world of difference.
  4. The strategies that can be deployed to minimize bias are a combination of technical solutions (collect more data, obfuscate discriminatory information, reduce representativeness gaps, curate the process of labelling data, recalibrate models by introducing counterfactual information, etc.) and incentivization (create widely accessible reference datasets and agreed fairness criteria, and use funding to encourage ML researchers to take bias seriously). A rough sketch of what one such technical check might look like follows this list.
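
To make the distinction in point 3, and the ‘technical solutions’ in point 4, a little more concrete, here is a minimal sketch of what a basic fairness audit might look like. Everything in it is hypothetical: the data is synthetic, the variable names are invented, and scikit-learn is simply assumed to be available. The point is that a single aggregate error rate captures ‘traditional’ statistical bias, whereas disaggregating the same errors by group is what surfaces differential harms.

```python
# A toy, hypothetical fairness audit: all data is synthetic and the
# variable names are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000

# Protected attribute (0 = group A, 1 = group B). 'Ability' is independent
# of group, but the observed test score is systematically depressed for
# group B, mimicking the socio-economic confound of standardized tests.
group = rng.integers(0, 2, size=n)
ability = rng.normal(loc=60, scale=10, size=n)
test_score = ability - 8 * group + rng.normal(0, 5, size=n)
passes = (ability + rng.normal(0, 5, size=n) > 60).astype(int)

# The predictive model only sees the biased proxy, not ability or group.
model = LogisticRegression().fit(test_score.reshape(-1, 1), passes)
pred = model.predict(test_score.reshape(-1, 1))

# 'Traditional' statistical bias: one aggregate error rate.
print(f"overall error rate: {np.mean(pred != passes):.3f}")

# Differential harms: the same errors broken down by group. The false
# negative rate (capable students predicted to fail) tends to be markedly
# higher for group B in this construction.
for g, label in [(0, "group A"), (1, "group B")]:
    m = group == g
    fnr = np.mean(pred[m][passes[m] == 1] == 0)
    print(f"{label}: false negative rate = {fnr:.3f}")
```

A real audit would, of course, use held-out data, several fairness metrics and, crucially, knowledge of the institutional context; the sketch only illustrates that aggregate performance and group-level harm are different questions.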

Bias in the wild

As the previous list of strategies suggests, despite a growing interest in harms, the academic discussion in learning analytics remains largely concerned with model performance. Biases, however, do not only poison datasets and models but exist first and foremost ‘in the wild’. Their consequences manifest and grow in the real world by amplifying pre-existing forms of injustice, as classifications and predictions are framed uncritically as simply mirroring the ‘real statistics’ found in the world. Such manufactured objectivity leads to allocational and representational harms. The former arise when an algorithmic system allocates resources (e.g., credit) or opportunities (e.g., jobs) unfairly to different social groups; the latter when a system (e.g., a search engine) represents some social groups in a less favourable light than others, demeans them, or fails to recognize their existence altogether.
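
As a small illustration of how an allocational harm might be detected in practice, the sketch below uses invented figures and implies no real system: it simply compares the rate at which an automated decision grants an opportunity to each group. A large gap between those selection rates is one common warning sign, although it is far from a sufficient criterion on its own.

```python
# Hypothetical allocation audit: how often does an automated system grant
# an opportunity (a place, an interview, credit) to each group?
# All decisions below are invented for illustration.
from collections import Counter

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

granted = Counter(g for g, approved in decisions if approved)
totals = Counter(g for g, _ in decisions)
rates = {g: granted[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A crude disparity measure: the ratio of the lowest to the highest
# selection rate (sometimes compared against a 0.8 'four-fifths' threshold).
print(min(rates.values()) / max(rates.values()))  # 0.33...
```

Representational harms, by contrast, rarely reduce to a tidy numerical check of this kind, which is part of the reason they are so easily overlooked.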

Another important subcategory of ‘bias in the wild’ is automation bias: when automated systems begin to influence human decision-making in the real world, problems may arise as human agents delegate agency to the automated function, trusting it to be superior to their own performance. Misplaced trust then leads to errors with multiple harmful consequences: humans omit to act or react in response to an automated decision, or they passively follow the system’s instructions, trusting them over their own judgement. In both cases, there are empirical and design factors at play. One may have been selectively exposed to instances of accurate functioning of an algorithmic system, where false positives and negatives have been concealed; one may have been instructed by a higher authority that the system is more accurate than a human; or it may simply be a consequence of poor design that undermines the system at a fundamental level. In all cases, harm arises as the human agent becomes increasingly unequipped to exercise judgement, or even to recognise a problem, beyond the purview of the automated system.

The limits of fairness by design 

Attempts to reduce or even avoid the harmful consequences of bias have led to a vibrant field of research: algorithmic fairness, or Fair Machine Learning (ML). As noted in a previous post about colour-blind algorithms, and as compellingly articulated in recent research, these efforts are important but run the risk of missing the bigger picture: technical interventions that neglect the need for more structural and political reform can have the opposite effect of crystallising inequalities through a solutionist rhetoric that only scratches the surface of complex, multidimensional issues.

As noted by Anna Lauren Hoffmann, bias and its harms cannot and should not be reduced to discrete ‘bad data sources’ or ‘bad mechanisms’. This would amount to an arbitrary bracketing that leads inevitably to a dismissal: bias and its attendant sources of algorithmic trouble become issues that ‘sneak’ inside models and, it (fallaciously) follows, can be expunged by technical means. Instead, attempts to address algorithmic bias and its harms should come to terms with the multiple intersecting forms of historical privilege and cultural capital that make the pursuit of purely technical ‘fixes’ a doomed exercise. It is essential to account for these factors when developing ‘fair’ ML models, even if they cannot be readily quantified or formalized mathematically.