In light of the recent hysteria around ChatGPT, Education International member organisations might well groan at having to read yet another post about Artificial Intelligence and education. Unfortunately, however, AI is not a topic that educators can afford to tune out of completely. Indeed, there are a lot of people wanting us to surrender to the hype and accept that we have all now entered the ‘AI age’ … that teachers and students simply need to accept it and make the best of the AI being handed down to us. One of the main reasons that ongoing debates around AI have become so boring and repetitive is this seemingly inescapable framing of the situation. Regardless of how optimistic or pessimistic the conversations around AI are, the underlying presumption is that ‘There Is No Alternative’.
In contrast, EI member organisations hopefully remain suspicious of being told to put up and shut up. Indeed, there are many powerful voices working hard to keep us passively resigned to the changes currently being ushered in under the aegis of ‘AI’ – not least the likes of Google, OpenAI, the OECD and others who stand to gain most from this technology. Rather than give in to these vested interests, the education community needs to step up and work out ways of pushing back against the current received wisdoms around AI and education.
So where to start with thinking against the forms of AI currently being so relentlessly sold to us? This blog piece presents a range of persuasive critiques of AI that are beginning to emerge from those who stand to lose most (and gain least) from this technology – Black, dis/abled and queer populations, those in the global south, Indigenous communities, eco-activists, anti-fascists, and other marginalised, disadvantaged and subaltern groups. Any educator concerned about the future of AI and education can therefore take heart from this growing counter-commentary. Here, then, are a few alternative perspectives on what AI is … and what AI might be.
Ways of thinking differently about AI
Black, Crip & queer perspectives on AI
Some of the most powerful critiques of AI are coming from traditionally minoritized groups – not least Black critics calling out racially-related misuses of the technology across the US and beyond. These range from well-publicised cases of facial recognition driving racist policing practices, through to systematic racial discrimination perpetuated by algorithms deployed to allocate welfare payments, college admissions, and mortgage loans.
Condemnation is growing around the double-edged nature of such AI-driven discriminations. Not only are these AI technologies being initially trained on data-sets that reflect historical biases and discriminations against Black populations, but they are then being deployed in institutions and settings that are structurally racist. All of this results in what Ruha Benjamin (2019) terms ‘engineered inequality’ – i.e. the tendency for AI technologies to result in inevitably oppressive and disadvantaging outcomes “given their design in a society structured by interlocking forms of domination” (Benjamin 2019, p.47).
Similar concerns are raised by critiques of AI within dis/abled and queer communities. As scholar-activists such as Ashley Shew argue, there is a distinct air of ‘technoableism’ to the ways in which AI is currently being developed. Features such as eye-tracking, voice recognition and gait analysis all work against people who do not conform to expected physical features and/or ways of thinking and acting. Shew points to a distinct lack of interest amongst AI developers in designing their products around disabled people’s experiences with technology and disability. At best, AI is developed to somehow ‘assist’ disabled people to fit better into able-bodied and neuro-typical contexts – framing disability as an individual problem that AI can somehow help overcome.
Such perspectives on AI should certainly make educators think twice about any claims for AI as a force for making education fairer. Indeed, it is highly unlikely that AI systems implemented in already unequal education contexts will somehow lead to radically different empowering or emancipatory outcomes for minoritized students and staff. Instead, it is most likely that even the most well-intentioned AI will lead to amplifications and intensifications of existing discriminatory tendencies and outcomes.
Feminist approaches to AI
Such concerns are echoed in feminist critiques of AI. These stretch back decades to writers such as Alison Adam in the 1990s highlighting how AI is founded on deeply problematic understandings of intelligence, and profound insensitivities toward social and cultural aspects of thinking, acting, and living. Since then, feminists have continued to call out AI developers and the technologies they produce as lacking any genuine concern for core human attributes such as empathy, ethics, solidarity, and care for others and the environment.
In raising these issues, feminist critics highlight how many of the problems associated with current uses of AI relate back to how power and privilege operate in modern capitalist conditions. For example, feminist activists were quick to protest against the reliance of AI development on low-paid and unpaid ‘invisible labour’ performed by women and people of colour, and often outsourced to non-Western workers. Feminist thinking reminds us that these injustices cannot simply be avoided, neutralised, or fixed. Instead, these are issues that need to be resisted, challenged and worked around in ways that rebalance the outcomes of AI tools along more equitable lines.
All of this leads to calls for the development of new forms of AI that are informed by feminist principles and can be used for feminist ends. Examples include projects where local communities take time to create their own data-sets to then train AI models on. This means that the functioning, intentions and parameters of the eventual AI tool are visible to everyone involved in its development and use – in contrast to the deliberate ‘black box’ opaqueness of most commercial AI. Other feminist forms of AI are being developed to deliberately combat the discriminatory and misogynist forms of AI that currently predominate – such as alternate forms of predictive AI that alert law enforcement to crimes such as gender-based violence and femicide. As Sophie Toupin concludes, “The promise associated with feminist AI is that a fairer, slower, consensual, collaborative AI is possible”.
Indigenous perspectives on AI
Allied to this is growing interest in reconceptualising AI through the lens of Indigenous epistemologies, cosmologies, and ways of being and doing. One initial attempt to do so is offered by Luke Munn’s recent article ‘Designing and evaluating AI according to indigenous Māori principles’, which applies the work of anthropologist, historian, and noted Māori leader Sir Hirini Moko Mead to current Western framings of AI technologies as they are starting to be applied across various societal domains.
As Munn explains, these Māori principles, values and understandings offer a distinct break with the current dominant assumptions around AI as promoted by Western IT industry and policy interests. For example, Indigenous framings of AI raise concerns around human dignity, collective interests and communal integrity, as well as contextualising impacts according to local norms. Crucially, these approaches also foreground the ways in which AI is entwined materially with natural environments – from the imposition of water-hungry data centres in drought-ridden regions through to problems of e-waste and the exploitative depletion of rare metals and minerals to construct computer hardware.
From an Indigenous standpoint, therefore, the current Western push for AI appears dangerously unbalanced and removed from the needs of people and land. When set against the Indigenous framings outlined in Munn’s paper, current dominant IT industry rhetoric – such as the complete AI-led ‘transformation’ of society, and extreme visions of an omnipotent ‘artificial general intelligence’ – appears decidedly arrogant, hubristic, disrespectful, and destructive.
Recurring issues and concerns
These are just a few aspects of a fast-growing counter-commentary on what AI is, and what AI can be. Indeed, a variety of alternative standpoints and perspectives are now being brought to bear on AI. Alongside growing calls to rethink AI along decolonialist and eco-justice lines, another emerging set of arguments against the politics of current AI draws attention to the clear “resonances between fascistic politics and AI’s base operations” (McQuillan 2022, p. 97).
While all these ideas and agendas offer very different – and sometimes contradictory – takes on AI, they do contain some common sensibilities and ambitions. For example, these critiques are usually not afraid to make radical demands. One central conclusion from many of these viewpoints is that specific forms of AI should simply not be developed and/or should be immediately discontinued and outlawed. For example, there are persistent arguments for the complete banning of facial recognition technology – or, at the very least, tight control and regulation over its use, akin to the handling of hazardous materials such as plutonium. As noted legal activist Albert Fox Cahn has reasoned: “Facial recognition is biased, broken, and antithetical to democracy. … Banning facial recognition won’t just protect civil rights: it’s a matter of life and death”.
Elsewhere, there are common calls to place marginalised and alternative perspectives front and centre in future AI design. In the short term, it is argued that future AI technologies and tools should be built around the needs of those least likely to benefit from the technology (what designers sometimes refer to as ‘edge cases’). Instead of being an afterthought, the experiences of Black, disabled and/or Indigenous communities should guide the decisions of AI designers and developers. This is reflected in calls for disability-led design, feminist AI design, Indigenous AI design guidelines, and design justice approaches to conceptualising AI.
In the long term, there are calls for these principles (and others like them) to be mandated as a basis from which to advance the sustained, fundamental reform of AI along anti-discriminatory, genuinely inclusive and decolonised lines – forcing the IT industry, policymakers and other drivers of AI to ground their actions and ambitions in larger questions of justice, inequality, and coloniality. This would require the AI industry to give up its current preoccupations with technological speed, scale, novelty, and wilful disruption. Instead, this promotes an approach to AI that is “slower, more considered, and more considerate of life in its various forms” (Munn 2023, p.70).
So where now? Reimagining what we want educational ‘AI’ to be
The ongoing AI-ification of education is not set in stone: there are plenty of reasons to believe it can be resisted, and perhaps even reimagined in radically different ways. All the different perspectives just outlined should inspire us to slow down and recalibrate current discussions around AI and education – reflecting on what these technologies cannot do, and calling out what is lost and what harms occur when these technologies are used. These are certainly not unreasonable requests. Indeed, it is telling how quickly we have descended to the point where calls to consider issues of social inequality, humanity and the environment somehow appear radical and unachievable.
We are still at a moment when there is time to speak out against the harmful forms of AI currently being pushed so relentlessly. Seen in this light, then, it seems crucial that the education community makes concerted efforts to push such values, ideals and principles into debates and decision-making around what forms of AI we collectively want to see in education. The critiques outlined in this post from Black, feminist and Indigenous perspectives suggest that the future of AI and education does not have to be a foregone conclusion that we simply need to adapt to. Instead, the incursion of AI into education is definitely something that can be resisted and reimagined.
The opinions expressed in this blog are those of the author and do not necessarily reflect any official policies or positions of Education International.