AI must be kept in check at school

The use of artificial intelligence in education needs to be subject to supervision and independent evaluations. Only then, argues Ben Williamson, will schools be able to maintain their mission of developing critical thinking and shaping the citizens of tomorrow.

Ben Williamson
Senior lecturer and co-director of the Centre for Research in Digital Education at the University of Edinburgh (United Kingdom), Ben Williamson is the author of Big Data in Education: The Digital Future of Learning, Policy and Practice (2017), as well as Digitalisation of Education in the Era of Algorithms, Automation and Artificial Intelligence (forthcoming 2024).

A global experiment with artificial intelligence is currently taking place in schools. Since ChatGPT was released in late 2022, followed swiftly by other “large language models”, hype and concern about AI’s possible impact on education have flooded the media. In response to “generative AI” applications arriving in schools, the Assistant Director-General for Education at UNESCO, Stefania Giannini, wrote that “The speed at which generative AI technologies are being integrated into education systems in the absence of checks, rules or regulations, is astonishing”.

Her assessment was blunt. “Education, given its function to protect as well as facilitate development and learning, has a special obligation to be finely attuned to the risks of AI – both the known risks and those only just coming into view”, Giannini wrote. “But too often we are ignoring the risks”.

In fact, little assessment of those risks exists. The education community needs much better support in understanding them – and measures must be put in place to protect schools from the harms AI could cause.

Mechanical teaching

Many of the risks and harms of AI have been widely reported. They include bias and discrimination as a result of training systems on historical datasets. These are serious issues that should give schools and governments good reason to question hyperbolic claims about AI. There are also more specific challenges facing education.

One of the challenges concerns the role of teachers. AI optimists often claim that the technology won’t replace teachers with automated instructors. The pitch is that AI will save teachers time, reduce their workload, and take on a range of routine tasks. Yet the risk of mechanizing teaching is that AI will demand additional forms of labour, as educators are required to adapt their pedagogic approaches to work with automated technologies. Teachers might not be replaced by robots, but AI could robotize the role of the human teacher by doing their lesson planning, preparing materials, providing feedback to students, and assessing assignments.

As the American writer Audrey Watters showed in her book Teaching Machines, arguments that automation can streamline teaching, “personalize” learning, and save educators time have a history stretching back a century. Mechanical teaching, Watters argued, is informed not by educational vision but by an industrial fantasy of super-efficient schooling.

Misleading content

Many of the most spectacular examples of AI for schools are also based on narrow views of learning. AI scientists and company executives often invoke a famous 1984 study showing that one-to-one tutoring leads to better student outcomes than whole-group instruction. Its famous statistical “achievement effect” finding is cited to support the idea of individualized instruction by automated “tutorbots”. But this reflects a limited view of the purpose of education as improving individuals’ measurable results.

Views on AI and education tend to overlook the importance of fostering critical thought and engaged citizenship

Absent from such ideas about AI in education are questions about the wider purposes of education in terms of cultivating independent critical thought, personal growth, and the capacities of engaged citizenship. Mechanical instruction targeted at improving basic measures of individual learning is not suited to addressing these wider aims and values of public education.

Forms of mechanical teaching enabled by AI are not as reliable as often claimed, either. Applications like ChatGPT or Google’s Bard are prone to producing factually inaccurate content. At a basic technical level, they simply predict the next word in a sequence, automatically generating content in response to a user prompt. While technically impressive, this can lead to the production of false or misleading content.
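A toy sketch in Python makes the mechanism concrete. The word-probability table below is invented purely for illustration – this is not any real product’s code, and actual systems learn billions of such statistics with neural networks over “tokens” rather than whole words – but the basic procedure of chaining together the likeliest continuations is the same:

    # A minimal sketch of next-word prediction, assuming an invented
    # probability table. Real models learn these statistics from vast
    # training data; nothing here is any actual product's code.
    toy_probs = {
        ("the", "moon"): {"is": 0.6, "orbits": 0.4},
        ("moon", "is"): {"made": 0.6, "bright": 0.4},
        ("is", "made"): {"of": 0.9, "from": 0.1},
        ("made", "of"): {"cheese": 0.7, "rock": 0.3},  # fluent, but false
    }

    def generate(prompt_words, steps=4):
        words = list(prompt_words)
        for _ in range(steps):
            options = toy_probs.get(tuple(words[-2:]))
            if not options:  # no learned continuation for this context
                break
            # Pick the most probable next word; nothing checks factual accuracy.
            words.append(max(options, key=options.get))
        return " ".join(words)

    print(generate(["the", "moon"]))  # prints: the moon is made of cheese

The point is the final line: the output is perfectly fluent and entirely false, and nothing in the procedure verifies it. That is exactly the failure mode teachers would have to catch.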

The technology critic Matthew Kirschenbaum has memorably imagined a coming “textpocalypse” as the web is flooded with machine-generated text. The use of such technologies might then pollute educational materials, or at least demand laborious and time-consuming efforts by teachers to check and correct them for accuracy.

Paying for access

AI can be used for purposes of censoring educational content too. In one notable example, a US school district used ChatGPT to identify books to ban from the library in order to satisfy new conservative laws on educational content. Far from being a neutral gateway to knowledge and understanding, generative AI can help to enforce reactionary and regressive social policies and restrict access to diverse cultural materials.

Besides these examples, the rush to embed AI in schools is driven less by explicit educational purposes and more by the visions and financial interests of the AI industry. AI technologies are extremely expensive to run, but AI in education is reckoned to be highly profitable. Schools, or even parents and students themselves, are expected to pay for access to AI applications, which is driving up the market value of education companies that have struck deals with major AI operators.

The result is that schools or districts will end up paying for services through contracts that enable the AI provider to offset the operating costs. Ultimately, public educational funds will be extracted from schools to keep global AI companies profitable. 

At the same time, schools may become dependent on technology companies and lose their autonomy over everyday routine functions, with the result that public education becomes conditional on unaccountable private technical systems. Additionally, AI is enormously demanding of energy resources. Running AI in schools worldwide will likely contribute to further environmental degradation.  

Auditing AI in education

AI in education raises a range of critical issues for educators and system leaders to confront. Schools worldwide need informed advice and guidance on how to engage with AI based on clearly articulated educational purposes and assessments of risks. International bodies have already engaged in major efforts to shape ethical and regulatory frameworks related to AI. It’s crucial to ensure that education is equally protected. 

Schools worldwide need informed advice and guidance on how to engage with AI

Besides regulatory instruments, national bodies and officials should also consider establishing new forms of oversight for AI in education. In the United Kingdom, the Digital Futures Commission has recently proposed an educational technology certification program. It would require companies to demonstrate clear evidence of educational benefit alongside strong protections for children before they could operate in schools. 

With the arrival of AI, organizations that could undertake independent “algorithmic auditing” – evaluations of the harms that automated systems might cause – could prevent AI being dropped into schools without the necessary checks, rules or regulations. Putting such protections in place will require political will in government departments and external pressure from influential international organizations. In the face of unchecked AI expansion, independent evaluation and certification may be the best way to protect schools from becoming sites of ongoing technological experimentation.

Guidance for regulating AI in education

A minimum age limit of 13 for the use of AI in the classroom, adoption of data protection and privacy standards, and organization of specific teacher training are among the recommendations of the first-ever global Guidance on Generative AI published by UNESCO on 7 September 2023.

As generative AI systems rapidly emerge, the Organization calls on governments to regulate their use in schools and to ensure a human-centred approach to the technology in education.

The guidance explains the techniques used by generative AI and their implications for education. It proposes key steps for governments to establish regulations and policy frameworks for the ethical use of generative AI in education.

The publication warns that generative AI systems could worsen digital data divides and calls on policy makers to address this. Indeed, current ChatGPT models are trained on data from online users, which reflects the values and dominant social norms of the Global North.

Generative AI hit public awareness in November 2022 with the launch of ChatGPT, which became the fastest-growing app in history. With the power to generate outputs such as text, images, videos, music and software code, generative AI tools have far-reaching implications for education and research. In June 2023 UNESCO warned that their use in schools was being rolled out at too rapid a pace, with a worrying lack of checks, rules or regulations.

The education sector is largely unprepared for the ethical and pedagogical integration of these rapidly evolving tools. A recent UNESCO global survey of over 450 schools and universities showed that less than 10 per cent of them had institutional policies and/or formal guidance concerning the use of generative AI applications, largely due to the absence of national regulations. 