Stuart J. Russell: "Teachers’ work may change but we will always need them"

Capable not only of providing content but also of interacting with students, generative artificial intelligence (AI) can be an excellent aid to teachers, provided its development is controlled and supervised, explains Stuart J. Russell, professor of computer science at the University of California, Berkeley (United States) and co-author, with Peter Norvig, of the reference book Artificial Intelligence: A Modern Approach.

Interview by Anuliina Savolainen
UNESCO 

Technology has truly made its way into the education sector in recent years, especially since the pandemic. In what way is the arrival of ChatGPT and other generative AI technologies a turning point?

During COVID-19 we found out that it’s possible to deliver education via the internet. More recently, large language models have had a huge impact on the public perception of AI – there has been a revolution since ChatGPT was launched in late 2022.

We know that learning with a human tutor can be two to three times more effective than traditional classroom learning. We have worked on AI tutoring systems for about 60 years, but until recently two problems prevented those systems from being as effective as a human tutor. Firstly, they could not have a conversation with the student, answer questions or develop a relationship. Secondly, AI tutoring systems do not understand the content they teach. They might present content about chemistry, but they don't understand chemistry, which means that even if they were able to have a conversation with the student, they couldn't answer questions properly.

With the advent of large language models, both of those things have now changed to some extent. You can have a coherent conversation in quite a few languages. The systems are also rather reliable when it comes to answering questions on content. There are still weaknesses that need to be addressed, but I believe that with a reasonable amount of effort, we should be able to deliver a tutor for most subjects, at least through the end of high school.

To some extent, people have now got a flavour of what it would be like to live in a world where you could tap into an arbitrary amount of intelligence to solve any problem. However, it's a little misleading, because it isn't really general intelligence we're dealing with. Much of the appearance of intelligence comes from the fact that the systems use very fluent language – but what they produce doesn't always make sense.

This year is a turning point. There will be a huge rollout of this technology and its variants, but we still have much more work to do. And all this pales in comparison to what will happen when artificial general intelligence (AGI) – intelligent systems whose breadth of applicability is at least comparable to the range of tasks that humans can address – becomes available. I believe that we will be able to deliver education for every child in the world by the end of the decade.

Getting AI tutoring systems to understand the pedagogical role is one of the major challenges

What will become of teachers in the face of these new developments?  

Although their job will change, teachers will still be needed. One of the current challenges is to get the AI tutoring systems to understand the specific nature of the pedagogical role: rather than always being right or having all the answers, they must help the students find the answers themselves. There are already some quite impressive demonstrations of how generic language models can be trained with examples of how to be a teacher.

Humans will still be needed to figure out how each pupil interacts with the system. Are they getting what they need? What are they failing to understand? What would be a good path for them to follow? Students must also learn to work together and to function in a social environment, for which they need adult guides. The model could be that a teacher works with eight to ten students and spends a lot of time with them individually, a bit like an intellectual guide. In this case we might actually end up with more teachers, not fewer.

In the traditional education system there's underachievement at all levels. There are kids who are bored because they're much more capable. And then there are kids who don't follow and who quickly lose motivation. It's terrible that we still have children who make it all the way through the school system and remain illiterate. This is clearly a problem of the system not caring about how the individual student is doing. In addition, our educational system does not really take into account the variety of learning styles – a good AI teaching system should be able to adapt very quickly to the individual learner. However, we're not there yet.

The pandemic also revealed a digital divide in the world. Why should it be any different with these latest-generation technologies?

The situation is certainly very different for economically advanced countries and countries that don’t have a real education system in place. I think that this technology will have the biggest impact in countries that currently can't afford to have a primary and secondary education system at all. Obviously, there are still children who don’t have access to phones or the internet. But I believe that this is changing relatively rapidly as tens of millions a month are gaining internet access globally. AI tutoring models also require much less bandwidth than a video call with a teacher.

To ensure worldwide reach, we would probably need either a public sector or a private sector process incentivized and facilitated by governments

The bottleneck will likely be the effort required to create customized content and tutors for each culture and language. Developing these technologies is expensive. Historically, education hasn't been viewed as a particularly desirable area for the tech industry. To ensure worldwide reach, we would probably need either a public sector process or a private sector process that is incentivized and facilitated by governments. Maybe a part of foreign aid could be used to create effective education systems. It would be a tragedy if this failed to happen because of greed on the part of corporations or mistrust on the part of governments – or for any other reason.

The development of these new applications needs to be regulated, as many technology players acknowledge. Do you see this regulation of generative AI taking shape?

Many regulation initiatives are being developed around AI. In the policy world, the open letter [calling for all AI labs to pause the training of AI systems more powerful than GPT-4 signed by tech experts and published in March 2023] seemed to precipitate the policy response. UNESCO reacted right away, calling on its member states to adopt safeguards and ensure that AI is developed in accordance with ethical principles. Among others, the Chinese government, the American government, the European Union, and tech companies woke up to the need to do something.

In the area of education, evaluation is of particular concern, and is considered by many to be a high-risk area.

Data protection and privacy are going to become more serious issues with AI. There should be strict rules about privacy. The data could be open to the teacher, and possibly to administrators in case of disciplinary or similar issues.

Another issue is that we don't know how to prevent AI systems from having inappropriate conversations with underage children. There must be strict limitations on the topics AI can discuss with humans. However, systems like ChatGPT operate as a black box with a trillion parameters, and we don't really know how they work. Lots of people are trying to figure this out; my view is that it may not be possible.

I think regulation will force the development of better technology. Regulators must not accept as an excuse that “we don't know how to do that”. If you were a nuclear regulator and the nuclear power plant operator said that they didn't know how to stop it from exploding, you wouldn't say, “Ok, that's fine”. Instead, you would tell them that they cannot use the system until the problem is solved. Nevertheless, in the long run, I’m optimistic that we'll be able to develop technology that we do understand and can control.

“Tell me, Inge”, an immersion in the life of a Holocaust survivor

Launched in September 2023, "Tell me, Inge" is an immersive educational tool that brings the Holocaust survivor Inge Auerbacher’s experience to virtual reality (VR). Young learners are able to directly engage in a conversation with Auerbacher by asking her questions about her memories. Born in Germany in 1934, Inge Auerbacher was deported, at the age of seven, to the Theresienstadt ghetto in Czechoslovakia. She was one of its few child survivors.

Developed by technology companies Storyfile and Meta in partnership with UNESCO, the World Jewish Congress, and the Claims Conference, the experience combines conversational video artificial intelligence (AI) technology and hand-drawn 3D animations.

By continuing to carry the voices of Holocaust survivors through cutting-edge technology, "Tell me, Inge" contributes to bringing historically accurate information about the Holocaust to broad audiences. The experience is available for free in English and German.