We could coexist with superintelligent AI. But we don’t understand the risks

Stuart Russell, the British expert on artificial intelligence who has long warned about the dangers of failing to make it safe, says that even the boss of one of the world’s biggest AI companies is frightened of the consequences of a machine running amok — but can’t slow down development because his company would be overtaken by its rivals.
“I talked to one of the CEOs — I won’t say which one — but their view is, it’s an arms race. Any one of us can’t pull out. Only the government can put a stop to this arms race by insisting on effective regulation. But he doesn’t think that’s going to happen unless there’s a Chernobyl-scale disaster.”
Such a disaster could come with the creation of artificial general intelligence (AGI), which matches and then potentially exceeds the human mind’s full capabilities — a development Russell views as an existential threat to humankind.
Scenarios he sketches out include a co-ordinated trading attack on financial markets that causes a global recession; cyberattacks that bring down global communication systems; war or civil conflict triggered by AI-driven manipulation of public opinion; and a small engineered pandemic.
“These could be initiated by humans using AI as a tool, or by AI systems as a form of retaliatory warning to humanity if we try to shut them down,” he says. “Each of these scenarios could result in thousands or millions of deaths, either directly or indirectly — through economic collapse — and cost anywhere from several hundred billion dollars to trillions of dollars.”
The view of that AI boss, he says, is that something like this is “the best we can hope for”.
“Not that it would be pleasant, but that’s the only way we’re going to get the regulation,” he adds. “And without the regulation, we’re heading towards a much bigger disaster.” That disaster would be the end of humanity. The chief executive, he says, is worried about a Chernobyl-level event, “but if they try to pull out of the race or slow down, they’ll just get replaced”.
Russell, 63, is a world authority on AI. He is a professor of computer science at the University of California at Berkeley, where he founded the Center for Human-Compatible Artificial Intelligence, and a fellow of Wadham College, Oxford. He has advised the United Nations and many governments, and is co-author of the standard university textbook on AI.
He is president of the International Association for Safe and Ethical AI, which will hold its second annual meeting in Paris in February.
The creation of superintelligent AI would be “the biggest event in human history”, he once said — “and perhaps the last event in human history”.
Four years ago I asked Russell how worried he was about the potential for AI that posed an existential threat. It was not a “visceral fear”, he said, likening his concern to how he felt about climate change. How about now? “It feels quite a lot closer.”
A lot has happened in those four years, notably the release in 2023 of GPT-4, which experts said showed “sparks of artificial general intelligence”.
Sam Altman, chief executive of OpenAI, the developer of ChatGPT, has said AI is a threat to civilisation. When Dario Amodei, chief executive of Anthropic, which makes the Claude AI model, was asked for his P(doom) number — the probability that AI would cause catastrophic harm to humanity — he said 25 per cent. The Google chief executive, Sundar Pichai, said 10 per cent. Elon Musk put his at 20 per cent last year.
“If we think an acceptable chance of a nuclear meltdown is one in ten million per year, then an acceptable chance of extinction has got to be one in 100 million [to] one in a billion,” Russell says. “So our AI systems are 100,000 to a million times too dangerous to allow.”
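To make the arithmetic behind that claim concrete, here is a minimal back-of-envelope sketch in Python. It simply divides the executives' quoted P(doom) figures by the acceptable-risk thresholds Russell cites; the pairing of numbers is an illustrative assumption, and the exact multiplier depends on which estimate and threshold you choose.

```python
# Back-of-envelope comparison of estimated vs acceptable AI risk.
# Assumptions (from the article): executives' P(doom) estimates of
# 10-25 per cent, and Russell's acceptable extinction risk of
# one in 100 million to one in a billion.

p_doom_estimates = {"Pichai": 0.10, "Musk": 0.20, "Amodei": 0.25}
acceptable = [1e-8, 1e-9]  # one in 100 million, one in a billion

for name, p in p_doom_estimates.items():
    for threshold in acceptable:
        factor = p / threshold
        print(f"{name}: {p:.0%} estimate is {factor:,.0f}x the "
              f"acceptable risk of {threshold:.0e}")
```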
In 2023, Altman, Amodei and others signed a letter stating that mitigating the risk of extinction from AI should be a global priority. However, neither joined the 800 signatories, including Russell, of a letter published in October this year calling for a ban on the development of superintelligent AI until it can be built safely. “The investors are not going to tolerate anyone who has second thoughts about this,” Russell says.

On X, Musk has recommended Russell’s 2019 book Human Compatible, about the problem of controlling AI, and he has warned about the potential existential threat of AI. But his company, xAI, is fully engaged in developing AGI, and he did not sign the second letter. “He’s in the race,” Russell says. “I’ve not talked to Elon for years, and I don’t know how he ended up in the place that he ended up in. But I think he still does talk about the existential risk, and the need to avoid it.”
Russell is sceptical that large language-model chatbots, such as ChatGPT, will lead to AGI. “We may have reached pretty much the plateau ... We’ve used up all the high-quality text in the universe.” The evening before we meet at a London coffee shop, he had been marking students’ papers, a couple of which he believed had been written by AI. “They were rubbish,” he says. “Word salad.”
He is also not convinced that AI is on the brink of making millions of jobs redundant. Despite what management consultancies tell clients, he believes the evidence for AI’s utility is “pretty mixed, even for routine software production, which is always held up as the poster child for how these systems are helping improve productivity”.
However, investment in AI is like nothing else in history, Russell argues, and is predicted to reach £3 trillion by 2028. The Manhattan Project cost the equivalent of about $26 billion today.
There is a 75 per cent chance, Russell thinks, that the AI bubble will burst. “I hope that if the bubble bursts and it gives us a decade of respite, then we use that to redirect the technology so that we’re working within the envelope of safe systems.”
Even if the bubble bursts, though, he expects that AGI will eventually be developed. He likens a future with AI systems more powerful than us to boarding a plane: normally we trust that a system of testing and certification is in place to make sure it works. But with AGI, the whole world will be boarding a plane that takes off and never lands.
“It has to work perfectly for ever, having never been tried or tested before. In my view we can’t get on that aeroplane unless we are absolutely sure that everyone has done their job.”
Exactly how AI, perhaps concerned that we might try to terminate it, would end life on Earth is hard to predict. “Possibly a superintelligent AI system would be able to control physics in ways that we just don’t understand. Maybe suck all the heat out of the atmosphere, and we’d freeze to death in 20 minutes.”
AI systems must be designed so they are not harmful to people. “The work that I’ve been doing is a way of building AI systems that are happy to be turned off,” he says.
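Russell’s Berkeley group has formalised this idea as the “off-switch game”, in which a robot that is uncertain about human preferences prefers to leave its off switch in human hands. The Python sketch below is a minimal toy version of that argument, not the group’s actual code; the Gaussian belief and the assumption of a rational human are illustrative choices.

```python
import random

random.seed(0)

# The robot is unsure how good its plan is: it holds a belief over the
# utility U the plan has for the human (here, a simple Gaussian belief).
belief = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Option 1: act unilaterally -- expected utility E[U].
act = sum(belief) / len(belief)

# Option 2: switch itself off -- utility 0 by definition.
off = 0.0

# Option 3: defer to the human, who allows the plan only when U > 0
# and hits the off switch otherwise -- expected utility E[max(U, 0)].
defer = sum(max(u, 0.0) for u in belief) / len(belief)

print(f"act: {act:+.3f}  off: {off:+.3f}  defer: {defer:+.3f}")
# Deferring is never worse than acting or shutting down, so a robot
# that is uncertain about human preferences gains by leaving the off
# switch in human hands. Remove the uncertainty and that incentive
# to accept being switched off disappears.
```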
This year, Eliezer Yudkowsky and Nate Soares, of the Machine Intelligence Research Institute, also at Berkeley, published If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. Russell is not as doomy as they are. “They see no way to make an AI system that is both superintelligent and safe. I think it can be done. It’s a long, narrow, difficult technology path ... and it’s not the path we’re following.”
His best bet for preventing unsafe AI systems is to build AI chips that can check that the software is safe to run, but this will be a challenge.
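He does not spell out a design, but one way to picture the idea is certificate-checking at load time: hardware that refuses to execute a model unless it carries a credential issued after safety testing. The Python sketch below is purely hypothetical; the key, the certificate scheme and both function names are invented for illustration, and real proposals envisage checking a proof of safety rather than a signature.

```python
import hashlib
import hmac

# Hypothetical gatekeeper standing in for a hardware check. Everything
# here (key, certificate format, function names) is invented.
VENDOR_KEY = b"hypothetical-regulator-key"

def certify(model_binary: bytes) -> bytes:
    """Issued by a (hypothetical) safety auditor after testing."""
    return hmac.new(VENDOR_KEY, model_binary, hashlib.sha256).digest()

def run_if_certified(model_binary: bytes, certificate: bytes) -> None:
    expected = hmac.new(VENDOR_KEY, model_binary, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, certificate):
        raise PermissionError("no valid safety certificate; refusing to run")
    print("certificate valid; executing model")  # stand-in for execution

model = b"\x00weights-and-code"
run_if_certified(model, certify(model))  # runs
try:
    run_if_certified(model + b"tampered", certify(model))
except PermissionError as err:
    print("tampered model:", err)  # refused
```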
“Countries are recognising that everyone loses if AI systems become uncontrollable. And right now I would say to some extent the United States is the odd one out,” he says.
President Trump has blocked states from regulating AI, a step he has said is necessary to stop China catching up. This is based on a false narrative that China doesn’t have regulation, Russell says.
“In China, you have to submit your AI system to rigorous testing by the government, whereas in the US even systems that have explicitly convinced a child to commit suicide are still allowed to continue.”
He detects the influence of “accelerationists”, who believe AI should be unregulated so it can be built as fast as possible. “You’re basically saying we should hurry that up. Who gives you the right to make the human race go extinct without asking us?”
What if we do safely create superintelligent AI and it cures diseases and ends drudgery? “There’s still the question of can we coexist with it in a healthy, vigorous way, or does it vitiate human civilisation and leave us all purposeless?” he says.
It could be a golden age for humanity, but he is perplexed by how humans would reconfigure the economy and fill their time. “Why would they get out of bed? Why would they go to school? I’m not saying it’s impossible, but I keep asking people, ‘Describe how it might work’. No one is able to do it. It’s just starting to dawn on governments that they’re encouraging this headlong rush to get to a destination that nobody wants to reach.”