Euan Lawson is the Editor of the BJGP. You can find him on Bluesky: @euanlawson.social
Let’s be honest, there’s a lot on: the enshittification of the NHS and general practice; a recent pandemic; war in Europe; devastation in Gaza; and authoritarianism on the march. All of which is just foreground to the climate change emergency which we can scarcely be said to be addressing. We may be living in interesting times but ‘interesting’ is doing some heavy lifting and, for many, encompasses an existential angst that is not the easiest to shake off. This book will not help with that.
The simple message from the authors is obvious from the title and it almost brings a blush to write it as it seems so, well, melodramatic. But here it is: a superintelligent AI is going to kill us all. Unless we do something about it. Yudkowsky and Soares lay out the case and are grimly persuasive.
Most of this comes down to what is known as the alignment problem: the requirement that, when we build a superintelligent AI, we make sure it isn’t hostile. This is an engineering problem on a level we have never previously encountered. If you thought building a nuclear weapon was hard then brace yourself; this is far more gnarly. Nuclear weapons were not smarter than us; they were not self-replicating or self-improving, and there was not a posse of venture-backed techno-capitalists pouring resources in with minimal governance. The simple fact is that no one yet knows how to solve alignment. And it gets worse. We can’t afford to get it wrong. Not even once. We will not get two chances, as we have not got an earthly chance of beating a super AI once it is up and running.
We have no inkling of where the threshold for superintelligence lies. It could be many years away, or we could be on the very brink now. And, no, we can’t just make an AI to solve the problem of alignment because you hit a recursive problem – how do you ensure that AI is itself aligned? You can’t build a ‘good’ AI to police the other AIs for the same reason. Zuckerberg has stated it is his express goal to build a superintelligence. Moreover, Meta is already corralling the political forces that could suppress any AI regulation, and the tech bros push on, moral compasses spinning uselessly.
There are other factors to consider. One of the challenges with the science of AI is its insane newness. We do not understand what is going on within those neural networks, and it is more accurate, in the authors’ words, to describe AIs as grown rather than built. This is science in its infancy. We don’t have to stop using AI, but the book’s compelling, urgent message is that we absolutely must pay attention to the risks of creating a superintelligent AI. That means encouraging and cajoling our policymakers to act.
The concerns raised in this book are not a bolt from the deep blue. Yudkowsky and Soares are not just a couple of cranks, and they are not the only ones who are worried. The Center for AI Safety put out a statement on AI risk in May 2023, and the signatories included the British-Canadian Nobel Prize winner Geoffrey Hinton, a ‘godfather’ of AI. Hinton quit Google so he could raise concerns about the risks of AI, including the existential threat. He readily admits we don’t know the risk – after all, we have never experienced this before – but, if pushed, he might go with a “gut” feeling that there is a 10–20% chance it will result in human extinction. Yes, it is that serious.
Yudkowsky and Soares remain positive that global change to mitigate the risks is possible. They also acknowledge the difficulty of living with this existential threat. At the end of this short book, the authors quote CS Lewis, who wrote in his 1948 essay, On Living in an Atomic Age, about living under the shadow of nuclear annihilation: “If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things.”
Featured book: Eliezer Yudkowsky and Nate Soares, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (2025)
Featured Photo by Conny Schneider on Unsplash
There is also an excellent extended review of near-future AI concerns, “Computers that want things”, in this month’s London Review of Books, which includes a review of Yudkowsky and Soares’ book: https://www.lrb.co.uk/the-paper/v47/n18/james-meek/computers-that-want-things
Thanks David – I will check that out.
This is not new. Look at the video of the Searle and Bowden debate (1984). The same arguments still apply. Searle is correct – AI has no ‘understanding’ at all and, according to the ‘Chinese room’ thought experiment (which is correct), theoretically never can. So we are letting something with no understanding run our lives and direct our health policy?
Thanks Dave. One of the difficulties here is that we are all using one term ‘AI’ to mean a lot of different things. On one level it is just a glorified spellchecker and at the other end of the spectrum it is a super AI that can kill us all…
There is some attempt to categorise this in the book – though, obviously, it is largely concerned with the super AI threats. As you suggest, there are lots of other ways AI could be harmful – as well as massively beneficial – but it feels like the public conversation is limited partly by this simple lack of a grammar for AI debate.
Sorry, it’s the video of Searle and Boden (1984).
The video is from 1984 – prophetic??