Superintelligence: Paths, Dangers, Strategies

Richard Armitage is a GP and Honorary Assistant Professor at the University of Nottingham’s Academic Unit of Population and Lifespan Sciences. He is on X: @drricharmitage

 

Since the release of ChatGPT in November 2022, artificial intelligence (AI) has both entered the common lexicon and sparked substantial public interest. A blunt yet clear example of this transition is the drastic increase in worldwide Google searches for ‘AI’ from late 2022, which reached a record high in February 2024.

You would therefore be forgiven for thinking that AI is suddenly and only recently a ‘big thing.’  Yet the current hype was preceded by a decades-long history of AI research, a field of academic study widely considered to have been founded at the 1956 Dartmouth Summer Research Project on Artificial Intelligence.1  Since then, a meandering trajectory of technical successes and ‘AI winters’ has unfolded, eventually leading to the large language models (LLMs) that have nudged AI into today’s public consciousness.

Alongside those who aim to develop transformational AI as quickly as possible – the so-called ‘Effective Accelerationism’ movement, or ‘e/acc’ – exists a smaller and often ridiculed group of scientists and philosophers who call attention to the profound dangers inherent in advanced AI – the ‘decels’ and ‘doomers.’2  One of the most prominent concerned figures is Nick Bostrom, the Oxford philosopher whose wide-ranging works include studies of the ethics of human enhancement,3 anthropic reasoning,4 the simulation argument,5 and existential risk.6  I first read his 2014 book Superintelligence: Paths, Dangers, Strategies7 five years ago, and it convinced me that the risks a highly capable AI system (a ‘superintelligence’) would pose to humanity ought to be taken very seriously before such a system is brought into existence.  These threats are of a different kind and scale to those posed by the AIs in existence today, including those developed for use in medicine and healthcare (such as the consequences of training set bias,8 uncertainties over clinical accountability, and problems regarding data privacy, transparency, and explainability),9 and are of a truly existential nature.  In light of recent advancements in AI, I revisited the book to reconsider its arguments in the context of today’s digital technology landscape.

While the human brain has limitations of size (restricted by cranial dimensions), speed (neuronal action potentials propagate at around 100 m/s), and number (there are only as many brains as there are living humans), the architecture of potential digital minds has no such upper limits (consider planet-sized computers, electrical signals that travel at 300,000,000 m/s, and the fact that many trillions of identical copies could be produced in silico).  Accordingly, computer brains might exceed the cognitive abilities of human beings by many orders of magnitude.  Bostrom considers a ‘superintelligence’ to be an entity that surpasses the cognitive performance of human beings in virtually all domains of interest, and his book examines how humanity might navigate the transition to a world in which digital minds can out-think their biological creators.

After outlining the different kinds of superintelligence (speed, collective, and quality), Bostrom discusses the various paths by which superintelligence might be achieved, including biological cognitive enhancement, whole brain emulation, and AI.  A large proportion of the book is dedicated to the control problem: how humanity can ensure that the superintelligent entities it brings into existence act in ways that are beneficial to humanity.  Bostrom introduces the ‘orthogonality thesis,’ which posits that intelligence and final goals are independent of one another: a superintelligent entity need not harbour goals that are explicitly and deliberately anti-human (such as those depicted in Terminator-style mischaracterisations of the control problem) to endanger us, since goals that are merely orthogonal to human values could, given the far superior capabilities of the superintelligence, pose an existential threat through value misalignment.

The book explores various scenarios that could arise from the advent of superintelligence, ranging from utopian outcomes where superintelligence solves humanity’s most intractable problems, to dystopian futures where humans are outpaced or even endangered by their own creations.  Bostrom emphasises the importance of recognising, preparing for, and mitigating these risks before they become unmanageable, and suggests strategies for the safe development of these highly capable digital systems.  These include establishing research priorities and policy recommendations that aim to bring about advanced AI in ways that maximise its utility for humans while minimising the risks that careless development would produce.

I found the book to offer a thorough analysis of complex ideas with a clarity that is accessible and enthralling for non-technical readers such as myself.  While many of these ideas were highly speculative in 2014 (and when I first encountered them in 2019), recent breakthroughs in machine learning algorithms, neural networks, and the availability of computational power have profoundly altered the context in which I returned to them in early 2024.  Today, many of the research agendas called for by Bostrom in 2014 are underway, and both recognition of the control problem and efforts to address it are widespread.  And yet, as the capabilities of AI continue to increase apace – with advanced systems demonstrating behaviours that were unforeseen and unintended – the importance of the orthogonality thesis becomes increasingly profound, while both the e/acc and doomer communities double down on their positions.

A decade after its publication, Superintelligence: Paths, Dangers, Strategies constitutes even more urgent reading against the backdrop of modern AI.  The public’s growing interest in these technologies should be complemented by an appreciation of the inherent dangers of their advanced forms.  Readers of this book will come to understand the growing necessity for productive global coordination on AI ethics and AI governance across both public and private sectors.  The book is not sci-fi, but a warning.

Featured Book: N Bostrom. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.

Featured image by Christopher Burns on Unsplash

References

  1. M Wooldridge. A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going. Flatiron Books, 2021.
  2. K Roose. This A.I. Subculture’s Motto: Go, Go, Go. The New York Times 10 December 2023. https://www.nytimes.com/2023/12/10/technology/ai-acceleration.html [accessed 01 March 2024]
  3. N Bostrom and A Sandberg. Cognitive Enhancement: Methods, Ethics, Regulatory Challenges. Science and Engineering Ethics 2009; 15: 311–341. DOI: 10.1007/s11948-009-9142-5
  4. N Bostrom. Anthropic Bias: Observation Selection Effects in Science and Philosophy. Routledge, 2010.
  5. N Bostrom. Are We Living in a Computer Simulation? The Philosophical Quarterly 2003; 53(211): 243–255. DOI: 10.1111/1467-9213.00309
  6. N Bostrom and MM Ćirković. Global Catastrophic Risks. Oxford University Press, 2008.
  7. N Bostrom. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
  8. T Zack, E Lehman, M Suzgun, et al. Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study. The Lancet Digital Health January 2024; 6(1): e12-e22. DOI: 10.1016/S2589-7500(23)00225-X
  9. S Harrer. Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine. EBioMedicine April 2023; 90: 104512. DOI: 10.1016/j.ebiom.2023.104512