Artificial Intelligence: the future is dark?
An extract from 2084: Artificial Intelligence and The Future of Humanity, the new book by Professor John Lennox
“Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.” – Vladimir Putin, 2017
Environmental expert James Lovelock, who developed the Gaia hypothesis – the idea that the earth is a self-regulating ecosystem – suggests, in his usual provocative way, that humans may have had their time and should make way for something new. In an interview reported in The Guardian, he says: “Because quite soon – before we’ve reached the end of this century, even – I think that what people call robots will have taken over.” (1)
In April 2018 at the TED talks in Vancouver, physicist and cosmologist Max Tegmark, president of the Future of Life Institute at MIT, made this rather grandiose statement: “In creating AI, we’re birthing a new form of life with unlimited potential for good or ill.” (2) How much science lies behind this statement is another matter since, to date, all AI and machine learning algorithms are, to quote the neat phrase of Rosalind Picard: “no more alive than Microsoft Word.”
A book by Sir Nigel Shadbolt and Roger Hampson entitled The Digital Ape carries the subtitle How to Live (in Peace) with Smart Machines. (3) They are optimistic that humans will still be in charge, provided we approach the process sensibly. But is this optimism justified? The director of Cambridge University’s Centre for the Study of Existential Risk said: “We live in a world that could become fraught with . . . hazards from the misuse of AI and we need to take ownership of the problem – because the risks are real.” (4)
The ethical questions are urgent since AI is regarded by experts as a transformative technology in the same league as electricity. The United States and China are determined to dominate the field, and China expects to win by 2030. President Emmanuel Macron wants to make France the AI capital of the world.
It would, however, make more sense to compare AI with nuclear energy than with electricity. Research into nuclear energy led to nuclear power stations, but it also triggered a nuclear arms race that brought the world to the brink of extinction. AI creates problems of similar, or even greater, magnitude. The brilliant play Copenhagen by Michael Frayn explores the question of whether scientists should simply follow the mathematics and physics without regard to the consequences of what they are developing, or whether they should have moral qualms about it. (5)
The context of the play is the research that led to nuclear fission. Exactly the same issues are raised by AI, except that AI is accessible to many more people than atomic physics and does not need very sophisticated and expensive facilities. You cannot build a nuclear bomb in your bedroom, but you can hack your way around the world and cause substantial damage.
We need to stop and ask: What is the truth behind claims like those of Lovelock and Tegmark? Are they perhaps exaggerated speculation that goes far beyond what scientific research has actually shown? There may well be some validity in the observation that the amount of unjustified speculation about AI is inversely proportional to the amount of actual hands-on work in AI that the claimant has done. For it would seem that those scientists who actually build AI systems tend to be more cautious in their predictions about its potential than those who do not.
There is also the question of what worldview is driving all of this. What assumptions are being made? Are they in the interests of all of us, or simply of an elite few who wish to dominate for their own purposes? The answers to these questions will depend on the worldview of those who supply them: the participants in AI research, application, and debate. Of particular interest is their view of the nature of ultimate reality. Physicist Sir John Polkinghorne, who once taught me quantum mechanics at Cambridge, writes: “If we are to understand the nature of reality, we have only two possible starting points: either the brute fact of the physical world or the brute fact of a divine will and purpose behind that physical world.” (6)
This is an excerpt taken from 2084: Artificial Intelligence and The Future of Humanity by Professor John C. Lennox, copyright © 2020 by Zondervan. Used by permission of Zondervan www.zondervan.com
John C. Lennox (PhD, DPhil, DSc) is Professor of Mathematics in the University of Oxford (Emeritus), Fellow in Mathematics and the Philosophy of Science, and Pastoral Advisor at Green Templeton College, Oxford. He is author of God's Undertaker: Has Science Buried God? on the interface between science, philosophy, and theology.
He lectures extensively in North America and in Eastern and Western Europe on mathematics, the philosophy of science, and the intellectual defence of Christianity, and he has publicly debated New Atheists Richard Dawkins and Christopher Hitchens. John is married to Sally; they have three grown children and ten grandchildren and live near Oxford.
(1) Quoted in Decca Aitkenhead, “James Lovelock: ‘Before the End of This Century, Robots Will Have Taken Over,’” The Guardian, 30 September 2016.
(2) Quoted in Matt Ridley, “Britain Can Show the World the Best of AI,” The Times, 16 April 2018.
(3) Nigel Shadbolt and Roger Hampson, The Digital Ape: How to Live (in Peace) with Smart Machines (Oxford: Oxford University Press, 2019).
(4) Quoted in Jane Wakefield, “AI Ripe for Exploitation, Experts Warn,” BBC News, 21 February 2018, www.bbc.com/news/technology-43127533.
(5) Michael Frayn, Copenhagen (New York: Bloomsbury, 2017).
(6) John Polkinghorne, Serious Talk: Science and Religion in Dialogue (Harrisburg, PA: Trinity, 1995), 3.