AI and what it means for humanity

[Image: 2084 book cover]

We hear a lot about what artificial intelligence means for business and research: how it can speed up and streamline tedious processes, and so on.

But if machine intelligence is going to be our new normal, how does that affect what it means to be human? John C. Lennox, emeritus professor of mathematics at the University of Oxford, has written a new book exploring these questions. In this exclusive extract he looks at how our brains compare to computers.

The book, 2084: Artificial Intelligence and the Future of Humanity, is available now.

ARE BRAINS COMPUTERS?

The main worldview behind much writing about the future of humanity is atheism. It is expressed by physicist Sean Carroll in his current bestseller The Big Picture: "We humans are blobs of organized mud, which through the impersonal workings of nature’s patterns have developed the capacity to contemplate and cherish and engage with the intimidating complexity of the world around us... The meaning we find in life is not transcendent."(7) Such reductionist physicalism holds that human cognitive abilities have emerged naturally from the biosphere and therefore sees no reason why the same kind of thing can’t happen again, once a high enough level of organization is reached -- that is, life emerging from the silicon sphere. Nick Bostrom puts it this way: "We know that blind evolutionary processes can produce human-level general intelligence, since they have already done so at least once. Evolutionary processes with foresight -- that is, genetic programs designed and guided by an intelligent human programmer -- should be able to achieve a similar outcome with far greater efficiency."(8)

The claim that Bostrom makes in the first sentence here is wide open to challenge, but this is not the place to challenge it.(9) What concerns me here is rather the impression, so easily given by statements like Bostrom's, that the human brain is no more than a computer. It is one thing to say that the brain functions in certain ways like a computer. It is an entirely different thing to say that it is nothing but a computer. Simulation is not duplication.

We mentioned earlier that the mathematical genius Alan Turing tried to characterize artificial intelligence in machine terms: an artificial system that could pass as human must be considered intelligent. In Turing's day, the test that we now call the Turing Test was limited by the available technology. But for the sake of argument, suppose we waive that objection. Suppose we could construct robots that were physically indistinguishable from humans, as in many sci-fi movies, and cognitively at least capable of fooling us. Would that make them actually "intelligent"? I think that it would not. What convinces me of that is the famous Chinese Room -- a thought experiment invented by the Berkeley philosopher John Searle. Here is his explanation of it:

The argument proceeds by the following thought experiment. Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.(10)

We must not mistake a computer simulation for the real thing. After all, no one would mistake a computer simulation of the weather for the weather. We should therefore not get confused over simulation of minds.
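
Searle's setup can even be caricatured in a few lines of code. The following Python sketch (the rule book and phrases here are invented for illustration, not taken from Searle) returns fluent Chinese answers by pure symbol matching; nothing in it represents the meaning of a single word.

```python
# A toy "Chinese Room": the rule book below is an invented lookup table
# mapping input symbols to output symbols. Following it requires no
# understanding of Chinese -- only shape matching.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",      # "How's the weather?" -> "Fine today."
}

def chinese_room(symbols: str) -> str:
    """Pass out whatever symbols the rule book dictates for the input."""
    # Nothing here represents the *meaning* of any symbol; the function
    # succeeds or fails purely on pattern matching.
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # fluent output, zero understanding
```

However large the rule book grows, the step from squiggle in to squiggle out never involves understanding, which is exactly Searle's point.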

Distinguished Oxford mathematician Roger Penrose goes even further, arguing that the brain must be more than a computer, since it can do things no computer can do even in theory. Therefore, in his view, no computer can ever simulate the mind. Of course, if intelligence is defined -- as some people wish -- as "the capacity to pass the Turing Test," then I would want to say that humans have something more than that kind of intelligence, something that AI, no matter how advanced, will never have.

In an article for Evolution News, software architect Brendan Dixon wrote: "Computers do not play games like humans play games. Computers do not create like humans create. Computers, at their most fundamental level, do not even solve computational problems like humans solve computational problems." Dixon concluded: "The real problem with AI, then, is . . . the likelihood of our blindly depending on machines, lulled to trust them by bad metaphors. The danger is that computers will fail us, and possibly do so in very bad ways."(11)

Robert Epstein, a former editor-in-chief of Psychology Today, also rejects the assumption that the brain works like a computer. He says:

Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.

Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?(12)

A neural network can pick out a cat on a YouTube video, but it has no concept of what a cat is. We need once more to remind ourselves that we are not talking about conscious entities. AI expert Margaret Boden, FBA, writes:

Computers don't have goals of their own. The fact that a computer is following any goals at all can always be explained with reference to the goals of some human agent. (That's why responsibility for the actions of AI systems lies with their users, manufacturers and/or retailers – not with the systems themselves.) Besides this, an AI program's "goals," "priorities" and "values" don't matter to the system. When DeepMind's AlphaGo beat the world champion Lee Sedol in 2016, it felt no satisfaction, still less exultation. And when the then-reigning chess program Stockfish 8 was trounced by AlphaZero a year later (even though AlphaZero had been given no data or advice about how humans play), it wasn't beset by disappointment or humiliation. Garry Kasparov, by contrast, was devastated when he was beaten at chess by IBM's Deep Blue in 1997. . .

Moreover, it makes no sense to imagine that future AI might have needs. They don't need sociality or respect in order to work well. A program either works, or it doesn't. For needs are intrinsic to, and their satisfaction is necessary for, autonomously existing systems – that is, living organisms. They can’t sensibly be ascribed to artefacts.(13)

The hype in this area is intensified by the fact that terms like "neural networks," "deep learning," and "machine learning" seem to imply the presence of human-like intelligence, when these terms essentially refer to statistical methods used to extract probable patterns from huge datasets. The human brain is not a protein nanotech computer! Mathematician Hannah Fry makes a wry and apt comment:

For the time being, worrying about evil AI is a bit like worrying about overcrowding on Mars. Maybe one day we'll get to the point where computer intelligence surpasses human intelligence, but we're nowhere near it yet. Frankly, we’re still quite a long way away from creating hedgehog-level intelligence. So far, no one’s even managed to get past worm.(14)
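
Fry's deflationary point, and the earlier remark that "machine learning" names statistical pattern extraction, can be made concrete. Here is a minimal Python sketch with invented toy data (nothing below comes from the sources quoted): a "cat detector" that is, at bottom, just averaged vectors and distance comparisons.

```python
# A minimal sketch of "machine learning" as statistical pattern extraction.
# The data are invented toy feature vectors; the "model" is nothing more
# than two averages and a distance comparison.

import numpy as np

rng = np.random.default_rng(0)
# Toy "image features": two numbers per example; 50 cats, 50 non-cats.
cats     = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(50, 2))
not_cats = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
X = np.vstack([cats, not_cats])
y = np.array([1] * 50 + [0] * 50)

# Nearest-centroid classifier: the entire "model" is two averaged vectors.
centroid_cat = X[y == 1].mean(axis=0)
centroid_not = X[y == 0].mean(axis=0)

def predict(x):
    """Label a feature vector by whichever class average it lies closer to."""
    return "cat" if np.linalg.norm(x - centroid_cat) < np.linalg.norm(x - centroid_not) else "not cat"

print(predict(np.array([1.8, 2.1])))  # "cat" -- a pattern match, not a concept
```

The entire "model" here is two averaged vectors; a deep network replaces the averaging with millions of fitted parameters, but the relation between data and output remains statistical throughout, which is why the label "cat" carries no concept of a cat with it.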

Taken from 2084 by John Lennox Copyright © 2020 by Zondervan. Used by permission of Zondervan. www.zondervan.com


NOTES

7. Sean Carroll, The Big Picture: On the Origins of Life, Meaning, and the Universe Itself (London: Oneworld, 2016), 3, 5.
8. Nick Bostrom, Superintelligence (Oxford: Oxford University Press, 2014), 23.
9. See my book God's Undertaker (Oxford: Lion, 2007).
10. "Chinese Room Argument," in The MIT Encyclopedia of the Cognitive Scienc- es, ed. Robert A. Wilson and Frank C. Keil (Cambridge, MA: MIT Press, 1999), 115.
11. Brendan Dixon, "No, Your Brain Isn't a Three-Pound Meat Computer," Evolution News, 20 May 2016, https://evolutionnews.org/2016/05/no_your_brain_i.
12. Robert Epstein, "The Empty Brain," Aeon, 18 May 2016, https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer, italics original.
13. Margaret Boden, "Robot Says: Whatever," Aeon, 13 August 2018, https://aeon.co/essays/the-robots-wont-take-over-because-they-couldnt-care-less.
14. Hannah Fry, Hello World: Being Human in the Age of Algorithms (New York: Norton, 2018), 12–13.

