“The development of full artificial intelligence
could spell the end of the human race”
Prof Stephen Hawking
[AI is] “our biggest existential threat”
Elon Musk
Statements like these have been made by a number of prominent personalities and thought leaders in recent years, and they have no doubt increased the general public's anxiety about AI-powered machines. At the heart of this anxiety lies the fear that as machines become smarter, and their capacity for “thinking” and “analysis” grows, there will come an inevitable moment when those machines become more intelligent than humans. This event is commonly described as the technological singularity.
The Gap-filling Route to the Technological Singularity
Many of the arguments that support the inevitability of the technological singularity are founded on the comparison of human intelligence with some future computer-based super intelligence. Typically, this argument goes something like this:
“computers can already outperform humans in a number of cognitive tasks, and, given the increasing pace of technological development, it is inevitable that there will come a time when they will outperform humans in all areas of cognitive competence”
One way of illustrating this inevitability, used by some proponents of the technological singularity, is shown in the figure below. The inner patterned circle represents the full range of human cognitive abilities, with the distance from the centre representing the degree of cognitive competence humans possess at specific tasks. The circle segments, or shards, that extend beyond the inner circle represent the areas where machines currently outperform humans at specific tasks. The inevitability of the technological singularity is then illustrated by observing that technological advancement will eventually enable machines to fill in the gaps between the shards.
Whilst it is true that computers can out-perform humans in some very specific areas (e.g. AlphaGo and Deep Blue each beating the world champion at their respective games), there are three underlying assumptions of the proposed gap-filling route to ‘super intelligence’ that need to be exposed.
1) ‘Machines will Inevitably Out-perform Humans in Everything’
The first assumption is that machines will inevitably be able to out-perform humans in all aspects of their cognitive capacity. To put this assumption into perspective, it should be noted that in every case where machines have out-performed humans on a cognitive task, the task itself has favoured the computational, rather than cognitive, advantage that machines have over humans. AlphaGo, for example, searched a vast game tree to anticipate and evaluate far more viable future move sequences than its human opponent ever could. This kind of cognitive task lends itself to superior algorithmic solutions.
The same is true for other areas where large data sets or significant numbers of options have to be computed. This is what computers are particularly good at. But one cannot assume that all, or even a significant proportion of cognitive tasks that humans perform well will afford this kind of computational advantage.
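This computational advantage is easy to make concrete with a toy example. The sketch below is purely illustrative (and far simpler than AlphaGo's actual combination of tree search and neural networks): it uses exhaustive minimax search to play the game of Nim perfectly, by evaluating every possible future move sequence — precisely the kind of brute-force lookahead that machines do well and humans cannot.

```python
# Exhaustive minimax search for a simple game of Nim:
# players alternately remove 1-3 stones, and the player
# who takes the last stone wins. Scores are from the
# maximising player's perspective: +1 = forced win, -1 = forced loss.

def best_move(stones, maximising=True):
    """Return (score, move) for the player whose turn it is."""
    if stones == 0:
        # The previous player took the last stone and won.
        return (-1 if maximising else 1), None
    best = None
    for take in (1, 2, 3):
        if take > stones:
            break
        score, _ = best_move(stones - take, not maximising)
        if (best is None
                or (maximising and score > best[0])
                or (not maximising and score < best[0])):
            best = (score, take)
    return best

# With 4 stones, the player to move loses against perfect play;
# with 5 stones, taking 1 stone hands the opponent that losing position.
score, move = best_move(5)
```

The point of the sketch is not the game itself but the shape of the solution: the machine wins by enumerating options at a scale no human can match, which is exactly the advantage that does not obviously transfer to other kinds of cognitive task.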
2) ‘Human Cognitive Capacity is Bounded’
The second key assumption of the ‘gap-filling’ route to the technological singularity is that human cognitive capacity is bounded, fixed and slow to adapt relative to the speed of technological development. There are, however, some credible claims emerging from neuroscience studies suggesting that the brain has a limitless capacity in terms of the number of different thoughts it can entertain. This relates to the thesis that the brain is driven by something called ‘chaotic dynamics’, an area of research that I have previously contributed to myself.
Chaotic phenomena have some fascinating properties which we don’t have the space to cover here. One of them is that a chaotic system typically passes through an infinite number of distinct dynamic states. If these dynamic states correspond to thoughts or memories, and if the brain really is driven by chaotic dynamics, then it is likely that the brain has an infinite capacity for unique thoughts and cognitive activity.
Whilst there is no conclusive neuroscientific proof that the dynamics of the brain are chaotic, one cannot simply assume the opposite: that human cognitive capacity is bounded, and that the mind is constrained to a fixed number of thoughts and thought processes.
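The flavour of chaotic dynamics can be shown with the simplest textbook example, the logistic map. The sketch below is purely illustrative and is in no way a model of the brain: it shows how one simple deterministic rule produces trajectories that never settle into repetition, and how two starting states differing by one part in ten billion rapidly become completely different.

```python
# The logistic map x -> r*x*(1-x) is fully chaotic at r = 4.
# Two orbits with almost identical starting points diverge rapidly:
# a hallmark of chaos (sensitive dependence on initial conditions).

def logistic_orbit(x0, steps, r=4.0):
    """Iterate the logistic map 'steps' times from x0, returning the orbit."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2, 50)
b = logistic_orbit(0.2 + 1e-10, 50)  # shifted by one part in ten billion

# Early on the orbits are indistinguishable; within a few dozen
# iterations they bear no resemblance to each other.
max_separation = max(abs(x - y) for x, y in zip(a, b))
```

For almost every starting point the orbit of this map never repeats, which is the sense in which even a very simple chaotic system has an infinite repertoire of distinct states.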
This assumption also underlies the claim that human cognitive capacity develops slowly relative to the speed of technological advancement. Typically, this refers to the physical development of the brain, which takes place at a slow evolutionary pace. But it neglects the agility and adaptability of the individual mind that the brain already facilitates. The brain’s structure does not need to keep pace with technological development for it to outperform machine intelligence on a broad range of cognitive tasks.
3) ‘Robot Autonomy is Equivalent to Human Autonomy’
The third, and perhaps most significant assumption that underlies the ‘gap-filling’ route to the technological singularity relates to human versus machine autonomy. The ability to think and act freely and independently is a fundamental facet of human cognitive capacity. It is a primary source of human creativity and originality which enables us to develop new ways of thinking and acting within the world.
Machine autonomy, in contrast, is subject to the limits of algorithmic specification and is consequently very limited in comparison to human free will. I have much more to say on this that will have to be left to another post. In my view, though, cognitive autonomy is a gap that will never be filled by intelligent machines.
So the gap-filling route to the technological singularity seems to me to be highly improbable, and I think it should be relegated to the category of a widely held but false belief; i.e. a myth.