Many believe that technological intelligence will soon affect human life more profoundly than we have ever imagined. These advances may even fundamentally transform what it means to be human.
Is this really possible? Is it already happening?
Author, futurist, and renowned inventor Ray Kurzweil remains optimistic – both about the inevitability of such a tech explosion and about our ability to use it to our advantage.
Kurzweil is an avid supporter of the technological singularity theory. The theory asserts that:
- A point is coming in the near future when technological intelligence will far exceed current human cognitive capacity.
- From that point, ever-greater artificial intelligences will be created at an exponential rate.
- This will profoundly affect human life – even change what humans are capable of.
Kurzweil predicts the technological singularity will occur in 2045. He also believes we can turn this intelligence explosion to our benefit by linking up with the technology and harnessing its power in many mind-boggling ways.
For example, Kurzweil thinks the means of effectively extending the human lifespan may already be here – a goal humans have seriously pursued since at least the Daoists of ancient China.
Looking Critically at the Singularity
There are critics – of both the theory’s inevitability and the assumption that it would be a good thing. Let us start with its inevitability.
Note how the theory relies on one extremely important premise:
Technological advancement can continue to increase at exponential levels, eventually reaching heights far above what human beings can currently comprehend.
But what, exactly, is the basis for this assumption? Just because we have reached a certain level of advancement and continue to advance, why assume we can create ever-greater intelligence in the future? Do we not possess only a limited capacity to create such intelligence? Won’t the pace of advancement level off at some point?
And if not – if our technology becomes more advanced than human cognition – how are we to remain confident that we will be able to understand it, control it, or even be positively affected by it?
Questions such as these remain puzzling for skeptical and careful minds.
Can Computers Really Become “Human”?
The Chinese Room Argument
Criticisms of artificial intelligence (A.I.) are nothing new. In 1980, American philosopher John Searle argued that A.I. could never possess a genuine “mind” or “consciousness.” He did so with his famous Chinese Room argument.
Searle argued that while a computer might be programmed to respond correctly to a Chinese speaker – even trick the speaker into thinking it was another Chinese speaker – it still could not possess the intentionality to truly understand the conversation. While the program may be tapping into the correct syntax, it cannot grasp the true semantics, the meaning, of the conversation.
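Searle’s syntax-versus-semantics point can be illustrated with a deliberately crude sketch. This is a hypothetical toy, not any real system: a program that pairs Chinese inputs with Chinese outputs by pure symbol lookup, with no representation of meaning anywhere in it.

```python
# Toy "Chinese Room": a purely syntactic rule book that pairs
# input symbol strings with output symbol strings. Nothing in the
# program models what any symbol means -- only which shapes go
# with which shapes, which is Searle's point about syntax.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "你是谁？": "我是一个说中文的人。",    # "Who are you?" -> "I am a Chinese speaker."
}

def room_reply(symbols: str) -> str:
    """Return the output paired with the input, or a stock fallback.
    The lookup succeeds or fails on string shape alone."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(room_reply("你好吗？"))
```

A large enough rule book could fool a conversational partner, yet the program understands Chinese no more than Searle’s rule-following occupant of the room does.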
Thus, true artificial intelligence is not possible, according to Searle. His point was well received, and remained relatively unchallenged for quite some time.
The reason is that many of us agree with Searle’s sentiment. Artificial intelligence may reach exponentially higher levels, but most people don’t think it could ever reach the level depicted in films like Short Circuit, where the robot Johnny Five somehow feels human emotion.
Indeed, for most of us, there seems to be something distinctly human – something private, subjective, psychological, and inexplicable – that no machine could ever capture. We intuitively feel that these ‘qualia’ of human experience could never be grasped by artificial intelligence.
Or could they?
Redirecting the Argument
Much has changed since 1980. We aren’t dealing with basic computer programs anymore, and many have since challenged Searle’s basic point.
But what if philosophers have simply been approaching the problem from the wrong perspective? What if subjective human experience is ultimately a moot point, so long as the technology functionally delivers results that exceed current human capacity?
Let us assume A.I. will never possess the qualia of subjective consciousness. What if it can still give us the tools to understand how to improve our own capacities, to the extent that we change the potential for subjective consciousness?
This is one of the most interesting questions raised by the technological singularity theory. The possibilities of human enhancement in the near future are fecund, yet also fraught with danger.
It is Approaching . . .
Practically speaking, the transformation has already begun. Yet our limited human perspective makes it hard for us to see and understand it adequately.
However, if we reflect on the technological progress of just fifty years ago, we begin to realize we are already doing things we once thought impossible. In essence, human life has already evolved drastically.
The future scientific, philosophical, and ethical implications of these technological advances are hard to pinpoint or predict. Futurists disagree about whether the tech explosion will be a good or a bad thing. With enhanced technology often comes increased risk (and potential reward).
Consider this: is life better or worse than it was fifty years ago? The answer depends on your perspective.
Regardless, it seems the better we all understand the intricacies of new technology, the better equipped we will be to make sound scientific, philosophical, and ethical decisions regarding it.
Accept the Inevitable. Unleash the Possible.
As scary as the whole process may be, a sound strategy for us to adopt now is simply to accept the inevitable and remain as informed as possible. How far A.I. will advance is unknown, but preparation is rarely a bad thing.
Unless we wish to completely detach from the technology (which it seems most don’t), the idea of escaping these advancements is unrealistic. Further, the idea of using them to our advantage may be critical to our potential or even our survival.