Why Machines will become Hyper-Intelligent before Humans do
These notes detail some of the reasons why it will be much easier to build hyper-intelligent non-organic machines than to dramatically enhance individual human intelligence – why SIs (super intelligences) will appear before radical human IA (intelligence augmentation).
In this context, super/ hyper intelligence refers to general intelligence far exceeding current top human abilities in (at least) the areas of abstract thinking, innovation, and practical problem solving; systems clever enough to understand human language and affairs, science and technology, and – crucially – to radically enhance their own design.
Preamble - Setting the context:
(for a wider overview see: Advanced Intelligence: SI, IA, and the Global Brain)
Intelligence, versus knowledge and tools (embedded knowledge/ ability) – Intelligence is a measure of what can be accomplished utilizing given knowledge and tools. It includes the ability to develop novel ideas and tools from an existing base. Abstract, inductive thought and creativity are crucial aspects of this.
Individual versus collective smarts/ Maximum versus aggregate intelligence – Individual, maximum intelligence is an overall limiting factor. It imposes limits on the maximum complexity of problems that can be effectively tackled. Quantity ultimately cannot make up for quality: a million monkeys cannot solve problems that individual humans can - a million porn or Pokemon messages will not cure cancer (see note 1). This is not to say that group interaction and collaboration add nothing - they clearly do increase overall smarts.
Rates of change in individual human intelligence, machine 'intelligence', and overall human ability -
- Maximum individual intelligence has remained about constant over the past two thousand years: Aristotle - or someone as clever as he - would have formulated a 'Theory of Relativity' 2000 years ago, given Einstein's knowledge base.
- Overall human knowledge and ability - smarts - has been accumulating, and continues to increase, exponentially
- Machine intelligence is only just starting to emerge. The last fifty years have seen exponential improvements. I predict that machine progress will become hyper-exponential once near human-level intelligence is reached (causing a self-improvement feedback loop – more details below)
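The feedback-loop claim can be made concrete with a toy model. Everything numeric here is an illustrative assumption invented for the sketch (the yearly growth factor and the coupling constant are not taken from any data): ordinary progress multiplies capability by a fixed factor each year, while a self-improving system's growth factor itself rises with its current capability.

```python
# Toy model of the self-improvement feedback loop described above.
# All constants are illustrative assumptions, not measured data.

def ordinary_progress(capability, years, rate=1.5):
    """Plain exponential: capability grows by a constant factor each year."""
    for _ in range(years):
        capability *= rate
    return capability

def self_improving(capability, years, coupling=0.05):
    """Hyper-exponential: the yearly growth factor rises with capability,
    because the system helps redesign itself."""
    for _ in range(years):
        capability *= (1.5 + coupling * capability)
    return capability

if __name__ == "__main__":
    print(ordinary_progress(1.0, 10))  # steady exponential growth
    print(self_improving(1.0, 10))     # growth rate accelerates over time
```

Under these assumptions the self-improving curve overtakes the plain exponential within a few iterations; the point is the shape of the curve, not the particular numbers.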
Will machines remain extensions of our capabilities – tools, under our control? It seems unlikely. Illustration: Imagine that we engineered or bred animals for maximum intelligence. To the extent that we succeeded, they would likely disagree with some of our goals and agendas. For example, one can easily imagine an 'Animal Liberation' movement.
Would SIs at least be supportive of our goals? If intelligence in machines does increase much faster than in humans, then their relationship to us (positive, negative, or neutral) will determine whether we will eventually be able to catch up/ integrate. However, there would be some lag (though not necessarily very long) between the time machines become smarter than us, and when they become smart enough to uplift us – dramatically upgrading brains is a harder problem than achieving nanotech or curing ageing.
This leads to the questions: Is it in fact easier to dramatically increase machine or human intelligence? Can technological advances in intelligence be integrated into humans as fast as they can be applied to machines? Naturally, underlying this discussion is the assumption that SI is possible.
Advantages of working with designed instead of evolved systems - machines, rather than brains:
Engineered solutions are much easier to understand, modify, enhance, and debug. Also, we don't have to limit ourselves to the single solution to intelligence created by a blind, unconscious Watchmaker with his own agenda (survival) -
- We can capitalize on our intellectual/ engineering strengths instead of struggling with nature's designs (Bird/ plane analogy 1: thrust Vs flapping wings)
- Some specific engineering advantages:
- A designed AI, unlike the brain, has comprehensible design documentation
- Can be highly modular – far less need for components with multiple functions, or for high inter-dependency between systems
- Has a more flow-chart like, logical design - evolution has no foresight
- Can be designed with debugging aids – evolution didn't need that
- Machines have neither the evolutionary baggage, nor additional complexity for epigenesis, reproduction, and integrated self-repair
Advantages of working with artificial systems instead of humans - hardware/ software, rather than wetware:
Artificial systems development offers more flexibility, ease of design, speed, scalability, and better financial return -
- Machine design offers a much wider range of possible materials and techniques than a biological substrate – but including everything we learn from nature/ brains (Bird/ plane analogy 2: helicopters & jet-engines)
- Some things are easy in artificial systems, hard or impossible in humans:
- Artificial systems can be re-booted and hacked. It is practically possible to try millions of different designs
- Test runs are not limited by biological/ neuronal learning and processing speeds
- Unlike computers, brains have no simple speed/ capacity upgrade path
- Knowledge acquisition is much simpler for machines - almost all human knowledge is (or soon will be) available in native machine format on the Internet. Note that machines will have access to practically all of human knowledge and intelligence! Over the Internet, they will be able to get any information or help that we can get.
- Artificial systems can easily be duplicated and mass-produced. Existing data and skills can be loaded instantaneously
- Machines offer 24/7 operation and, unlike humans, will do whatever we program/ ask them to do (up to a point!)
- Some crucial aspects of intelligence are much easier to achieve and improve in machines than humans: focus, concentration, logic, statistics, keeping track of logic paths, instant access to large databases, deleting bad info (data & procedures)....
- Market forces (price/ performance) – It is ultimately much cheaper to build machines to do advanced thinking than to upgrade, train and motivate humans. The market will grow seamlessly from very specific, repetitive (mainly mechanical) applications, to domain-specific cognitive tasks, to flexible, generally intelligent systems.
- Seed AI design: A machine can inherently be designed to more easily understand and improve its own functioning – i.e. bootstrapping intelligence
Capitalizing on a massive hardware overhang:
There are several ways in which current and near-term processing capacity can be leveraged much more effectively (millions of times), thus mitigating apparent performance limitations. Capitalizing on this overhang will benefit machine development much more than human augmentation efforts (scalability, self-improvement, etc) -
- Optimizing systems for AI requirements, and drastically reducing inefficiencies/ 'bloat':
- Many current designs are open to huge improvements in opsys and language efficiency (x 100 or more)
- Improving application software efficiency (x 100 or more) – many are built to be 'good enough' (price/ performance, time-to-market, legacy considerations, etc.)
- Specialized AI hardware/ software designs can offer many orders of magnitude improvement (eg. FPGAs, or other massively parallel systems)
- Only a tiny fraction of the computing power available in the world is currently used for ('real') AI research. Once AI achieves even a mediocre level of general intelligence, there will be substantial financial incentives to make much more capacity available. Idle (Internet) computer capacity is another possible source.
- Using the right conceptual design – Once we discover the key feature(s) required for general intelligence, we can design enormously more effective programs, and also start to capitalize on this massive hardware overhang (Bird/ plane analogy 3: shape of wing crucial to produce lift). Note that even without conceptual breakthroughs or magic bullets there will be positive self-improvement feedback (computers helping to design themselves) - progress will just be slower.
- Currently, only a handful of researchers in the world are actually working on designing and building systems with general intelligence (see note 2). Once this trickle turns into a flood, a lot of additional effective processing power will become available (one way or another)
Illustration: Imagine that a widely accepted, workable theory/ design for AI existed, and that there was a particularly urgent need for it (averting an imminent meteor strike): I am sure that we could 'find' millions of times more effective processing power than is currently employed for AI (by the means listed above). I predict that much better theories will soon emerge, and that commercial forces alone will provide as much driving force as an imminent disaster would.
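The "millions of times" claim follows from multiplying the individual overhang factors listed above. The x100 figures for system and application software are the essay's own ballpark numbers; the hardware and capacity multipliers below are placeholder assumptions standing in for "many orders of magnitude" and "much more capacity":

```python
# Rough compounding of the hardware-overhang factors listed above.
# The first two multipliers are the essay's own figures ("x 100 or more");
# the last two are placeholder assumptions for illustration.

factors = {
    "opsys/language efficiency": 100,   # essay: "x 100 or more"
    "application efficiency":    100,   # essay: "x 100 or more"
    "specialized AI hardware":   1000,  # assumed: "many orders of magnitude"
    "capacity shifted to AI":    100,   # assumed: more machines devoted to AI
}

total = 1
for name, factor in factors.items():
    total *= factor
    print(f"{name:28s} x{factor}")

print(f"combined effective gain:     x{total:,}")  # 100*100*1000*100 = 10**9
```

Because the factors are independent and multiplicative, even conservative per-factor estimates compound into the "millions of times" figure the illustration invokes.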
Some additional problems with radical human intelligence enhancements/ upgrades:
- The human brain has already been highly optimized over thousands of generations – difficult to improve further.
- Substantial difficulties with experimentation, development, and implementation:
- Practical/ technical - see above: re-boot, hack, duplicate, increase speed, etc.
- Social opposition – eg 'Frankenfood', human genetic engineering
- Regulatory – eg. FDA restrictions, costs, delays - government bans
- Brain design/ structure does not lend itself well to integrating with existing digital data and communications (data formats, serial communication).
- Working with biological cell/ axon/ receptor structures essentially requires advanced biological nanotech. More generally, the level of computer technology needed to overcome IA's technical problems will already be at or near SI level – i.e. SI will be at the hard-take-off point. In other words: computers smart enough to help us dramatically enhance human brains will be more than smart enough to radically enhance themselves. A related point is that to the extent that we figure out the workings of the brain, this knowledge can be (and is) immediately used to improve artificial systems.
Notes -
1) - A similar constraint applies to machines: Maximum intelligence of individual designs will ultimately determine the level of overall smarts in a network. Furthermore, tightly coupled machine clusters/ units are likely to be most intelligent because of the importance of coordinated, high-bandwidth meta-cognition and sub-process communication.
2) - Who is working on 'real' AI? Of all the people working in the field called 'AI'...
80% don't believe in the concept of General Intelligence (but instead, in a large collection of specific skills & knowledge).
Of those that do, 80% don't believe that (super) human-level intelligence is possible - either ever, or for a long, long time.
Of those that do, 80% work on domain specific AI projects for commercial or academic-political reasons (results are a lot quicker).
Of those left, 80% have a poor conceptual framework....
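The note's 80% filter chain can be made explicit as arithmetic: each step keeps only 20% of the remainder, so four steps leave 0.2^4 of the original field.

```python
# Note 2's 80% filter chain, made explicit: starting from everyone in the
# field called 'AI', each step keeps only 20% of the remainder.

field = 1.0
steps = [
    "believe in General Intelligence",
    "believe (super) human-level AI is possible",
    "work on general rather than domain-specific AI",
    "have a good conceptual framework",
]
for step in steps:
    field *= 0.20  # 80% drop out at each step
    print(f"after '{step}': {field:.2%} of the field remains")

# 0.2 ** 4 = 0.0016, i.e. roughly 1 in 600 of the field
```

So by the note's own figures, only about 0.16% of the field - roughly 1 in 600 people - is left working on 'real' AI with a sound framework.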
Peter Voss, June 2001