Here’s another short (<10 minute) video from me, building on one of the topics I’ve listed in the Humanity+ Agenda: the case for artificial general intelligence (AGI).
The discipline of having to fit a set of thoughts into a ten minute video is a good one!
Further reading: I’ve covered some of the same topics, in more depth, in previous blogposts, including:
For anyone who prefers to read the material as text, I append an approximate transcript.
My name is David Wood. I’m going to cover some reasons for paying more attention to Artificial General Intelligence (AGI), also known as super-human machine intelligence. This field deserves significantly more analysis, resourcing, and funding over the coming decade.
Machines with super-human levels of general intelligence will include hardware and software, as part of a network of connected intelligence. Their task will be to analyse huge amounts of data, review hypotheses about this data, discern patterns, propose new hypotheses, propose experiments which will provide valuable new data, and in this way, recommend actions to solve problems or take advantage of opportunities.
If that sounds too general, I’ll have some specific examples in a moment, but the point is to create a reasoning system that is, indeed, applicable to a wide range of problems. That’s why it’s called Artificial General Intelligence.
In this way, these machines will provide a powerful supplement to existing human reasoning.
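To make the shape of that reasoning loop a little more concrete before turning to examples, here is a deliberately toy sketch in Python. Everything in it (the linear “world”, the fixed list of candidate slopes, the simple scoring rule) is a placeholder of my own invention, chosen only to show the cycle of reviewing hypotheses, proposing experiments, gathering new data, and recommending an answer; a real AGI would face immeasurably harder versions of each step.

```python
import random

# A toy, self-contained illustration of the loop described above: review
# hypotheses against data, propose an experiment that yields new data, and
# recommend the best-supported hypothesis. All of the specifics here are
# invented placeholders, not components of any real AGI system.

def hidden_world(x):
    """Stand-in for reality: the process the system is trying to model."""
    return 3.0 * x + random.gauss(0, 0.5)

def score(slope, data):
    """Higher is better: negative squared error of a candidate hypothesis."""
    return -sum((y - slope * x) ** 2 for x, y in data)

def reasoning_loop(iterations=5):
    data = [(x, hidden_world(x)) for x in (1.0, 2.0)]       # initial observations
    candidates = [i * 0.5 for i in range(1, 13)]            # hypothesis space: slopes 0.5 to 6.0
    for step in range(iterations):
        best = max(candidates, key=lambda h: score(h, data))  # review hypotheses against the data
        print(f"step {step}: best slope so far = {best}")
        new_x = random.uniform(0.0, 10.0)                      # propose an experiment
        data.append((new_x, hidden_world(new_x)))              # run it and gather valuable new data
    return max(candidates, key=lambda h: score(h, data))       # recommend the best-supported hypothesis

print("Recommended hypothesis (slope):", reasoning_loop())
```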
Here are some of the deep human problems that could benefit from the assistance of enormous silicon super-brains:
- What uses of nanotechnology can be recommended, to safely boost the creation of healthy food?
- What are the causes of different diseases – and how can we cure them?
- Can we predict earthquakes – and even prevent them?
- Are there safe geo-engineering methods that will head off the threat of global warming, without nasty side effects?
- What changes, if any, should be made to the systems of regulating the international economy, to prevent dreadful market failures?
- Which existential risks – risks that could drastically impact human civilisation – deserve the most attention?
You get the idea. I’m sure you could add some of your own favourite questions to this list.
Some people may say that this is an unrealistic vision. So, in answer, let me spell out the factors I see as enabling this kind of super-intelligence within the next few decades. First is the accelerating pace of improvements in computer hardware.
This chart is from University of London researcher Shane Legg. On a log axis, it shows the exponentially increasing power of supercomputers, all the way from 1960 to the present day and beyond. It shows FLOPS – the number of floating point operations per second that a computer can perform. It goes all the way from kiloflops through megaflops, gigaflops, teraflops, and petaflops, and is pointing towards exaflops. If this trend continues, we’ll soon have supercomputers with at least as much computational power as a human brain, perhaps within less than 20 years.
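As a rough illustration of that extrapolation (and not a claim taken from the chart itself), the small Python sketch below projects when the trend would reach various commonly quoted estimates of brain-equivalent compute. The starting point of about one petaFLOPS around 2010, the 1.5-year doubling time, and the range of brain estimates from 10^16 to 10^18 FLOPS are all assumptions chosen for the sake of the example.

```python
import math

# Illustrative assumptions only (not figures read off Legg's chart):
START_YEAR = 2010          # leading supercomputers were around the petaflop mark
START_FLOPS = 1e15         # ~1 petaFLOPS
DOUBLING_TIME_YEARS = 1.5  # assumed sustained doubling time for peak performance
BRAIN_ESTIMATES = (1e16, 1e17, 1e18)  # commonly quoted range for brain-equivalent FLOPS

def year_reached(target_flops: float) -> float:
    """Year at which the assumed exponential trend reaches target_flops."""
    doublings_needed = math.log2(target_flops / START_FLOPS)
    return START_YEAR + doublings_needed * DOUBLING_TIME_YEARS

for estimate in BRAIN_ESTIMATES:
    print(f"{estimate:.0e} FLOPS reached around {year_reached(estimate):.0f}")
```

Under those particular assumptions, even the highest of the three estimates is reached in roughly 15 years, which is the spirit of the “within less than 20 years” remark; a slower doubling time or a higher estimate of the brain’s compute pushes the date out accordingly.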
But will this trend continue? Of course, there are often slowdowns in technological progress; skyscraper heights and the speeds of passenger aircraft are two examples. The slowdown is sometimes due to intrinsic technical difficulties, but more often it reflects a lack of sufficient customer or public interest in ever bigger or faster products. After all, the technical skills that took mankind to the moon in 1969 could have taken us to Mars long before now, if there had been sufficient continuing public interest.
Specifically, in the case of Moore’s Law for exponentially increasing hardware power, industry experts from companies like Intel state that they can foresee at least 10 more years’ continuation of this trend, and there are plenty of ideas for innovative techniques to extend it even further. It comes down to two things:
- Is there sufficient public motivation in continuing this work?
- And can some associated system integration issues be solved?
Mention of system issues brings me back to the list of factors enabling major progress with super-intelligence. Next comes improvements in software. There’s lots of scope here. There’s also additional power from networking ever larger numbers of computers together. Another factor is the ever-increasing number of people with engineering skills, around the world, who are able to contribute to this area. We have more and more graduates in relevant topics all the time. Provided they can work together constructively, the rate of progress should increase. We can also learn more about the structure of intelligence by analysing biological brains at ever finer levels of detail – by scanning and model-building. Last, but not least, we have the question of motivation.
As an example of the difference that a big surge in motivation can make, consider progress with another grand historical engineering challenge: powered flight.
This example comes from Artificial Intelligence researcher J. Storrs Hall in his book “Beyond AI”. People who had ideas about powered flight were, for centuries, regarded as cranks and madmen – a bit like people who, in our present day, have ideas about superhuman machine intelligence. Finally, after many false starts, the Wright brothers made the necessary engineering breakthroughs at the start of the last century. But even after they first flew, the field of aircraft engineering remained a sleepy backwater for five more years, while the Wright brothers kept quiet about their work and secured patent protection. They then gave some sensational public demos in 1908, in Paris and in America. Overnight, aviation went from a screwball hobby to the rage of the age, and kept that status for decades. Huge public interest drove remarkable developments. It will be the same with demonstrated breakthroughs in artificial general intelligence.
Indeed, the motivation for studying artificial intelligence is growing all the time. In addition to the deep human problems I mentioned earlier, we have a range of commercially-significant motivations that will drive business interest in this area. This includes ongoing improvements in search, language translation, intelligent user interfaces, games design, and spam detection systems – where there’s already a rapid “arms race” between writers of ever more intelligent “bots” and people who seek to detect and neutralise these bots.
AGI is also commercially important to reduce costs from support call systems, and to make robots more appealing in a wide variety of contexts. Some people will be motivated to study AGI for more philosophical reasons, such as to research ideas about minds and consciousness, to explore the possibility of uploading human consciousness into computer systems, and for the sheer joy of creating new life forms. Last, there’s also the powerful driver that if you think a competitor may be near to a breakthrough in this area, you’re more likely to redouble your efforts. That adds up to a lot of motivation.
To put this on a diagram:
- We have increasing awareness of human-level reasons for developing AGI.
- We also have maturing sub-components for AGI, including improved algorithms, improved models of the mind, and improved hardware.
- With the Internet and open collaboration, we have an improved support infrastructure for AGI research.
- Then, as mentioned before, we have powerful commercial motivations.
- Adding everything up, we should see more and more people working in this space.
- And the field should see rapid progress in the coming decade.
An increased focus on Artificial General Intelligence is part of what I’m calling the Humanity+ Agenda. This is a set of 20 inter-linked priority areas for the next decade, spread over five themes: Health+, Education+, Technology+, Society+, and Humanity+. Progress in the various areas should reinforce and support progress in other areas.
I’ve listed Artificial General Intelligence as part of the project to substantially improve our ability to reason and learn: Education+. One factor that strongly feeds into AGI is improvements in ICT, including ongoing improvements in both hardware and software. If you’re not sure what to study or which field to work in, ICT should be high on your list of fields to consider. You can also consider the broader topic of helping to publicise information about accelerating technology, so that more and more people become aware of the associated opportunities, risks, context, and options. To be clear, there are risks as well as opportunities in all these areas. Artificial General Intelligence could have huge downsides as well as huge upsides, if not managed wisely. But that’s a topic for another day.
In the meantime, I eagerly look forward to working with AGIs to help address all of the top priorities listed as part of the Humanity+ Agenda.
