The following video gives a short preview of the Funzing talk on “Assessing the risks from super-intelligent AI” that I’ll be giving shortly:
- In Cardiff on Monday 26th November – see https://uk.funzing.com/funz/musk-zuckerberg-the-risks-of-super-a-i-cardiff-19657
- Other dates to be confirmed
Note: the music in this video is “Berlin Approval” from Jukedeck, a company that is “building tools that use cutting-edge musical artificial intelligence to assist creativity”. Create your own at http://jukedeck.com.
Transcript of the video:
Welcome. My name is David Wood, and I’d like to tell you about a talk I give for Funzing.
This talk looks at the potential rapid increase in the ability of Artificial Intelligence, also known as AI.
AI is everywhere nowadays, and it is, rightly, getting a lot of attention. But the AI of a few short years in the future could be MUCH more powerful than today’s AI. Is that going to be a good thing, or a bad thing?
Some people, like the entrepreneur Elon Musk, or the physicist Stephen Hawking, say we should be very worried about the growth of super artificial intelligence. It could be the worst thing that ever happened to humanity, they say. Without anyone intending it, we could all become the victims of some horrible bugs or design flaws in super artificial intelligence. You may have heard of the “blue screen of death”, when Windows crashes. Well, we could all be headed to some kind of “blue screen of megadeath”.
Other people, like the Facebook founder Mark Zuckerberg, say that it’s “irresponsible” to worry about the growth of super AI. Let’s hurry up and build better AI, they say, so we can use that super AI to solve major outstanding human problems like cancer, climate change, and economic inequality.
A third group of people say that discussing the rise of super AI is a distraction, and that it's premature to do so now. It's nothing we need to think about any time soon, they say. Instead, there are more pressing short-term issues that deserve our attention, like hidden biases in today's AI algorithms, or the need to retrain people for new jobs as automation spreads.
In my talk, I’ll be helping you to understand the strengths and weaknesses of all three of these points of view. I’ll give reasons why, in as little as ten years, we could, perhaps, reach a super AI that goes way beyond human capability in every respect. I’ll describe five ways in which that super AI could go disastrously wrong, due to a lack of sufficient forethought and coordination about safety. And I’ll be reviewing some practical initiatives for how we can increase the chance of the growth of super AI being a very positive development for humanity, rather than a very negative one.
People who have seen my talk before have said that it’s easy to understand, it’s engaging, it’s fascinating, and it provides “much to think about”.
What makes my approach different from that of others who speak on this subject is the wide perspective I can apply. This comes from the twenty-five years in which I was at the heart of the mobile computing and smartphone industries, during which time I saw at close hand the issues with developing and controlling very complicated system software. More recently, I also bring ten years of experience, as chair of London Futurists, in running meetings at which the growth of AI has often been discussed by world-leading thinkers.
I consider myself a real-world futurist: I take the human and political dimensions of technology very seriously. I also consider myself a radical futurist, since I believe that the not-so-distant future could be very different from the present. And we need to think hard about it beforehand, to decide whether or not we like that outcome.
The topic of super AI is too big and important to leave to technologists, or to business people. There are a lot of misunderstandings around, and my talk will help you see the key issues and opportunities more clearly than before. I look forward to seeing you there! Thanks for listening.