Hello, I'm Alex Hughes and this is the Instant Genius Podcast, a bite-sized masterclass from BBC Science Focus magazine. Artificial intelligence is booming. It can be found everywhere you look. But what happens when we reach the end? Will AI take over our jobs? Will humans live in a world of leisure? And how would we cope with such a technology-dependent world? We spoke to Nick Bostrom, Oxford University professor, Director of the Future of Humanity Institute, and author of the new book Deep Utopia: Life and Meaning in a Solved World, to find out more.
Artificial intelligence has really blown up in the past couple of years. Do you think we're approaching a world of symbiotic living with AI? Or is that more of a science-fiction idea? I think it will continue to accelerate, even with all the recent coverage. I still think people haven't really woken up to what's coming down the pike. The world will be transformed. Now, whether this would be for the better or for the worse remains to be determined.
So in the new book, I consider the case where it goes well. What kind of world do we end up with in that case? And what will be the role for us human beings in such a radically transformed world? And if things do go well and we end up in this world, what do you see it looking like? Well, I think the development of machine superintelligence will be the last invention we ever need to make. If you think about it, if we really did have machines that were better at all cognitive labour than the human brain is, then that would include, in particular, the task of making further inventions.
And so what it would mean is a kind of telescoping of the future. All these physically possible technologies that we can imagine, that maybe we could develop if we had 20,000 years: cures for ageing, space colonies, perfect virtual reality, uploading minds into computers, all kinds of science-fiction-like stuff. None of that violates any law of physics; it's just extremely hard to actually get it to work. But all of those things, with machine superintelligence doing the research and development, could happen in short order after the development of machine superintelligence.
So this long-term future might happen within a few years after you have machine superintelligence. So what I think we'll then have is a condition of technological maturity, a condition where we have developed all those technologies that we know to be physically possible. And that would be a very different condition for human beings to inhabit. And we can go into that in more detail. But it wouldn't just be another mobile internet or another solar panel revolution, like all these things that people get really excited about and hype up as the next big thing.
I think this would be qualitatively different. And if you wanted to find some parallel in the past, it would be more akin to the emergence of life on planet Earth, or the first evolution of brains, something like that. I was going to make the comparison to the internet and ask how you feel it compares to that. It's not just AI, but all of these new technologies, like quantum computing, for example, that come with it. Where does that put humans? Do we then just become these people of leisure?
Yeah, basically I think then pretty much all human jobs could be automated with mature technology. The exceptions would primarily be ones where consumers have a direct preference that the job be done by a human. So just as right now, sometimes consumers might pay a premium because of the way that some object was produced. Like a trinket, if it's made by some particularly favoured group or your favorite artist, or it's an indigenous handicraft or something like that: even though the object itself is kind of indistinguishable from something made in a sweatshop in China, we might still pay more for it because we care about the process of origination.
So those would be potential areas where there might still be demand for human work. There are also certain roles, like a cleric: maybe people just want the wedding ceremony officiated by a human being rather than a robot, even if the robot could say all the right things. We might prefer to watch human athletes compete, even if the robots could run faster or box harder or whatever. But setting aside these exceptions, I think all functionally defined tasks would be done more cheaply, more efficiently and to a higher standard by machines in this condition.
So do you think it is more the human side of the world, you know, producing a piece of music or offering a human touch to something, that's what will stay? Well, I think even music is something that AI will be able to compose and perform to a higher standard than humans at technological maturity. But there are aspects of this where we might still value the human touch. If you just wanted to listen to the sound of music, it's really cheap: you just stream it or whatever, and for pennies you can listen to any music performed by the best human artists today, right? But people still pay hundreds of dollars for concert tickets to hear a live performance. Maybe even an inferior one: maybe you want to go to the local symphony orchestra to hear your Beethoven performed. And so there's this additional element, the event happening, being together with other people, forming a connection, that people might value and that would not automatically be rendered in a superior way by AI.
Although it is interesting, and I think that at technological maturity, these artificial minds that we build could also be built such that they can experience emotions and have conscious experiences just like biological brains have. So even if a consumer wanted, say, the performer to not just play the right notes in a way that sounds perfect, but also to actually experience the music as they are performing it, that too might be something that, at technological maturity, machines could do. Although in that case, it's not clear whether machines is the right word to describe them, because we think of some sort of insentient mechanical device, right? Whereas here we are really building minds. So it actually goes quite deep once you start to think it through.
So the first thing is, well, you could automate a lot of stuff, so there would be fewer jobs for humans. We wouldn't need to go into the office every day, or whatever our jobs consist of. But there are a lot of other tasks as well that we currently have to perform just to get by in life, for all kinds of instrumental reasons. You need to go to the grocery store in order to get the food that you can then spend time cooking in order to have a meal to eat, right? But those things could also be automated. Just as a rich person today could hire somebody to go to the grocery store and do their cooking, this could be something everybody could afford, by having a robot prepare their meals to perfection. And you can go through the list. Right now maybe you have to exert yourself by going to the gym if you want to stay fit; the only way to stay fit is to put in the effort to work out. But at technological maturity, you could pop a pill that would produce exactly the same outcome. And so a lot of instrumental effort would go. Actually, if you think about it, almost all we do throughout the day currently is doing something in order to achieve something else.
And practically all of that, at technological maturity, would go away. There would be no point if the only reason for doing it was that you were trying to obtain some other thing, because if you start to think through these things case by case, you realize that there would be a shortcut to getting the thing that wouldn't involve you having to spend the time and effort. There's a very classic trope in sci-fi where you reach utopia, everything is done for us, and people end up disillusioned with a lack of things to do. No work, no real care for anything. Is there a risk of that happening here? Yeah. So if you really think things through to the logical conclusion, I think you do end up in this, I call it a post-instrumental condition. And it really raises quite profound questions ultimately about the meaning of life, about what we ultimately place value on, which I'm exploring in this book.
And I mean, I think ultimately it looks like it could be a wonderful thing, but not in an unproblematic way. It does force us to confront these ultimate value questions. In a sense, these are questions philosophers have wrestled with, you know, for thousands of years. You could think of it as a thought experiment, although I think there is a real chance we could actually end up living in this condition in a relatively short number of years, depending on how things unfold, in which case we will actually be forced to answer these questions. So the book is less about trying to convince you of some particular conclusion and more about encouraging the reader to ask certain questions, or to think about things that are otherwise hard to get into focus. And yeah, a lot of the structure that currently constrains what we can do would drop away in this kind of condition.
So much of what structures our lives is this kind of need for various instrumental forms of activity. We need to do this to do that and to get that. If you remove all of that, there is a real question of what remains of us humans. It's almost like, you know, an insect has an exoskeleton that kind of holds it together, and all the squishy soft parts are constrained by this. And so I'm thinking of these instrumental constraints that we're currently operating under as a kind of spiritual exoskeleton for the human soul. And if you imagine all of that removed, how could we avoid just becoming an amorphous blob? A kind of drugged-up, pleasure-maximising blob. And I think that's a really interesting thing to think about. And I'm kind of hopeful about us finding something really wonderful and beautiful in that space. But I think it does require this kind of confrontation with our ultimate values.
If we reach that point where we rethink our lives and it ends up being positive, we're no longer working to live, we're not overrun by tasks, which, you know, I think people describe as being quite limiting. What's your view of how we then rethink our lives? There are so many algorithms in our lives day to day. How do you even begin to approach free will when everything is decided for you? Well, you could decide on things yourself, but you might get inferior results. So right now, if you want to get the thing that you actually most like and that would suit you the most, just some shirt or whatever, you might have to spend time and effort to browse different stores and try things on. And if you really want something that is perfect for you, there's maybe no choice but to do that. But in this condition, it might be that that would result in an inferior garment.
Maybe you'd be best off just leaving that to the AI recommender system: not just to recommend things, but to just order them for you. Following along with the algorithm might give you superior results. So you face this question of whether you value this investment in going shopping for clothes, or whatever, because it gives you better outcomes, or whether you intrinsically value the process itself, in which case it may not matter that the result is worse in terms of objective or functional criteria. Right now these go together. Whether you value the process of shopping for its own sake, or whether you value it at least in part because it produces some meaningful outcome for you, both point in the same direction: you still have to go shopping, whether you want it for one reason or for another. But in this condition of technological maturity, in a post-instrumental world, these come apart. If you only do it because you value the outcome, there would be absolutely no reason for you to continue doing it. So it's a kind of acid that dissolves a lot of assumptions, and you can then separate out the components.
So in that sense it is a kind of laboratory for thinking about our values as well. You can begin to distinguish them, and that can maybe also cast a new light on the current condition, where all of these things are muddled together: you might be able to see more clearly what you value by first taking things to this extreme condition. It's almost like a particle collider, like the physicists have, where they create unusual conditions, extreme energies, billions of electron volts, smashing particles together, in order then to see what happens in those special circumstances. Things that are normally clumped together come apart, and then you can analyse that, and then you realize that even under normal conditions all those same parts are still there; it's just that they are not distinguishable because they are sort of clumped together. So studying these general principles by considering them under extreme conditions is often a useful analytic technique.
If we end up in this world where AI is a huge part of what we do and it decides a lot of things, what do we do at this stage to prepare for that? What regulations and laws do we need to consider now? So this condition that the book explores is a solved world, where basically everything has gone well and we have this almost magical technology. Currently we are definitely not in a solved world. There are a lot of problems that still need solving.
So how to get from here to there is a big practical question. The book just brackets that, but of course it is something I have a lot of thoughts about. With AI in particular, there is the big problem of how to align these increasingly powerful AI systems that we are developing: how to develop algorithms and control methods so that even as they become more and more capable, we can still actually steer them and get them to do what we want and to be safe. This is something that used to be very neglected. I started getting interested in it in the 90s, and my previous book, Superintelligence, was trying to bring attention to this alignment problem. Now it has become an established field.
All the leading AI labs have groups working on scalable AI alignment, and in the last couple of years even top-level policymakers have started to take an interest in AI and AGI, artificial general intelligence, and the kind of potential risks associated with that. So that would be one big thing that we need to sort out, and that's primarily a technical problem. Then there is a governance problem. If you imagine we have this increasingly powerful AI technology, and we are technically able to point it wherever we want, then who decides where it gets pointed? What are the purposes for which this powerful technology gets used? It's really a general-purpose technology, in that sense like electricity or internal combustion, something that can be used for good and bad, and even more general than those technologies.
So how can we tilt the balance towards positive usage, rather than using this technology to wage war against each other, or to oppress each other, or for all kinds of other harmful purposes? How can we primarily shift it towards positive uses like medicine, clean energy, better entertainment, all kinds of things like that? So that's more of a political challenge. And then I think there is a third big area of challenge, which has as yet received much less attention but I think will become increasingly important. So if the first area is how can we make sure the AI doesn't harm us, and the second is how can we make sure we don't harm each other using AIs.
The third is how can we make sure that we don't harm the AIs. This is less of an issue if you have simple AIs that are just mindless, simple algorithms, like a pocket calculator. Maybe it's sad for the owner if somebody smashes your pocket calculator, but it doesn't matter to the pocket calculator itself. But as we build increasingly sophisticated digital minds, I think some of those will attain various degrees of moral status, whether because they become sentient, capable of having conscious awareness and suffering pain or discomfort, or because they have other attributes that perhaps underpin moral status, like having a conception of self, having preferences, being able to engage in reciprocal social relationships, and so forth.
I think then it becomes important that the future is one where things also go well for these digital minds that we create, and that we don't replicate in the digital realm, say, the current misuses and abuses of sentient animals in animal agriculture, pigs and other creatures that I think we are not currently treating the way we should. And AI systems could eventually become even more advanced and sophisticated than animals, eventually human-like and maybe even beyond human-like. And since the future might well ultimately contain more digital minds than biological minds, that is a really key thing to ensure as well. So those would be three big areas related to existential risk with AI. And then of course there are many, many other, more here-and-now issues: making sure people get an income, preventing misinformation, privacy violations, discrimination, all kinds of things that are more continuous with what we already have to worry about and struggle with in society.
There is a long list of things that we need to work out to get to this point. One that I'm intrigued to get your opinion on is energy: how do we get enough energy to run all of these AI systems, computers and training systems? Yeah, there will probably be increased demand for energy. I mean, with the electrification of transportation and self-driving cars and so on, more and more parts of the world seem to run on electricity, and AI will contribute to that. Of course, ultimately it's a technological problem to develop cheaper and greener forms of energy, which certainly AIs would be able to do in this scenario, right? Like if they can run the solar panel manufacturing facilities and develop the next generation of clean tech, etc.
Ultimately we have space, which is where most of the stuff is. So if we think of a technologically mature civilization, in the long term I think it would be very odd to imagine it confined to planet Earth, which is just one little crumb in an almost endless space of resources and stars and quasars and galaxies. And so there could be a huge expansion of human/AI civilization outwards, ultimately covering a large chunk of our future light cone.
There's a lot of people who are quite rightfully worried about the future of AI. What would you say to comfort them about what the future might hold? Well, I mean, I'm not really trying to do that. I think of my role more as trying to understand what is going on and what might happen, and whether there are things we can do to increase the odds of a positive outcome. It's not really a self-help book in that sense, or a kind of feel-good book. But I do think there's a lot we don't understand about how things will go.
I mean, we've never had a machine intelligence transition before, right? We've never inhabited a technologically mature world. We've never developed superintelligence. And there's kind of a limit to how much we can predict about these things, and so as long as there is ignorance, there is hope. It could turn out to go better than we fear, and I think the jury's still very much out. Most of the uncertainty, I would think, is uncertainty about the intrinsic difficulty of the challenge we face. There is also some uncertainty about the degree to which we will get our act together. Certainly if we make a good effort, if we do the research properly on alignment, and good people work to ensure we get smart and compassionate governance solutions, etc., that shifts the odds to the favourable side. But there is still the big unknown, which is that we don't really know how hard the challenge is.
Is it the case that any civilization that makes some sort of at least half-assed effort basically kind of self-corrects, and eventually you get a good outcome? Or are we kind of doomed no matter what, because this is a challenge, you know, five levels above what we humans are capable of? We can try, but we don't really know where on that spectrum this challenge sits. So yeah, it's a weird situation to be in. If this picture of the world is correct, where we are relatively close to this critical juncture: human history has been going on for thousands and thousands of years, right? So many people have lived and died, you know, most of them hunter-gatherers or farmers. And that right now, you and I should be sitting just right next to this big fulcrum of cosmic history, that's kind of odd, if that's the way it is.
Thank you for listening to this episode of Instant Genius. That was Nick Bostrom on the future of AI. The Instant Genius podcast is brought to you by the team at BBC Science Focus magazine, which you can find on sale now in supermarkets and newsagents, as well as on your preferred app store. Alternatively, you can come and find us online at sciencefocus.com.