The promise of AI with Demis Hassabis

**DeepMind Podcast Article: Demis Hassabis Discusses Artificial General Intelligence (AGI), AlphaFold, and the Future of AI**

---

### Introduction to the Episode

Welcome back to the final episode of this season of the DeepMind podcast. Hosted by Professor Hannah Fry, this episode offers a deep dive into the world of artificial intelligence (AI) with an exclusive interview with Demis Hassabis, CEO and co-founder of DeepMind. Over the course of the series, we’ve explored a wide range of topics—from protein-folding AI systems to sarcastic language models, sauntering robots, synthetic voices, and much more. Now, as we conclude the season, we have the privilege of hearing from one of the leading figures in AI research about his vision for the future, including the ambitious goal of achieving Artificial General Intelligence (AGI).

The episode begins with a visit to DeepMind’s new premises in London’s King’s Cross, which sets the stage for an engaging conversation. The building is described as sleek and modern, featuring a double helix staircase, fiddle leaf trees in practically every corner, and stylish fluted glass between offices. Outside Demis’s office hangs a framed chessboard signed by Garry Kasparov, the legendary chess player beaten by IBM’s Deep Blue, with the inscription “For the AlphaGo team, keep conquering new heights”: a nod to AlphaGo’s famous victory over Lee Sedol at the game of Go. This artifact serves as a reminder of the company’s achievements and its ambitious future.

---

### The Vision for AGI

Demis Hassabis shares his long-term vision for building AGI, which has been deeply ingrained in DeepMind’s DNA from the start. He addresses the skepticism surrounding AGI, acknowledging that some believe it is either impossible or a distraction from practical AI applications. However, he remains confident in its feasibility, citing the human brain as an existence proof. He argues that if we define AGI as a system capable of performing a wide range of cognitive tasks at human levels, it must be achievable: there is no evidence of anything non-computable in the brain, so its functions should be reproducible on what is effectively a Turing machine.

Hassabis also clarifies that while AGI is a long-term goal, it doesn’t need to happen overnight. Instead, progress toward AGI will likely involve incremental advancements, with practical applications emerging along the way. He envisions AI systems becoming increasingly capable of tackling diverse tasks, eventually leading to a transformation in how we approach challenges like climate change, healthcare, and energy production.

---

### Recognizing AGI: What It Might Look Like

The conversation turns to how we might recognize AGI when it arrives. Hassabis speculates that early signs could include AI systems demonstrating creativity, the ability to solve novel problems, and even initiating their own projects. He emphasizes that this journey will involve collaboration with cognitive scientists to verify that AI systems possess all the necessary human-like capabilities, including creativity, emotion, imagination, and memory.

Hassabis humorously reflects on the idea of a “eureka moment” when AGI becomes sentient. However, he predicts that the development will be more incremental, with no single defining day. Instead, we’ll likely notice subtle changes in how AI systems interact with the world and solve problems.

---

### The Role of Language Models and Consciousness

The discussion delves into the capabilities and limitations of current language models. Hassabis reveals that he occasionally engages with DeepMind’s own large language model late at night, probing its understanding of basic real-world situations. He finds it fascinating—and slightly concerning—how the model struggles with seemingly simple tasks, such as tracking a ball thrown between characters in a story.
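
To make this kind of probing concrete, here is a minimal sketch of the entity-tracking test Hassabis describes. It is written against the open-source Hugging Face `transformers` library, with a small public model standing in for DeepMind’s internal system (which is not public); the library, model choice, and prompt wording are illustrative assumptions, not the actual setup he used.

```python
# A minimal entity-tracking probe, assuming the Hugging Face `transformers`
# library; gpt2 is a small public stand-in for DeepMind's internal model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

story = (
    "Alice threw the ball to Bob. Bob threw it back to Alice. "
    "Alice threw it over the wall, and Bob went and got it. "
    "Question: Who has the ball now? Answer:"
)

# Greedy decoding keeps the result reproducible; the pipeline returns the
# prompt plus continuation, so slice the prompt off before checking.
output = generator(story, max_new_tokens=5, do_sample=False)[0]["generated_text"]
answer = output[len(story):].strip()

print(f"Model answered: {answer!r}")
print("Tracked the ball" if "Bob" in answer else "Confused (expected Bob)")
```

Small text-only models routinely fail this kind of check, which is exactly the gap between knowing words and knowing the world that Hassabis is pointing at.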

Hassabis also explores the possibility of consciousness emerging naturally from AI systems or whether it would need to be intentionally created. Drawing parallels with pets like dogs, who exhibit some level of consciousness without full intelligence, he suggests that intelligence and consciousness may be “double dissociable,” meaning they can exist independently of one another. However, he remains uncertain about this relationship and believes that building AI could help us better understand the nature of consciousness.

---

### Ethical Considerations and Rights for AI

The conversation shifts to ethical considerations, particularly whether conscious AI systems should have rights or be treated as tools. Hassabis expresses concerns about the implications of free will in AI, noting that it raises questions about safety and control. He advocates for a cautious approach, emphasizing that AI should initially be viewed as tools rather than sentient beings with goals.

Hassabis also highlights the importance of societal collaboration in defining the value systems and ethical frameworks for AI. He believes that philosophers, sociologists, and psychologists will play crucial roles in shaping these guidelines, as many concepts like “happiness” and “morality” remain subjective and complex.

---

### AlphaFold and Its Impact on Science

The episode revisits DeepMind’s groundbreaking project, AlphaFold, which has transformed protein structure prediction. Hassabis explains that DeepMind’s plan from the beginning was to prove its general learning methods, reinforcement learning combined with deep learning, on the most complex games available, such as Go and StarCraft, and then turn those methods to real-world scientific problems; AlphaFold is the first major example of that strategy. Its success has awakened the scientific community to the potential of AI to accelerate discovery across fields.

Hassabis discusses his hope that AlphaFold will mark the beginning of a new era in biology, where computational methods are used to model biological systems and accelerate drug discovery. He also touches on the broader implications for future pandemics, suggesting that AI could help identify and combat emerging pathogens by predicting protein structures and proposing potential drug compounds.
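
For readers who want to see what “predicted protein structures” looks like in practice, here is a minimal sketch that fetches an AlphaFold prediction from the public AlphaFold Protein Structure Database hosted at EMBL-EBI. The endpoint path and JSON field names are assumptions based on the database’s public API and should be verified against its current documentation.

```python
# A minimal sketch: fetch AlphaFold's predicted structure metadata for a
# protein from the public AlphaFold Protein Structure Database (EMBL-EBI).
# Endpoint path and field names are assumptions; check the live API docs.
import requests

def fetch_alphafold_prediction(uniprot_id: str) -> dict:
    """Return prediction metadata (including a PDB file URL) for an accession."""
    url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_id}"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()[0]  # the API returns a list, one entry per model

if __name__ == "__main__":
    # P69905 is human hemoglobin subunit alpha, a well-characterized example.
    entry = fetch_alphafold_prediction("P69905")
    print(entry["pdbUrl"])  # direct download link for the predicted structure
```

Downloading the referenced PDB file yields the full predicted 3D coordinates, which are the raw material for the drug-discovery workflows Hassabis describes.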

---

### Balancing Optimism with Caution

While Hassabis is optimistic about AI’s potential to solve some of humanity’s greatest challenges, he acknowledges the risks and uncertainties. He expresses particular concern about how humans might misuse AI technologies along the way. To mitigate these risks, he advocates for a societal conversation about responsible AI deployment, emphasizing the need for caution in certain applications.

Hassabis also reflects on his 20-year prediction for AGI, noting that while progress has been phenomenal, society’s readiness lags behind technological advancements. He stresses the importance of involving the broader scientific community in addressing the challenges posed by AI, comparing it to assembling a team of Avengers-like thinkers to tackle complex problems.

---

### Conclusion and Reflections

As the podcast concludes, Professor Hannah Fry reflects on how far AI has come since the end of the last season, when the conversation was about AI playing Atari games, Go, and chess. Now it is about AI making a difference in drug discovery, nuclear fusion, and understanding the genome. The episode leaves listeners with a sense of excitement about future discoveries while underscoring the need for careful consideration of ethical and societal implications.

The DeepMind podcast has been an enlightening journey, offering insights into the latest advancements in AI and the minds behind them. With contributions from series producer Dan Hardoon, production support from Jill Ateneku, editing by David Prest, sound design by Emma Barnaby, sound engineering by Nigel Appleton, and original music composed by Elainey Shaw, this season has set a high standard for thoughtful exploration of AI.

"WEBVTTKind: captionsLanguage: enwelcome back to the final episode in this season of the deep mind podcast and boy have we covered a lot of ground from protein folding ais to sarcastic language models sauntering robots synthetic voices and much more it has been quite the journey but we do have one more treat in store for you a chance to hear from deepmind ceo and co-founder demis hasabis the outcome i've always dreamed of is agi has helped us solve a lot of the big challenges facing society today be that health creating a new energy source so that's what i see as happening is a sort of amazing flourishing to the next level of humanity's potential with this very powerful technology this was my opportunity to ask demis all the things that have popped into my head during the making of the series well most things we'll see how far i can push it as luck would have it the day i sat down with demis coincided with the opening of deepmind's sparkling new premises in london's king's cross there weren't many people about yet so it felt like an exclusive preview i feel like i'm in a high-end furniture catalogue let me set the scene for you this new building is rather beautifully appointed it's got a double helix staircase running through the middle there are fiddle leaf trees in practically every corner and there are stylish fluted glass critter doors between offices and yes those meeting rooms christened after great scientists galileo ada lovelace leonardo they are all still a at feature sparkling push the boat out while sipping on my beverage of choice some memorabilia outside demise's office caught my eye a nod to alphago's famous victory over lisa dole in the game go there is sitting underneath two extremely fancy black spotlights a chessboard in a black frame and if i go over to it there's a picture of gary kasparov the legendary chess player who was beaten by deep blue the ibm computer he signed the chessboard and it says for the alphago team keep conquering new heights i mean just a chessboard designed by kasparov on the wall perfectly standard oh awesome oh we're going in hi great to see you after settling down inside demise's office i started by asking him about deepmind's long-term vision of building agi or artificial general intelligence it's an ambition that has been baked into deep mind's dna from the very beginning i think it's fair to say that there's some people in the field who don't think that agi is possible they sort of say that it's a distraction from the actual work of building practical ai systems what makes you so sure that this is something that's possible i think it comes down to the definition of agi so if we define it as a system that's able to do a wide variety of cognitive tasks to a human level that must be possible i think because the existence proof is the human brain and unless you think there's something non-computable in the brain which so far there's no evidence for then it should be possible to mimic those functions on effectively a turing machine a computer and then the second part of that which is it's a distraction from building practical systems well i mean that may be true in the sense of what you're mostly interested is in the practical systems agi itself is a big research goal and a long term one it's not going to happen anytime soon but our view is that if you try and shoot for the stars so to speak then any technologies that you sort of build on the way can be broken off in components and then applied to amazing things and so we think striving for the 
long-term ambitious research goal is the best way to create technologies that you can apply right now how will you recognize agi when you see it will you know it when you see it what i imagine is going to happen is some of these ai systems will start being able to use language and i mean they already are but better maybe you'll start collaborating with them say scientifically and i think more and more as you put them to use at different tasks slowly that portfolio will grow and then eventually we could end up it's controlling a fusion power station and eventually i think one system or one set of ideas and algorithms will be able to scale across those tasks and everything in between and then once that starts being built out there will be of course philosophical arguments about is that covering all the space of what humans can do and i think in some respects it will definitely be beyond what humans are able to do which will be exciting as long as that's done in the right way and you know there'll be cognitive scientists that will look into does it have all the cognitive capabilities we think humans have creativity what about emotion imagination memory and then there'll be the subjective feeling that these things are getting smarter but i think that's partly why this is the most exciting journey in my opinion that humans have ever embarked on which is i'm sure that trying to build agi with a sort of neuroscience inspiration is going to tell us a lot about ourselves and the human mind the way you're describing it there is though this big goal in the future that you steadily approach i'm wondering whether in your mind there's also like a day where this happens like you know how children dream of lifting the world cup have you thought about the day when you walk off walk away from the office and you're like it happened today yeah i'd have dreamed about that for a very long time i think it would be more romantic in some sense if that happened where you you know one day you're coming in and then this lump of code is just executing then the next day you come in and it sort of feels sentient to you be quite amazing from what we've seen so far it will probably be more incremental and then a threshold will be crossed but i suspect it will start feeling interesting and strange in this middle zone as we start approaching that we're not there yet i don't think none of the systems that we interact with or built have that feeling of sentience or awareness any of those things they're just kind of programs that execute albeit they learn but i could imagine that one day that could happen you know there's a few things i look out for like perhaps coming up with a truly original idea creating something new a new theory in science that ends up holding maybe coming up with its own problem that it wants to solve these kinds of things would be sort of activities that i'd be looking for on the way to maybe that big day if you're a betting man then when do you think that will be so i think that the progress so far has been pretty phenomenal i think that it's coming relatively soon in the next you know i wouldn't be super surprised the next decade or two shane said that he writes down predictions and his confidence on them and then checks back to see how well he did in the past do you do the same thing i don't do that no i um i'm not as methodical as shane so and he hasn't shown me his recent predictions i don't know where they were secretly putting them down i have to ask him it's just a draw in his hand yes exactly 
like shane legg deepmind's co-founder and chief scientist who we heard from in an earlier episode demis believes that there are certain abilities that humans have but are missing from current ai systems today's learning systems are really good at learning in messy situations so dealing with vision or intuition in go so pattern recognition they're amazing for that but we haven't yet got them satisfactorily back up to be able to use symbolic knowledge so doing mathematics or language even we have some of course language models but they don't have a deep understanding yet still of concepts that underlie language and so they can't generalize or write a novel or make something new how do you test whether say a language model has a conceptual understanding of what it's coming out with that's a hard question and something that we're all wrestling with still so we have our own large language model just like most teams in these days and it's fascinating probing it you know at three in the morning that's one of my favorite things to do is just have a have a little chat with the uh with the ai system uh sometimes but i'm generally trying to break it to see exactly this like does it really understand what you're talking about one of the things that suspected they don't understand properly is quite basic real world situations that rely on maybe experiencing physics or acting in the world because obviously these are passive language models right they just learn from reading the internet so you can say sort of things like alice threw the ball to bob ball through back to alice alice throws it over the wall bob goes and gets it who's got the ball and you know obviously in that case it's bob but it can get quite confused sometimes it'll say alice or so it'll say something random so it's those types of you know almost like a kid would understand that and it's interesting are there basic things like that that it can't get about the real world because it's all it sort of only knows it from words but it's a that in itself is a fascinating philosophical question i think what we're doing is philosophy actually in the greatest tradition of that trying to understand philosophy of mind philosophy of science when it's 3am and you're talking to a language model do you ever ask if it's an agi yeah i think i must have done that yes with varying answers but it has responded yes at some point yeah it does sometimes respond yes and you know i'm an artificial system and it knows what agi is to some level i don't think it really knows anything to be honest that would be my conclusion it knows some words no words a clever parent yes exactly for the moment at least ai systems like language models show no signs of understanding the world but could they ever go beyond this in future do you think that consciousness could emerge as a sort of natural consequence of a particular architecture or do you think that it's something that has to be intentionally created i'm not sure i suspect that intelligence and consciousness are what's called double dissociable you can have one without the other both ways my argument for that would be that if you have a pet dog for example i think they're quite clearly have some consciousness you know they seem to dream they're sort of self-aware of what they want to do but they're not you know dogs are smart but they're not that smart right and so it's my dog isn't anyway but on the other hand if you look at intelligent systems the current ones we built okay they're quite narrow but they are very good 
at say games i could easily imagine carrying on with building those types of alpha zero systems and they get more general more and more powerful but they just feel like programs so that's one path and then the other path is that it turns out consciousness is integral with intelligence so in least in biological systems they seem to both increase together so it suggests that maybe there's a correlation it could be that it's causative so it turns out if you have these general intelligence systems they automatically have to have a model of their own conscious experience personally i don't see why that's necessary so i think by building ai and deconstructing it we might actually be able to triangulate and pin down what the essence of consciousness is and then we would have the decision of do we want to build that in or not my personal opinion is at least in the first stages we shouldn't if we have the choice because i think that brings in a lot of other complex ethical issues tell me about some of those well i mean i think if an ai system was conscious and you believed it was then you'd have to consider what rights it might have and then the other issue as well is that conscious systems or beings have generally come with free will and wanting to set their own goals and i think um you know there's some safety questions about that as well and so i think it would fit into a pattern that we're much more used to with our machines around us to view ai as a kind of tool or if it's language based the kind of oracle it's like the world's best encyclopedia right you ask a question and it has like you know all research to hand but not necessarily an opinion or a goal to do with that information right its goal would be to give that information in the most convenient way possible to the human interactor wikipedia doesn't have a theory of mind and maybe it's best to keep maybe it's best to keep it like that exactly okay how about a moral compass then can you impart a moral compass into ai and should you i mean i'm not sure i would call it a moral compass but definitely it's going to need a value system because whatever goal you give it you're effectively incentivizing that ai system to do something and so as that becomes more more general you can sort of think about that as almost a value system what do you want it to do in its set of actions what you do want to sort of disallow how should it think about side effects versus its main goal what's its top level goal if it's to keep humans happy which set of humans what does happiness mean we can definitely need for help from philosophers and sociologists and others about defining and psychologists probably you know defining what a lot of these terms mean and of course a lot of them are very tricky for humans to figure out our collective goals what do you see as the best possible outcome of having agi the outcome i've always dreamed of or imagined is agi has helped us solve a lot of the big challenges facing society today be that health cures for diseases like alzheimer's i would also imagine agi helping with climate creating a new energy source that is renewable and then what would happen after those kinds of first stage things is you kind of have this sometimes people describe it as radical abundance if we're talking about radical abundance of i don't know water and food and energy how does ai help to create that so it helps to create that by unlocking key technological breakthroughs let's take energy for example we are looking for as a species renewable cheap 
ideally free non-polluting energy and to me there's at least a couple of ways of doing that one would be to make fusion work much better than nuclear fission it's much safer that's obviously the way the sun works we're already working on one of the challenges for that which is containing the plasma in a fusion reactor and we already have the state-of-the-art way of doing that sort of unbelievably the other way is to make solar power work much better if we had solar panels just tiling something you know half the size of texas that would be enough to power the whole world's uses of energy so it's just not efficient enough right now but if you had superconductors you know room temperature superconductor which is obviously the the holy grail in that area if that was possible suddenly that would make that much more viable and i could imagine ai helping with material science that's a big combinatorial problem huge search space all the different compounds you can combine together which one's the best and of course edison sort of did that by hand when he found tungsten for light bulbs but imagine doing that at enormous scale or much harder problems than a light bulb that's kind of the sorts of things i'm thinking an ai could be used for i think you probably know what i'm going to ask you next because if that is the fully optimistic utopian view of the future it can't all be positive when you're lying awake at night what are the things that you worry about well to be honest with you i do think that is a very plausible end state the optimistic one i painted you and of course that's what i reason i work on ai's because i hoped it would be like that on the other hand one of the biggest worries i have is what humans are going to do with ai technologies on the way to agi like most technologies they could be used for good or bad and i think that's down to us as a society and governments to decide which direction they're going to go in do you think society is ready for agi i don't think yet i think that's part of what this podcast series is about as well is to give the general public a more of an understanding of what agi is what ai is and what's coming down the road and then we can start grappling with as a society and not just the technologists what we want to be doing with these systems you said you've got this sort of 20-year prediction and then simultaneously where society is in terms of understanding and grappling with these ideas do you think that deep mind has a responsibility to hit pause at any point potentially i always imagine that as we got closer to the sort of gray zone that you were talking about earlier the best thing to do might be to pause the pushing of the performance of these systems so that you can analyze down to my new detail exactly and maybe even prove things mathematically about the system so that you know the limits and otherwise of the systems that you're building at that point i think all the world's greatest minds should probably be thinking about this problem so that was what i would be advocating to you know the terence towers of this world the best mathematicians is actually if i've even talked to him about this i know you're working on the riemann hypothesis or something which is the best thing in mathematics but actually this is more pressing i have this sort of idea of like almost uh avengers assembled of the scientific world because that's a bit of like my dream deterrence tower agree to be one of your avengers i don't i didn't quite tell him the full plan of that i 
know that some quite prominent scientists have spoken in quite serious terms about this path towards getting agi i'm thinking about stephen hawking do you ever have debates with those kind of people about what the future looks like yeah i actually talked to stephen hawking a couple of times i went to see him in cambridge i was supposed to be a half an hour meeting but we ended up talking for hours he wanted to understand what was going on at the coalface of ai development and i explained to him what we were doing the kinds of things we've discussed today what we're worried about and he felt much more reassured that people were thinking about this in the correct way and at the end he said i wish you the best of luck but not too much then he looked at right in my eye and twinkle in his eye like it was just amazing that was literally his last sentence today best of luck but not too much that's lovely that was perfect it is perfect along the road to adi there have already been some significant breakthroughs with particular ai systems or narrow ai as it's sometimes known not least the deepmind system known as alpha fold which we heard about in episode 1. alpha fold has been shown to accurately predict the 3d structures of proteins with implications for everything from the discovery of new drugs to pandemic preparedness i asked ms how a company known for getting computers to play games to a superhuman level was able to achieve success in some of the biggest scientific challenges in the space of just a few short years the idea was always from the beginning of deep mind to prove our general learning ideas reinforce learning deep learning combining that on games tackle the most complex games there are out there so go and starcraft in terms of computer games and board games and then the hope was we could then start tackling real world problems especially in science which is my other huge passion and at least my personal reason for working on ai was to use ai as the ultimate tool really to accelerate scientific discovery in almost any field because if it's a general tool then it should be applicable to many many fields of science and i think alpha fold which is our program for protein folding is our first massive example of that and i think it's woken up the scientific world to the possibility of what ai could do what impact do you hope that our fold will have i hope alpha fold is the beginning of a new era in biology where computational and ai methods are used to help model all aspects of biological systems and therefore accelerate our discovery process in biology so i'm hoping that it'll have a huge effect on drug discovery but also fundamental biology understanding what these proteins do in your body and i think that if you look at machine learning it's the perfect description language for biology in the same way that maths was the perfect description language for physics and many people obviously in the last 50 years have tried to apply mathematics to biology with some success but i think it's too complex for mathematicians to describe in a few equations but i think it's the perfect regime for machine learning to spot patterns machine learning is really good at taking weak signals messy signals and making sense of them which is i think the regime that we're in with biology how could ai be used for a future pandemic so one of the things actually we're looking for now is the top 20 pathogens that biologists are identifying could cause the next pandemic to fold all the proteins which mean you know 
it's feasible involved in all those viruses so that drug discovery and farmer can have a head start at figuring out what drugs or antidotes or antivirals would they make to combat those if those viruses ended up mutating slightly and becoming the next pandemic i think in the next few years we'll also have automated drug discovery processes as well so we won't just be giving the structure of the protein we might even be able to propose what sort of compound might be needed so i think there's a lot of things ai can potentially do and then on the other side of things maybe on the analysis side to track trends and predict how spreading might happen given how significant the advances are for science that are being created by these ai systems do you think that there will ever be a day where an ai wins a nobel prize i would say that just like any tool it's the human ingenuity that's gone into it you know it's sort of like saying who should we credit spotting jupiter's moons is it his telescope no i think it's galileo and of course he also built the telescope right famously as well as it was his eye that saw it and then he wrote it up so i think it's a nice sort of science fiction story to say well the ai should win it but at least until we get to full agi if it's sentient it's picked the problem itself it's come up with a hypothesis and then it solved it that's a little bit different but for now where it's just a fairly automated tool effectively i think the credit should go probably to the humans i don't know quite like the idea of giving nobels to inanimate objects like larger hadron collider can have one exactly regression telescope can have one exactly i just quite like that idea even before agi has been created it's clear that ai systems like averfold are already having a significant impact on real world problems but for all their positives there are also some tricky ethical questions surrounding the deployment of ai which we've been exploring throughout this series things like the impact of ai on the environment and the problem of biased ai systems being used to help make decisions on things like access to healthcare or eligibility for parole what's your view on ai being used in those situations i just think we have to be very careful that the hype doesn't get ahead of itself there are a lot of people think ai can just do anything already and actually if they understood ai properly they'd know that the technology is not ready and one big category of those things is very nuanced human judgment about human behavior so parole board hearing would be a good example of that there's no way ai's ready yet to kind of model the balance of factors that experience say parole board member is balancing up across society how do you quantify those things mathematically or in data and then if you add in a further thing which is how critical that decision is either way then all those things combined mean to me that it's not something that ai should be used for certainly not to make the decision at the level ais at the moment i think it's fine to use it as an analysis tool to triage like a medical image but the doctor needs to make the decision in our episode on language models we talk about some of the more concerning potential uses of them is there anything that deep mind can do to really prevent some of those nefarious purposes of language models like spreading misinformation we're doing a bunch of research ourselves on you know the issues with language models i think there's a long way to go like in terms 
of building analysis tools to interpret what these systems are doing and why they're doing it i think this is a question of understanding why are they putting this output out and then how can you fix those issues like biases fairness and what's the right way to do that of course you want truth at the heart of it but then there are subjective things where people from different say political persuasions have a different view about something what are you going to say is the truth at that point so then it sort of impinges on like well what does society think about that and then which society are you talking about and these are really complex questions and because of that this is an area i think that we should be proceeding with caution in terms of deploying these systems in products and things how do you mitigate the impact that ai is having on the environment is there just a danger of building larger and larger and larger energy-hungry systems and having a negative impact yeah i mean we have to consider this i think that ai systems are using a tiny sliver of the world's energy usage even the big models compared to watching videos online all of these things are using way more computers and bandwidth second thing is that actually most of the big data centers now especially things like google are pretty much 100 carbon neutral but we should continue that trend to become fully green data centers and then of course you have to look at the benefits of what you're trying to build so let's say a healthcare system or something like that relative to energy usage most ai models are hugely net positive and then the final thing is we've proven is that actually building the ai models can then be used you know to optimize the energy systems itself so for example one of the best applications we've had of our ai systems is to control the cooling in data centers and save like 30 of the energy they use you know that saving is way more than we've ever used for all of our ai models put together probably so it's an important thing to bear in mind to make sure it doesn't get out of hand but i think right now i think that particular worries are sort of slightly over hyped while demis and his colleagues at deepmind are thinking hard about what could go wrong when ai is deployed in the real world what really shone through during our conversation was demus's faith in the idea that ultimately building ai and agi will be a net positive for the whole of society if you look at the challenges that confront humanity today climate sustainability inequality the natural world all of these things are in my view getting worse and worse and there's going to be new ones coming soon down the line like access to water and so on which i think are going to be really major issues in the next 50 years and if there wasn't something like ai coming down the road i would be extremely worried for our ability to actually solve these problems but i'm optimistic we are going to solve those things because i think ai is coming and i think it will be the best tool that we've ever created in some ways it's hard not to be drawn in by demise's optimism to be enthused by the tantalizing picture he paints of the future and it's becoming clearer that there are serious benefits to be had as this technology matures but as research swells behind that single north star of agi it's also evident that this progress comes with its own serious risks too there are technical challenges that need resolving but ethical and social challenges too that can't be ignored 
and much of that can't be resolved by ai companies alone they require a broader societal conversation one which i hope at least in some small way is fueled by this podcast but i'm struck most of all by how far the field has come in such a short space of time at the end of the last season we were talking enthusiastically about ai playing atari games and go and chess and now all of a sudden as these ideas have found their feet we can reasonably look forward to ai making a difference in drug discovery and nuclear fusion and understanding the genome and i do wonder what new discoveries might await when we meet again deepmind the podcast has been a whistle-down production the series producer is dan hardoon with production support from jill ateneku the editor is david prest sound design is by emma barnaby and nigel appleton is the sound engineer the original music for this series was specially composed by elainey shaw and what wonderful music it was i'm professor hannah fry thank you for listening youwelcome back to the final episode in this season of the deep mind podcast and boy have we covered a lot of ground from protein folding ais to sarcastic language models sauntering robots synthetic voices and much more it has been quite the journey but we do have one more treat in store for you a chance to hear from deepmind ceo and co-founder demis hasabis the outcome i've always dreamed of is agi has helped us solve a lot of the big challenges facing society today be that health creating a new energy source so that's what i see as happening is a sort of amazing flourishing to the next level of humanity's potential with this very powerful technology this was my opportunity to ask demis all the things that have popped into my head during the making of the series well most things we'll see how far i can push it as luck would have it the day i sat down with demis coincided with the opening of deepmind's sparkling new premises in london's king's cross there weren't many people about yet so it felt like an exclusive preview i feel like i'm in a high-end furniture catalogue let me set the scene for you this new building is rather beautifully appointed it's got a double helix staircase running through the middle there are fiddle leaf trees in practically every corner and there are stylish fluted glass critter doors between offices and yes those meeting rooms christened after great scientists galileo ada lovelace leonardo they are all still a at feature sparkling push the boat out while sipping on my beverage of choice some memorabilia outside demise's office caught my eye a nod to alphago's famous victory over lisa dole in the game go there is sitting underneath two extremely fancy black spotlights a chessboard in a black frame and if i go over to it there's a picture of gary kasparov the legendary chess player who was beaten by deep blue the ibm computer he signed the chessboard and it says for the alphago team keep conquering new heights i mean just a chessboard designed by kasparov on the wall perfectly standard oh awesome oh we're going in hi great to see you after settling down inside demise's office i started by asking him about deepmind's long-term vision of building agi or artificial general intelligence it's an ambition that has been baked into deep mind's dna from the very beginning i think it's fair to say that there's some people in the field who don't think that agi is possible they sort of say that it's a distraction from the actual work of building practical ai systems what makes you so sure 
that this is something that's possible i think it comes down to the definition of agi so if we define it as a system that's able to do a wide variety of cognitive tasks to a human level that must be possible i think because the existence proof is the human brain and unless you think there's something non-computable in the brain which so far there's no evidence for then it should be possible to mimic those functions on effectively a turing machine a computer and then the second part of that which is it's a distraction from building practical systems well i mean that may be true in the sense of what you're mostly interested is in the practical systems agi itself is a big research goal and a long term one it's not going to happen anytime soon but our view is that if you try and shoot for the stars so to speak then any technologies that you sort of build on the way can be broken off in components and then applied to amazing things and so we think striving for the long-term ambitious research goal is the best way to create technologies that you can apply right now how will you recognize agi when you see it will you know it when you see it what i imagine is going to happen is some of these ai systems will start being able to use language and i mean they already are but better maybe you'll start collaborating with them say scientifically and i think more and more as you put them to use at different tasks slowly that portfolio will grow and then eventually we could end up it's controlling a fusion power station and eventually i think one system or one set of ideas and algorithms will be able to scale across those tasks and everything in between and then once that starts being built out there will be of course philosophical arguments about is that covering all the space of what humans can do and i think in some respects it will definitely be beyond what humans are able to do which will be exciting as long as that's done in the right way and you know there'll be cognitive scientists that will look into does it have all the cognitive capabilities we think humans have creativity what about emotion imagination memory and then there'll be the subjective feeling that these things are getting smarter but i think that's partly why this is the most exciting journey in my opinion that humans have ever embarked on which is i'm sure that trying to build agi with a sort of neuroscience inspiration is going to tell us a lot about ourselves and the human mind the way you're describing it there is though this big goal in the future that you steadily approach i'm wondering whether in your mind there's also like a day where this happens like you know how children dream of lifting the world cup have you thought about the day when you walk off walk away from the office and you're like it happened today yeah i'd have dreamed about that for a very long time i think it would be more romantic in some sense if that happened where you you know one day you're coming in and then this lump of code is just executing then the next day you come in and it sort of feels sentient to you be quite amazing from what we've seen so far it will probably be more incremental and then a threshold will be crossed but i suspect it will start feeling interesting and strange in this middle zone as we start approaching that we're not there yet i don't think none of the systems that we interact with or built have that feeling of sentience or awareness any of those things they're just kind of programs that execute albeit they learn but i could 
imagine that one day that could happen you know there's a few things i look out for like perhaps coming up with a truly original idea creating something new a new theory in science that ends up holding maybe coming up with its own problem that it wants to solve these kinds of things would be sort of activities that i'd be looking for on the way to maybe that big day if you're a betting man then when do you think that will be so i think that the progress so far has been pretty phenomenal i think that it's coming relatively soon in the next you know i wouldn't be super surprised the next decade or two shane said that he writes down predictions and his confidence on them and then checks back to see how well he did in the past do you do the same thing i don't do that no i um i'm not as methodical as shane so and he hasn't shown me his recent predictions i don't know where they were secretly putting them down i have to ask him it's just a draw in his hand yes exactly like shane legg deepmind's co-founder and chief scientist who we heard from in an earlier episode demis believes that there are certain abilities that humans have but are missing from current ai systems today's learning systems are really good at learning in messy situations so dealing with vision or intuition in go so pattern recognition they're amazing for that but we haven't yet got them satisfactorily back up to be able to use symbolic knowledge so doing mathematics or language even we have some of course language models but they don't have a deep understanding yet still of concepts that underlie language and so they can't generalize or write a novel or make something new how do you test whether say a language model has a conceptual understanding of what it's coming out with that's a hard question and something that we're all wrestling with still so we have our own large language model just like most teams in these days and it's fascinating probing it you know at three in the morning that's one of my favorite things to do is just have a have a little chat with the uh with the ai system uh sometimes but i'm generally trying to break it to see exactly this like does it really understand what you're talking about one of the things that suspected they don't understand properly is quite basic real world situations that rely on maybe experiencing physics or acting in the world because obviously these are passive language models right they just learn from reading the internet so you can say sort of things like alice threw the ball to bob ball through back to alice alice throws it over the wall bob goes and gets it who's got the ball and you know obviously in that case it's bob but it can get quite confused sometimes it'll say alice or so it'll say something random so it's those types of you know almost like a kid would understand that and it's interesting are there basic things like that that it can't get about the real world because it's all it sort of only knows it from words but it's a that in itself is a fascinating philosophical question i think what we're doing is philosophy actually in the greatest tradition of that trying to understand philosophy of mind philosophy of science when it's 3am and you're talking to a language model do you ever ask if it's an agi yeah i think i must have done that yes with varying answers but it has responded yes at some point yeah it does sometimes respond yes and you know i'm an artificial system and it knows what agi is to some level i don't think it really knows anything to be honest that would 
be my conclusion it knows some words no words a clever parent yes exactly for the moment at least ai systems like language models show no signs of understanding the world but could they ever go beyond this in future do you think that consciousness could emerge as a sort of natural consequence of a particular architecture or do you think that it's something that has to be intentionally created i'm not sure i suspect that intelligence and consciousness are what's called double dissociable you can have one without the other both ways my argument for that would be that if you have a pet dog for example i think they're quite clearly have some consciousness you know they seem to dream they're sort of self-aware of what they want to do but they're not you know dogs are smart but they're not that smart right and so it's my dog isn't anyway but on the other hand if you look at intelligent systems the current ones we built okay they're quite narrow but they are very good at say games i could easily imagine carrying on with building those types of alpha zero systems and they get more general more and more powerful but they just feel like programs so that's one path and then the other path is that it turns out consciousness is integral with intelligence so in least in biological systems they seem to both increase together so it suggests that maybe there's a correlation it could be that it's causative so it turns out if you have these general intelligence systems they automatically have to have a model of their own conscious experience personally i don't see why that's necessary so i think by building ai and deconstructing it we might actually be able to triangulate and pin down what the essence of consciousness is and then we would have the decision of do we want to build that in or not my personal opinion is at least in the first stages we shouldn't if we have the choice because i think that brings in a lot of other complex ethical issues tell me about some of those well i mean i think if an ai system was conscious and you believed it was then you'd have to consider what rights it might have and then the other issue as well is that conscious systems or beings have generally come with free will and wanting to set their own goals and i think um you know there's some safety questions about that as well and so i think it would fit into a pattern that we're much more used to with our machines around us to view ai as a kind of tool or if it's language based the kind of oracle it's like the world's best encyclopedia right you ask a question and it has like you know all research to hand but not necessarily an opinion or a goal to do with that information right its goal would be to give that information in the most convenient way possible to the human interactor wikipedia doesn't have a theory of mind and maybe it's best to keep maybe it's best to keep it like that exactly okay how about a moral compass then can you impart a moral compass into ai and should you i mean i'm not sure i would call it a moral compass but definitely it's going to need a value system because whatever goal you give it you're effectively incentivizing that ai system to do something and so as that becomes more more general you can sort of think about that as almost a value system what do you want it to do in its set of actions what you do want to sort of disallow how should it think about side effects versus its main goal what's its top level goal if it's to keep humans happy which set of humans what does happiness mean we can 
definitely need for help from philosophers and sociologists and others about defining and psychologists probably you know defining what a lot of these terms mean and of course a lot of them are very tricky for humans to figure out our collective goals what do you see as the best possible outcome of having agi the outcome i've always dreamed of or imagined is agi has helped us solve a lot of the big challenges facing society today be that health cures for diseases like alzheimer's i would also imagine agi helping with climate creating a new energy source that is renewable and then what would happen after those kinds of first stage things is you kind of have this sometimes people describe it as radical abundance if we're talking about radical abundance of i don't know water and food and energy how does ai help to create that so it helps to create that by unlocking key technological breakthroughs let's take energy for example we are looking for as a species renewable cheap ideally free non-polluting energy and to me there's at least a couple of ways of doing that one would be to make fusion work much better than nuclear fission it's much safer that's obviously the way the sun works we're already working on one of the challenges for that which is containing the plasma in a fusion reactor and we already have the state-of-the-art way of doing that sort of unbelievably the other way is to make solar power work much better if we had solar panels just tiling something you know half the size of texas that would be enough to power the whole world's uses of energy so it's just not efficient enough right now but if you had superconductors you know room temperature superconductor which is obviously the the holy grail in that area if that was possible suddenly that would make that much more viable and i could imagine ai helping with material science that's a big combinatorial problem huge search space all the different compounds you can combine together which one's the best and of course edison sort of did that by hand when he found tungsten for light bulbs but imagine doing that at enormous scale or much harder problems than a light bulb that's kind of the sorts of things i'm thinking an ai could be used for i think you probably know what i'm going to ask you next because if that is the fully optimistic utopian view of the future it can't all be positive when you're lying awake at night what are the things that you worry about well to be honest with you i do think that is a very plausible end state the optimistic one i painted you and of course that's what i reason i work on ai's because i hoped it would be like that on the other hand one of the biggest worries i have is what humans are going to do with ai technologies on the way to agi like most technologies they could be used for good or bad and i think that's down to us as a society and governments to decide which direction they're going to go in do you think society is ready for agi i don't think yet i think that's part of what this podcast series is about as well is to give the general public a more of an understanding of what agi is what ai is and what's coming down the road and then we can start grappling with as a society and not just the technologists what we want to be doing with these systems you said you've got this sort of 20-year prediction and then simultaneously where society is in terms of understanding and grappling with these ideas do you think that deep mind has a responsibility to hit pause at any point potentially i always imagine 
that as we got closer to the sort of gray zone that you were talking about earlier the best thing to do might be to pause the pushing of the performance of these systems so that you can analyze down to my new detail exactly and maybe even prove things mathematically about the system so that you know the limits and otherwise of the systems that you're building at that point i think all the world's greatest minds should probably be thinking about this problem so that was what i would be advocating to you know the terence towers of this world the best mathematicians is actually if i've even talked to him about this i know you're working on the riemann hypothesis or something which is the best thing in mathematics but actually this is more pressing i have this sort of idea of like almost uh avengers assembled of the scientific world because that's a bit of like my dream deterrence tower agree to be one of your avengers i don't i didn't quite tell him the full plan of that i know that some quite prominent scientists have spoken in quite serious terms about this path towards getting agi i'm thinking about stephen hawking do you ever have debates with those kind of people about what the future looks like yeah i actually talked to stephen hawking a couple of times i went to see him in cambridge i was supposed to be a half an hour meeting but we ended up talking for hours he wanted to understand what was going on at the coalface of ai development and i explained to him what we were doing the kinds of things we've discussed today what we're worried about and he felt much more reassured that people were thinking about this in the correct way and at the end he said i wish you the best of luck but not too much then he looked at right in my eye and twinkle in his eye like it was just amazing that was literally his last sentence today best of luck but not too much that's lovely that was perfect it is perfect along the road to adi there have already been some significant breakthroughs with particular ai systems or narrow ai as it's sometimes known not least the deepmind system known as alpha fold which we heard about in episode 1. 
alpha fold has been shown to accurately predict the 3d structures of proteins with implications for everything from the discovery of new drugs to pandemic preparedness i asked ms how a company known for getting computers to play games to a superhuman level was able to achieve success in some of the biggest scientific challenges in the space of just a few short years the idea was always from the beginning of deep mind to prove our general learning ideas reinforce learning deep learning combining that on games tackle the most complex games there are out there so go and starcraft in terms of computer games and board games and then the hope was we could then start tackling real world problems especially in science which is my other huge passion and at least my personal reason for working on ai was to use ai as the ultimate tool really to accelerate scientific discovery in almost any field because if it's a general tool then it should be applicable to many many fields of science and i think alpha fold which is our program for protein folding is our first massive example of that and i think it's woken up the scientific world to the possibility of what ai could do what impact do you hope that our fold will have i hope alpha fold is the beginning of a new era in biology where computational and ai methods are used to help model all aspects of biological systems and therefore accelerate our discovery process in biology so i'm hoping that it'll have a huge effect on drug discovery but also fundamental biology understanding what these proteins do in your body and i think that if you look at machine learning it's the perfect description language for biology in the same way that maths was the perfect description language for physics and many people obviously in the last 50 years have tried to apply mathematics to biology with some success but i think it's too complex for mathematicians to describe in a few equations but i think it's the perfect regime for machine learning to spot patterns machine learning is really good at taking weak signals messy signals and making sense of them which is i think the regime that we're in with biology how could ai be used for a future pandemic so one of the things actually we're looking for now is the top 20 pathogens that biologists are identifying could cause the next pandemic to fold all the proteins which mean you know it's feasible involved in all those viruses so that drug discovery and farmer can have a head start at figuring out what drugs or antidotes or antivirals would they make to combat those if those viruses ended up mutating slightly and becoming the next pandemic i think in the next few years we'll also have automated drug discovery processes as well so we won't just be giving the structure of the protein we might even be able to propose what sort of compound might be needed so i think there's a lot of things ai can potentially do and then on the other side of things maybe on the analysis side to track trends and predict how spreading might happen given how significant the advances are for science that are being created by these ai systems do you think that there will ever be a day where an ai wins a nobel prize i would say that just like any tool it's the human ingenuity that's gone into it you know it's sort of like saying who should we credit spotting jupiter's moons is it his telescope no i think it's galileo and of course he also built the telescope right famously as well as it was his eye that saw it and then he wrote it up so i think it's a nice 
Given how significant the scientific advances created by these AI systems are, will there ever be a day when an AI wins a Nobel Prize? "I would say that, just like any tool, it's the human ingenuity that's gone into it. It's a bit like asking who we should credit with spotting Jupiter's moons. Is it the telescope? No, I think it's Galileo, and of course he also, famously, built the telescope, as well as it being his eye that saw the moons, and he who wrote it all up. So I think it's a nice science fiction story to say the AI should win it, but at least until we get to full AGI, where a system is sentient, has picked the problem itself, come up with a hypothesis, and then solved it, which would be a little bit different, the AI is just a fairly automated tool, and the credit should probably go to the humans."

"I don't know, I quite like the idea of giving Nobels to inanimate objects," I said. "The Large Hadron Collider can have one. Galileo's telescope can have one. I just quite like that idea."

---

### Ethics, Deployment, and the Environment

Even before AGI has been created, it's clear that AI systems like AlphaFold are already having a significant impact on real-world problems. But for all their positives, there are also some tricky ethical questions around the deployment of AI, which we've been exploring throughout this series: things like AI's impact on the environment, and the problem of biased AI systems being used to help make decisions about access to healthcare or eligibility for parole. What is his view on AI being used in situations like those?

"I just think we have to be very careful that the hype doesn't get ahead of itself. A lot of people think AI can already do anything, and actually, if they understood AI properly, they'd know the technology is not ready. One big category of those things is very nuanced human judgment about human behavior; a parole board hearing would be a good example. There's no way AI is ready to model the balance of factors that an experienced parole board member weighs up across society. How do you quantify those things mathematically, or in data? And if you then add in how critical the decision is either way, all of those things combined mean, to me, that this is not something AI should be used for, certainly not to make the decision, at the level AI is at today. It's fine to use it as an analysis tool, to triage a medical image, say, but the doctor needs to make the decision."

In our episode on language models, we talked about some of their more concerning potential uses. Is there anything DeepMind can do to prevent some of those nefarious uses, like spreading misinformation?

"We're doing a bunch of research ourselves on the issues with language models, and there's a long way to go in building analysis tools to interpret what these systems are doing and why. It's a question of understanding why they produce a particular output, and then how you fix issues like bias and fairness, and what the right way to do that is. Of course you want truth at the heart of it, but there are subjective things where people from different political persuasions, say, hold different views. What are you going to call the truth at that point? It impinges on what society thinks, and then on which society you're talking about. These are really complex questions, and because of that, I think this is an area where we should proceed with caution in deploying these systems in products."

How does DeepMind mitigate the impact AI is having on the environment? Is there a danger of building larger and larger energy-hungry systems with a negative impact?

"We have to consider this, but AI systems are using a tiny sliver of the world's energy, even the big models; things like watching videos online use far more compute and bandwidth. Second, most of the big data centers now, especially Google's, are pretty much 100% carbon neutral, and we should continue that trend towards fully green data centers. Then you have to weigh the benefits of what you're building, say a healthcare system, against the energy used, and on that measure most AI models are hugely net positive. And the final thing we've proven is that AI models can themselves be used to optimize energy systems. One of the best applications of our AI systems has been controlling the cooling in data centers, saving something like 30% of the energy they use. That saving is probably more than we've ever used for all of our AI models put together. So it's important to keep an eye on it and make sure it doesn't get out of hand, but right now I think that particular worry is slightly overhyped."
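The data-center result he mentions follows a classic "learn a model of the plant, then optimize against it" pattern. The sketch below is a deliberately toy illustration of that pattern, not DeepMind's actual system: it fits a least-squares model of cooling energy versus a chiller setpoint on simulated logs, then picks the setpoint with the lowest predicted energy subject to a safety constraint. Every number and relationship in it is invented.

```python
# Toy "model the plant, then optimize it" sketch, loosely inspired by the
# data-center cooling idea discussed above. All data and relationships here
# are simulated and invented; this is not DeepMind's production system.
import numpy as np

rng = np.random.default_rng(0)

# Simulated historical logs: chiller setpoint (deg C) vs. cooling energy (kW).
# Higher setpoints mean less aggressive cooling and lower energy use.
setpoints = rng.uniform(16.0, 26.0, size=500)
energy = 250.0 - 7.0 * setpoints + 0.05 * setpoints**2 + rng.normal(0, 1.0, 500)

# Fit energy ~ a*s^2 + b*s + c by ordinary least squares.
X = np.column_stack([setpoints**2, setpoints, np.ones_like(setpoints)])
(a, b, c), *_ = np.linalg.lstsq(X, energy, rcond=None)

def predicted_energy(s):
    return a * s**2 + b * s + c

def predicted_inlet_temp(s):
    # Invented linear link between chiller setpoint and server-inlet temp.
    return 0.9 * s + 4.0

# Grid-search: minimize predicted energy while keeping servers below a
# (made-up) 27 deg C inlet-temperature safety limit.
candidates = np.linspace(16.0, 26.0, 200)
feasible = candidates[predicted_inlet_temp(candidates) <= 27.0]
best = feasible[np.argmin(predicted_energy(feasible))]
print(f"recommended setpoint: {best:.1f} C, "
      f"predicted energy: {predicted_energy(best):.1f} kW")
```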
---

### A Net Positive for Society

While Demis and his colleagues at DeepMind are thinking hard about what could go wrong when AI is deployed in the real world, what really shone through during our conversation was his faith in the idea that, ultimately, building AI and AGI will be a net positive for the whole of society.

"If you look at the challenges that confront humanity today, climate, sustainability, inequality, the natural world, all of these things are, in my view, getting worse and worse, and there are new ones coming down the line, like access to water, which I think will be a really major issue in the next 50 years. If there wasn't something like AI coming down the road, I would be extremely worried about our ability to actually solve these problems. But I'm optimistic we are going to solve them, because I think AI is coming, and I think it will be the best tool we've ever created."

In some ways it's hard not to be drawn in by Demis's optimism, to be enthused by the tantalizing picture he paints of the future, and it's becoming clearer that there are serious benefits to be had as this technology matures. But as research swells behind that single north star of AGI, it's also evident that this progress comes with its own serious risks: technical challenges that need resolving, but ethical and social challenges too, which can't be ignored, and much of which can't be resolved by AI companies alone. They require a broader societal conversation, one which I hope, at least in some small way, is fueled by this podcast.

I'm struck most of all by how far the field has come in such a short space of time. At the end of the last season we were talking enthusiastically about AI playing Atari games, Go, and chess. Now, all of a sudden, as these ideas have found their feet, we can reasonably look forward to AI making a difference in drug discovery, in nuclear fusion, and in understanding the genome. I do wonder what new discoveries might await when we meet again.

---

DeepMind: The Podcast is a Whistledown production. The series producer is Dan Hardoon, with production support from Jill Achineku. The editor is David Prest. Sound design is by Emma Barnaby, and Nigel Appleton is the sound engineer. The original music for this series was specially composed by Eleni Shaw, and what wonderful music it was. I'm Professor Hannah Fry. Thank you for listening.