Jeff Hawkins - The Thousand Brains Theory of Intelligence _ Lex Fridman Podcast #208

The Nature of Intelligence: A Conversation with Jeff Hawkins

In our latest conversation, we had the pleasure of sitting down with renowned entrepreneur and scientist Jeff Hawkins. Hawkins is best known for founding Palm Computing and co-founding Handspring, where he created the PalmPilot, one of the first widely successful personal digital assistants (PDAs), and the Treo, an early smartphone. He's also a pioneer in artificial intelligence and neuroscience, with a focus on understanding the nature of human intelligence.

As we began our conversation, Hawkins turned to the topic of interacting with humans. "I think if you think about interacting with humans," he said, "it's gonna have to be engineered in there." This sentiment is echoed in his work on artificial intelligence, where he's sought to create machines that can learn and adapt like humans. However, Hawkins also acknowledges the complexity of human nature, noting that "the darker sides of human nature or the better angels of our nature" will likely play a significant role in shaping our interactions with technology.

From a reinforcement learning perspective, Hawkins mused on whether the "darker sides" or "better angels" of human nature will prevail. While he doesn't have a definitive answer, he expressed optimism that love and compassion will ultimately win out. This sentiment is reflected in his work on developing intelligent machines that can help solve some of humanity's most pressing problems, such as climate change.

Hawkins also spoke about the inevitability of progress, noting that "all you can do is accelerate the inevitable." As an individual, he hopes to have made a positive impact on the world by contributing to the development of intelligent machines — not by creating a new reality that wasn't going to happen anyway, but by helping to speed up progress towards a better future.

In terms of legacy, Hawkins hoped that people reading about his work in 100 years would remember him as someone who helped accelerate human progress. He envisioned a future where intelligent machines are used to solve some of the world's most pressing problems, and where humanity is living in a way that's compatible with the carrying capacity of the earth.

As we discussed the nature of intelligence and its potential impact on society, Hawkins offered some words of wisdom from Albert Camus: "An intellectual is someone whose mind watches itself." This phrase resonated with Hawkins, who sees himself as both the watcher and the watched. He believes that this tension between observer and observed can be a powerful tool for growth and understanding.

Throughout our conversation, Hawkins demonstrated his passion for exploring the nature of intelligence and its potential impact on humanity. His insights offer a unique perspective on the role of technology in shaping our future, and serve as a reminder of the importance of optimism and progress in the face of uncertainty.

As we concluded our conversation, Hawkins expressed gratitude to the listeners of our podcast and to the many other people contributing to the development of intelligent machines, acknowledging his own work as one part of a broader effort to create a better future.

As we leave you with this conversation, we hope that you've been inspired by Hawkins' thoughts on the nature of intelligence and its potential impact on society. We invite you to check out our sponsors — Codecademy, BiOptimizers, ExpressVPN, Eight Sleep, and Blinkist — whose resources and tools can help you accelerate your own progress towards a better future.

Full Transcript

Lex Fridman: The following is a conversation with Jeff Hawkins, a neuroscientist seeking to understand the structure, function, and origin of intelligence in the human brain. He previously wrote the seminal book on the subject, titled On Intelligence, and recently a new book called A Thousand Brains, which presents a new theory of intelligence that Richard Dawkins, for example, has been raving about, calling the book, quote, "brilliant and exhilarating." I can't read those two words and not think of him saying it in his British accent. Quick mention of our sponsors: Codecademy, BiOptimizers, ExpressVPN, Eight Sleep, and Blinkist. Check them out in the description to support this podcast.

As a side note, let me say that one small but powerful idea Jeff Hawkins mentions in his new book is that if human civilization were to destroy itself, all of our knowledge and all our creations would go with us. He proposes that we should think about how to save that knowledge in a way that long outlives us, whether that's on Earth, in orbit around Earth, or in deep space — and then to send messages that advertise this backup of human knowledge to other intelligent alien civilizations. The main message of this advertisement is not that we are here, but that we were once here. This little difference was somehow deeply humbling to me: that we may, with some non-zero likelihood, destroy ourselves, and that an alien civilization thousands or millions of years from now may come across this knowledge store — and they would, with only some low probability, even notice it, not to mention be able to interpret it. The deeper question here, for me, is: what information in all of human knowledge is even essential? Does Wikipedia capture it, or not at all? This thought experiment forces me to wonder what things we've accomplished, and are hoping to still accomplish, that will outlive us. Is it things like complex buildings, bridges, cars, rockets? Is it ideas like science, physics, and mathematics? Is it music and art? Is it computers, computational systems, or even artificial intelligence systems? I personally can't imagine that aliens wouldn't already have all of these things — in fact, much more, and much better. To me, the only unique thing we may have is consciousness itself: the actual subjective experience of suffering, of happiness, of hatred, of love. If we can record these experiences in the highest resolution, directly from the human brain, such that aliens would be able to replay them, that is what we should store and send as a message. Not Wikipedia, but the extremes of conscious experience — the most important of which, of course, is love. This is the Lex Fridman Podcast, and here is my conversation with Jeff Hawkins.

We previously talked over two years ago. Do you think there are still neurons in your brain that remember that conversation — that remember me and got excited? Like there's a Lex neuron in your brain that finally has a purpose?

Jeff Hawkins: I do remember our conversation — or I have some memories of it — and I've formed additional memories of you in the meantime. I wouldn't say there's a neuron or neurons in my brain that know you. There are synapses in my brain that have formed that reflect my knowledge of you and the model I have of you in the world. Whether the exact same synapses were formed two years ago is hard to say, because these things come and go all the time. One thing to know about brains is that when you think of things, you often erase the memory and rewrite it again. So yes, I have a memory of you, and it's instantiated in synapses. There's a simpler way to think about it: you have a model of the world in your head, and that model is continually being updated. I updated it this morning — you offered me this water and said it was from the refrigerator, and I remember these things. The model includes where we live, the places we know, the words, the objects in the world. It's a monstrous model, and it's constantly being updated, and people are just part of that model — as are animals, other physical objects, and events we've experienced. In my mind there's no special place for the memories of humans. Obviously I know a lot about my wife, and friends, and so on, but it's not like there's a special place for humans over here. We model everything, and we model other people's behaviors too. So if I have a copy of your mind in my mind, it's just because I've learned how humans behave, and I've learned some things about you, and that's part of my world model.

Lex Fridman: Well, I also mean the collective intelligence of the human species. I wonder if there's something fundamental to the brain that enables that — modeling other humans along with their ideas.

Jeff Hawkins: You're jumping into a lot of big topics. Collective intelligence is a separate topic that a lot of people like to talk about, and we can talk about it — it's interesting, we're not just individuals, we live in a society, and so on. But from our research point of view, we study the neocortex. It's a sheet of neural tissue; it's about 75 percent of your brain; and it runs on a very repetitive algorithm, a very repetitive circuit. You can apply that algorithm to lots of different problems, but underneath it's all the same thing — we're just building this model. So from our point of view, we wouldn't look for some special circuit buried someplace in the brain that might be related to understanding other humans. It's more like: how do we build a model of anything? How do we understand anything in the world? Humans are just another part of the things we understand.

Lex Fridman: So there's nothing in the brain that knows about the emergent phenomenon of collective intelligence?

Jeff Hawkins: Well, I certainly know about it — I've heard the term, I've read —

Lex Fridman: Right, but as an idea.

Jeff Hawkins: Well, I think we have language, which is sort of built into our brains, and that's a key part of collective intelligence. So there are some prior assumptions about the world we're going to live in when we're born; we're not just a blank slate. Did we evolve to take advantage of those situations? Yes. But again, we study only part of the brain, the neocortex. Other parts of the brain are very much involved in societal interactions and human emotions — how we interact with other people, when we support them, when we're greedy, things like that.

Lex Fridman: Certainly the brain is a great place to study intelligence. I wonder if it's the fundamental atom of intelligence.

Jeff Hawkins: I would say it's absolutely an essential component, even if you believe that collective intelligence is where it's all happening and that's what we need to study — which I don't believe, by the way. I think collective intelligence is really important, but I don't think it is the thing. Even if you do believe that, you have to understand how the brain works in doing it. It's more like: we are intelligent individuals, and together our intelligence is magnified — we can do things we couldn't do individually — but even as individuals we're pretty damn smart. We can model things, understand the world, and interact with it. To me, if you're going to start someplace, you need to start with the brain. Then you can ask how brains interact with each other, what the nature of language is, and how we share models. I've learned something about the world — how do I share it with you? That's really what communal intelligence is: I know something, you know something, we've had different experiences in the world. I've learned something about brains; maybe I can impart that to you. You've learned something about, say, physics, and you can impart that to me.

Lex Fridman: But it also comes down to the epistemological question of what knowledge is and how you represent it in the brain — that's where it's going to reside, right? Or in our writings. It's obvious that human collaboration and interaction are how we build societies, but some of the things you talk about and work on — some of those elements of what makes up an intelligent entity — are there within a single person.

Jeff Hawkins: Oh, absolutely. We can't deny that the brain is the core element here — at least I can't. I think it's obvious: the brain is the core element in all theories of intelligence. It's where knowledge is represented; it's where knowledge is created. We interact, we share, we build upon each other's work, but without a brain you'd have nothing — there would be no intelligence without brains. So that's where we start. I got into this field because I was just curious about who I am. How do I think? What's going on in my head when I'm thinking? What does it mean to know something? I can ask what it means for me to know something, independent of how I learned it — from you, from someone else, from society. What does it mean for me to know that I have a model of you in my head? What does it mean to know what this microphone does and how it works physically, even though I can't see it right now? How do I know that? What does it mean at the fundamental level of neurons and synapses? Those are really fascinating questions, and I'd be happy just to understand those if I could.

Lex Fridman: In your new book, you talk about our brain — our mind — as being made up of many brains. The book is called A Thousand Brains: A New Theory of Intelligence. What is its key idea?

Jeff Hawkins: The book has three sections, and maybe three big ideas.
The first section is all about what we've learned about the neocortex — that's the thousand brains theory, where we've sort of completed the picture. The second section is all about AI, and the third section is about the future of humanity. The big idea of the thousand brains theory, if I had to summarize it into one big idea, is this: we think of the brain — the neocortex — as learning a model of the world, but what we learned is that there are actually tens of thousands of independent modeling systems going on. Each of what we call a column in the cortex — and there are about 150,000 of them — is a complete modeling system. So it's a collective intelligence in your head, in some sense. The thousand brains theory asks: where do I have knowledge about this coffee cup? Where is the model of this cell phone? It's not in one place; it's in thousands of separate models that are complementary and that communicate with each other through voting. We feel like we're one person — that's our experience, and we can explain that — but in reality there are lots of these almost-little-brains, sophisticated modeling systems, about 150,000 of them in each human brain. That's a totally different way of thinking about how the neocortex is structured than we or anyone else thought of even five years ago.

Lex Fridman: You mentioned you started this journey by looking in the mirror, trying to understand who you are. So if you have many brains, who are you, then?

Jeff Hawkins: It's interesting — we have a singular perception, right? We think, oh, I'm just here, I'm looking at you. But it's composed of all these things: there are sounds, there's vision, there's touch, all kinds of inputs. We have this singular perception, and what the thousand brains theory says is that we have all these models — visual models, auditory models, models from touch, and so on — but they vote. In the cortex, you can think of these columns as little grains of rice, 150,000 of them stacked next to each other, and each one is its own little modeling system. But they have long-range connections that go between them, and we call those voting connections, or voting neurons. The different columns try to reach a consensus: what am I looking at? Each one has some ambiguity, but they come to a consensus — oh, it's a water bottle I'm looking at. We are only consciously able to perceive the voting; we're not able to perceive anything that goes on under the hood.

Lex Fridman: So the voting is what we're aware of — the result of the vote?

Jeff Hawkins: Yeah. You can imagine it this way. We were just talking about eye movements a moment ago. As I'm looking at something, my eyes are moving about three times a second, and with each movement a completely new input comes into the brain. It's not repetitive, it's not just shifting around — it's completely new, and I'm totally unaware of it. I can't perceive it. Yet if you looked at the neurons in my brain, they're going on and off with each movement. But the voting neurons are not. The voting neurons are saying: we all agree that even though I'm looking at different parts of it, this is a water bottle right now, and that's not changing, and it's in some position and pose relative to me. I have this perception of the water bottle about two feet away from me, at a certain pose to me, and that is not changing. That's the only part I'm aware of. I can't be aware of the fact that the inputs from the eyes are moving and changing and all this other stuff is happening. So these long-range connections are the part we can be conscious of. The individual activity in each column doesn't go anywhere else; it doesn't get shared anywhere else; there's no way to extract it and talk about it, or even remember it and say, oh yes, I can recall that. But these long-range connections are the things that are accessible to language and to the hippocampus — our memories, our short-term memory systems, and so on. So we're not aware of 95, maybe even 98, percent of what's going on in the brain. We're only aware of the somewhat stable voting outcome of all these things going on under the hood.

Lex Fridman: So what would you say is the basic element in the thousand brains theory of intelligence — the atom of intelligence, when you think about it? Is it the individual brains? And then, what is a brain?

Jeff Hawkins: Let's talk about what intelligence is first, and then we can talk about the elements. In my book, intelligence is the ability to learn a model of the world — to build, internal to your head, a model that represents the structure of everything you know. To know that this is a table, that's a coffee cup, this is a gooseneck lamp — to know these things, I have to have a model in my head. I don't just look at them and ask, what is that? I already have internal representations of these things, and I had to learn them; I wasn't born with any of that knowledge. We have some lights in the room here — that's not part of my evolutionary heritage, it's not in my genes. We have this incredible model, and the model includes not only what things look like and feel like, but where they are relative to each other and how they behave. I've never picked up this water bottle before, but I know that if I put my hand on that blue thing and turn it, it'll probably make a funny little sound as the little plastic things detach, and then it'll rotate, it'll look a certain way, and it'll come off. How do I know that? Because I have this model in my head. So the essence of intelligence is our ability to learn a model, and the more sophisticated our model is, the smarter we are. Not that there is a single intelligence — you can know a lot about things that I don't know, and I know about things you don't know, and we can both be very smart — but we both learn our models of the world by interacting with it.
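The column-voting mechanism Hawkins describes — many ambiguous local models reaching a stable consensus through long-range connections — can be sketched as a toy simulation. Everything here (the object list, the 70% per-column accuracy, the simple majority vote) is an illustrative assumption, not Numenta's actual model:

```python
# Toy sketch of the "voting" idea in the Thousand Brains Theory: each of many
# independent column models forms a noisy, ambiguous guess about the sensed
# object, and the consensus across columns is what we consciously perceive.
import random
from collections import Counter

OBJECTS = ["water bottle", "coffee cup", "cell phone"]  # assumed toy world

def column_guess(true_object: str, accuracy: float = 0.7) -> str:
    """One column's noisy guess from its partial sensory input."""
    if random.random() < accuracy:
        return true_object
    return random.choice([o for o in OBJECTS if o != true_object])

def vote(true_object: str, n_columns: int = 1000) -> str:
    """Majority consensus across many columns: the stable percept."""
    ballots = Counter(column_guess(true_object) for _ in range(n_columns))
    return ballots.most_common(1)[0][0]

random.seed(0)
print(vote("water bottle"))  # the consensus is far more reliable than any single column
```

The point of the sketch is that no single column needs to be reliable: with 70% per-column accuracy, the majority vote of a thousand columns is essentially never wrong, which mirrors why perception stays stable while the underlying inputs churn.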
So that is the essence of intelligence. Then we can ask ourselves: what are the mechanisms in the brain that allow us to do that, and what are the mechanisms of learning — not just the neural mechanisms, but the general process by which we learn a model? That was a big insight for us: how do you actually learn this stuff? It turns out you have to learn it through movement. You build up this model by observing things, touching them, moving them, walking around the world, and so on.

Lex Fridman: So either you move, or the thing moves, somehow.

Jeff Hawkins: Yeah. You can obviously learn some things just by reading a book, but think about it: if I said, here's a new house, I want you to learn it — what do you do? You have to walk from room to room. You have to open the doors, look around, see what's on the left and what's on the right. As you do this, you're building a model in your head — that's just what you're doing. You can't sit there and say, I'm going to grok the house. And you wouldn't want to just read some description of it, either. You literally, physically interact with it. It's the same with a smartphone: if I want to learn a new app, I touch it, I move things around, I see what happens when I do things with it. That's the basic way we learn in the world.

Lex Fridman: And by the way, when you say "model," you mean something that can be used for prediction in the future.

Jeff Hawkins: It's used for prediction and for behavior and planning.

Lex Fridman: And it does a pretty good job of it.

Jeff Hawkins: Yeah. Here's the way to think about the model — a lot of people get hung up on this. You can imagine an architect making a physical model of a house. Why do they do that? Because you can imagine what the house would look like from different angles — look at it from here, look in there — and you can ask questions like, how far is it from the garage to the swimming pool? You can look at the model and ask, what would be the view from this location? We build these physical models to let you imagine the future and imagine behaviors. Now, we can take that same model and put it in a computer. Today people build models of houses in a computer, and they do that using — we'll come back to this term in a moment — reference frames. You assign a reference frame for the house, you assign the different parts of the house to different locations, and then the computer can generate an image and say, okay, this is what it looks like from this direction. The brain is doing something remarkably similar — surprisingly so. It's using reference frames; it's building something similar to a model in a computer, with the same benefits as building a physical model. It allows me to ask: what would this thing look like in a different orientation? What would likely happen if I pushed this button, even though I've never pushed it before? Or, how would I accomplish something — say I want to convey a new idea I've learned, how would I do that? I can imagine it in my head: I could talk about it, I could write a book, I could do some podcasts, I could tell my neighbor — and I can imagine the outcomes of all these things before I do any of them. That's what the model lets you do: plan the future and imagine the consequences of our actions.

You asked about prediction. Prediction is not the goal of the model; prediction is an inherent property of it, and it's how the model corrects itself. So prediction is fundamental to intelligence and fundamental to building a model. Let me go back and be very precise about this. You can think of prediction two ways. One is, hey, what would happen if I did this? That type of prediction is a key part of intelligence. But there are also predictions like: what is this water bottle going to feel like when I pick it up? That doesn't seem very intelligent. One way to think about that kind of prediction is that it's a way for us to learn where our model is wrong. If I picked up this water bottle and it felt hot, I'd be very surprised. If it were very light, I'd be surprised. If I turned the top and it had to turn the other way, I'd be surprised. Right now I have a prediction: okay, I'm going to drink some water — I do this — there it is, I feel it opening. What if I had to turn it the other way, or what if it split in two? Then I'd say, oh my gosh, I misunderstood this — I didn't have the right model of this thing. My attention would be drawn to it; I'd be looking at it going, how the hell did that happen? Why did it open up that way? And I would update my model just by looking at it and playing around with it — update it and say, this is a new type of water bottle.

Lex Fridman: You're talking about somewhat complicated things, like a water bottle, but this also applies to basic vision — just seeing things. It's almost as if a precondition of perceiving the world is predicting it. Everything you see is first passed through your prediction.

Jeff Hawkins: Everything you see and feel. In fact, this is the insight I had back in the early 80s — and other people reached the same idea — that every sensory input you get, not just vision but touch and hearing, comes with an expectation about it, a prediction. Sometimes you can predict very accurately; sometimes you can't. I can't predict the next word that's going to come out of your mouth, but as you keep talking I make better and better predictions, and if you suddenly started talking about certain topics, I'd be very surprised. So I have this background prediction going on all the time, for all my senses. Again, the way I think about it is: this is how we learn. Our predictions are the test of our understanding. Is this really a water bottle? If it is, I shouldn't see a little finger sticking out of the side, and if I saw a little finger sticking out, I'd go, what the hell is going on? That's not normal.

Lex Fridman: That's fascinating — let me linger on this for a second. It honestly feels like prediction is fundamental to everything: to the way our mind operates, to intelligence. It's a different way to see intelligence — everything starts with prediction, and prediction requires a model. You can't predict something unless you have a model of it.

Jeff Hawkins: Right.

Lex Fridman: But the action is prediction — the thing the model does is predict.

Jeff Hawkins: Yeah, and you can then extend it: what would happen if I went and did this today? What would that be like? You can extend predictions to goals — say, I want to get a promotion at work; what action should I take? If I did this, I predict what might happen; if I spoke to this person, I predict what might happen. So it's not just low-level predictions.

Lex Fridman: It's all prediction — like a black box you can ask basically any question, low-level or high-level.

Jeff Hawkins: So we started with that observation — this non-stop prediction, which I write about in the book — and then we asked: how do neurons actually make predictions? Physically, what does a neuron do, what does the neural tissue do, when it makes a prediction? And then: what are the mechanisms by which we build a model that allows us to make predictions?
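The idea that prediction is the test of the model — a surprising outcome flags a model error and drives an update — can be sketched in a few lines. The dictionary "model", the action names, and the replace-on-surprise update rule are hypothetical simplifications for illustration, not a claim about how neurons implement this:

```python
# Toy sketch: a model predicts the sensory outcome of an action; a mismatch
# (surprise) reveals that the model is wrong and triggers a revision.

# assumed starting knowledge: (object, action) -> expected outcome
model = {("water bottle", "twist cap"): "cap loosens"}

def interact(obj: str, action: str, observed: str) -> bool:
    """Compare prediction with observation; revise the model on surprise.

    Returns True if the outcome matched the prediction (no surprise).
    """
    predicted = model.get((obj, action))
    if predicted == observed:
        return True                    # prediction confirmed: model stands
    model[(obj, action)] = observed    # surprise: attention drawn, model revised
    return False

print(interact("water bottle", "twist cap", "cap loosens"))   # True: as predicted
print(interact("water bottle", "twist cap", "cap splits"))    # False: surprise
print(interact("water bottle", "twist cap", "cap splits"))    # True: model updated
```

The design point is that prediction errors, not successes, carry the learning signal: the model only changes when the world contradicts it, which matches Hawkins' description of attention being drawn to the unexpected.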
so we started with prediction as sort of the fundamental research agenda if in some sense like and say well we understand how the brain makes predictions we'll understand how it builds these models and how it learns and that's core of intelligence so it was like it was the key that got us in the door to say that is our research agenda understand predictions so in this whole process where does intelligence originate would you say so it if we look at things that are much less intelligent to humans and you start to build up a human the process of evolution where is this magic thing that uh has a prediction model or a model that's able to predict that starts to look a lot more like intelligence is there a place where richard dawkins wrote an introduction to your uh to your book an excellent introduction i mean it puts a lot of things into context and it's funny just looking at parallels for your book and darwin's origin of species so darwin wrote about the origin of species so what is the origin of intelligence well we have a theory about it and it's just that it's a theory theory goes as follows as soon as living things started to move they're not just floating in sea they're not just a plant you know grounded some place as soon as they started the move there was an advantage to moving intelligently to moving in certain ways and there's some very simple things you can do you know bacteria or single cell organisms can move towards a source of gradient of food or something like that but an animal that might know where it is and know where it's been and how to get back to that place or an animal that might say oh there was a source of food someplace how do i get to it or there was a danger how do i get to there was a mate how do i get to them um there was a big evolution advantage to that so early on there was a pressure to start understanding your environment like where am i and where have i been and what happened in those different places so we still have this neural 
mechanism in our brains. In mammals it's in the hippocampus and entorhinal cortex; these are older parts of the brain, and they're very well studied. We build a map of our environment. Neurons in these parts of the brain know where I am in this room, where the door is, and things like that.

So a lot of other mammals have this?

All mammals have this, and almost any animal that knows where it is and can get around must have some mapping system, some way of saying "I've learned a map of my environment." I have hummingbirds in my backyard, and they go to the same places all the time. They must know where they are; they're not just randomly flying around, they know particular flowers they come back to. So we all have this, and it turns out it's very tricky to get neurons to build a map of an environment. We now know, from famous studies that are still very active, about place cells and grid cells and other types of cells in the older parts of the brain, and how they build these maps of the world. It's really clever, and it's obviously been under a lot of evolutionary pressure over a long period of time to get good at this, so animals know where they are.

What we think happened, and there's a lot of evidence to suggest this, is that this mechanism for learning a map of a space was repackaged: the same types of neurons were repackaged into a more compact form, and that became the cortical column. In some sense it was genericized, if that's a word. It was turned from something very specific, learning maps of environments, into learning a map of anything, a model of anything: not just your space, but coffee cups and so on. It got repackaged into a more compact, more universal version, and then replicated. The reason we're so flexible is that we have a very generic version of this mapping algorithm, and we have 150,000 copies of it.

Sounds a lot like the progress of deep learning.

How so?

You take neural networks that seem to work well for a specific task, compress them, multiply them by a lot, and stack them on top of each other. It's like the story of transformers.

Yeah, but in those networks you're replicating an element, yet you still need the entire network to do anything. Here, each individual element is a complete learning system. This is why I can take a human brain, cut it in half, and it still works. It's pretty amazing.

It's fundamentally distributed.

It's fundamentally distributed complete modeling systems. That's the story we like to tell, and I would guess it's largely right; there's a lot of evidence supporting this evolutionary story. The thing that brought me to this idea is that the human brain got big very quickly. That led to the proposal, a long time ago, that instead of creating new things, evolution just replicated a common element. We're also extremely flexible; we can learn things we had no history with, which tells you the learning algorithm is very generic, very universal, because it doesn't assume any prior knowledge about what it's learning. You combine those things and ask: where did that universal algorithm come from? It had to come from something that wasn't universal, something more specific. So this led to our hypothesis that you would find grid cell and place cell equivalents in the neocortex. When we published our first papers on this theory we didn't know of evidence for that; it turns out there was some, but we didn't know about it. Since then we became aware of evidence for grid cells in parts of the neocortex, and now there's new evidence coming out; some interesting papers came out just this January. One of our predictions was that if this evolutionary hypothesis is correct, we would see grid cell and place cell equivalents, cells that work like them, in every column in the neocortex, and that's starting to be seen.

Why is it important that they're present?

Because we're asking about the evolutionary origin of intelligence. Our theory is that these columns in the cortex are all working on the same principles: they're modeling systems. It's hard to imagine how neurons learn these models of things; we can talk about the details if you want. But there's this other part of the brain that we know learns models of environments. So we asked: could the mechanism used to learn a model of this room be used to learn a model of the water bottle? Is it the same mechanism? We said it's much more likely the brain is using the same mechanism, in which case it would have these equivalent cell types. The whole theory is built on the idea that these columns have reference frames and are learning models, and these grid cells create the reference frames. In some sense the major predictive part of this theory is that we will find these equivalent mechanisms in every column in the neocortex, which tells us that's what they're doing: learning sensory-motor models of the world. We were pretty confident that would happen, and now we're seeing the evidence.

So the evolutionary process does a lot of copy and paste and sees what happens.

Yeah, there's no direction to it, but it just found out: hey, if I take these elements and make more of them, what happens? Let's hook them up to the eyes, let's hook them up to the ears. And that seems to work pretty well.
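The "repackaging" idea above, one mapping mechanism made generic and then copied, can be sketched as a toy model. Everything here (the class, the coordinates, the feature names) is my own illustration, not anything from the theory's actual implementations:

```python
# Toy sketch of the "repackaged mapping" idea: one generic learner that
# ties features to locations in a reference frame. Instantiated once it
# can map a room; instantiated again it can model a coffee cup.

class MappingColumn:
    """A generic learner of (location -> feature) associations."""

    def __init__(self):
        self.map = {}  # location (tuple) -> observed feature

    def learn(self, location, feature):
        self.map[location] = feature

    def predict(self, location):
        # Predict what will be sensed at a location, if it has been learned.
        return self.map.get(location)

# The same mechanism models an environment...
room = MappingColumn()
room.learn((0, 0), "door")
room.learn((3, 4), "window")

# ...and, "repackaged", models an object.
cup = MappingColumn()
cup.learn((0, 5), "rim")
cup.learn((2, 0), "handle")

print(room.predict((0, 0)))  # door
print(cup.predict((2, 0)))   # handle
```

The point of the sketch is only that nothing in the learner is specific to rooms or to cups; flexibility comes from copying the generic mechanism, as the theory proposes happened with cortical columns.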
Yeah. Just to take a quick step back to our conversation about collective intelligence: do you sometimes see that as just another copy-and-paste aspect, copying and pasting these brains in humans, making a lot of them, and then creating social structures that almost operate as a single brain?

I wouldn't have said it, but you said it, and it sounded pretty good.

So to you, the brain is fundamentally its own thing?

I mean, our goal is to understand how the neocortex works. We can argue how essential that is to understanding the human brain, because it's not the entire human brain. You can argue how essential it is to understanding human intelligence, and how essential it is to a sort of communal intelligence. Our goal was to understand the neocortex.

So what is the neocortex, and where does it fit into the various aspects of what the brain does? How important is it to you?

Well, as I mentioned at the beginning, it's about 70 to 75 percent of the volume of a human brain, so it dominates our brain in terms of size. Not in terms of number of neurons, but in terms of size.

Size isn't everything, Jeff.

I know, but it's not nothing either. We know that all high-level vision, hearing, and touch happens in the neocortex. We know that all language occurs and is understood in the neocortex, whether that's spoken language, written language, sign language, the language of mathematics, the language of physics, music, math. We know that all high-level planning and thinking occurs in the neocortex. If I were to ask what part of your brain designed a computer, understands programming, and creates music, it's all the neocortex. That's kind of an undeniable fact. But then other parts of our brain are important too: our emotional states, regulating our body.

The way I like to look at it is: can you understand the neocortex without the rest of the brain? Some people say you can't, and I think you absolutely can. It's not that they're not interacting, but you can understand them separately. Can you understand the neocortex without understanding the emotion of fear? Yes, you can. You can understand how the system works; it's just a modeling system. I make the analogy in the book that it's like a map of the world, and how that map is used depends on who's using it. How the map of the world in our neocortex manifests in a human depends on the rest of our brain: what are my motivations, what are my desires, am I a nice guy or not, am I a cheater or not, how important different things are in my life. But the neocortex can be understood on its own. And I say that as a neuroscientist who knows there are all these interactions; I don't want to say we don't know or think about them. But from a layperson's point of view, you can say it's a modeling system. I don't tend to think much about the communal aspect of intelligence, which you've brought up a number of times; that hasn't really been my concern.

I just wonder if there's a continuum from the origin of the universe, these pockets of complexity that form living organisms. If you look at humans, we feel like we're at the top, but probably every living pocket of complexity thinks they're, pardon the French, the shit.

Well, if they're thinking.

In their sense of the world, their sense is that they're at the top of it.

You're bringing up the problems of complexity and complexity theory. It's a huge, interesting problem in science, and I think we've made surprisingly little progress in understanding complex systems in general. The Santa Fe Institute was founded to study this, and even the scientists there will say it's really hard; that science hasn't really congealed yet, and we're still trying to figure out its basic elements: where does complexity come from, what is it, how do you define it? Whether it's DNA creating bodies, or phenotypes, or individuals creating societies, or ants, or markets, it's a very complex thing. I'm not a complexity theorist. And you could ask: the brain itself is a complex system, so can we understand that? I think we've made a lot of progress understanding how the brain works, but I haven't extended that to where we sit on the complexity spectrum.

That's a great question. I'd prefer the answer to be that we're not special. It seems like, if we're honest, most likely we're not special; if there is a spectrum, we're probably not in some kind of significant place.

There's one way we could say we are special, and again, I mean only here on Earth. If we think about knowledge, what we know, human brains are clearly the only brains that have certain types of knowledge. We're the only brains on this Earth that understand what the Earth is, how old it is, that there is a universe, a picture of it as a whole. We're the only organisms that understand DNA and the origins of species. No other species on this planet has that knowledge. If one of the endeavors of humanity is to understand the universe as much as we can, I think our species is undeniably further along in that. Whether our theories are right or wrong we can debate, but at least we have theories. We know what the sun is, how its fusion works, what black holes are; we know the general theory of relativity. No other animal has any of this knowledge, so in that sense we're special. Are we special in terms of the hierarchy of complexity in the universe? Probably not.

Can we look at a neuron? You say that prediction happens in the neuron. What does that mean? The neuron is traditionally seen as the basic element of the brain.

I mentioned earlier that prediction was our research agenda. We said, okay, how does the brain make a prediction? I'm about to grab this water bottle, and my brain is predicting what I'm going to feel on all parts of my fingers; if I felt something really odd on any part, I'd notice it. So my brain is predicting what it's going to feel as I grab this thing. How does that manifest itself in neural tissue? We've got brains made of neurons, and there are chemicals, spikes, and connections; where is the prediction going on? One argument could be that when I'm predicting something, a neuron must be firing in advance: this neuron represents what you're going to feel, and it's sending a spike. Certainly that happens to some extent. But our predictions are so ubiquitous, and we're making so many of them we're totally unaware of, the vast majority, that it can't really work that way. We were trying to figure out where these predictions could be happening. I won't walk you through the whole story unless you insist, but we came to the realization that most of your predictions are occurring inside individual neurons, especially the most common ones, the pyramidal cells. There's a property of neurons; most people know that a neuron is a cell, and it has
a spike called an action potential, and it sends information. But we now know there are also spikes internal to the neuron. They're called dendritic spikes; they travel along the branches of the neuron and they don't leave it. They're internal only. There are far more dendritic spikes than action potentials, far more, and they're happening all the time. What we came to understand is that these dendritic spikes are actually a form of prediction. The internal spike is the neuron saying, "I expect that I might become active shortly; I predict I'll be generating external spikes soon." We wrote a paper in 2016 which explained how this manifests itself in neural tissue and how it all works together, and there's a lot of evidence supporting it. So we think most of these predictions are internal to neurons, and that's why you can't perceive them.

From understanding the prediction mechanism of a single neuron, do you think there are deep insights to be gained about the prediction capabilities of the mini brains within the bigger brain?

Oh yeah. Having a prediction inside an individual neuron is not that useful on its own; so what? The way it manifests itself in neural tissue is this: a neuron's spikes are a singular type of event, and if a neuron is predicting that it's going to be active, it spikes a little bit sooner, just a few milliseconds sooner, than it would have otherwise. I give the analogy in the book of a sprinter on the starting blocks: if someone says "get ready, set," you get up and you're ready to go, and when the race starts you get a little bit earlier start. That "ready, set" is like the prediction.
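The dendritic-spike idea just described can be caricatured in a few lines of code. This is my own illustration under stated assumptions (two fixed latencies standing in for "a few milliseconds sooner"), not the authors' model:

```python
# Minimal sketch of a neuron whose internal dendritic spike puts it into
# a "predictive" state: the dendritic spike produces no output, but if
# the expected input then arrives, the cell responds sooner.

class Neuron:
    def __init__(self):
        self.predictive = False  # has an internal dendritic spike occurred?

    def receive_context(self, matches_learned_pattern):
        # Dendritic (contextual) input: an internal spike only, no output spike.
        self.predictive = matches_learned_pattern

    def receive_input(self):
        # Driving input: the neuron fires either way, but a predicted neuron
        # responds with shorter latency (the "ready, set" head start).
        latency_ms = 2 if self.predictive else 10
        self.predictive = False  # the predictive state is used up
        return latency_ms

n_predicted, n_unprepared = Neuron(), Neuron()
n_predicted.receive_context(True)
print(n_predicted.receive_input())   # 2  (fires early)
print(n_unprepared.receive_input())  # 10 (fires late)
```

The specific numbers are arbitrary; the point is only the two-state structure, an internal spike that changes when, not whether, the neuron fires.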
The neuron is ready to go quicker. What happens is, when you have a whole bunch of neurons together and they're all getting these inputs, the ones in the predictive state, the ones anticipating becoming active, fire sooner if they do become active, and they disable everything else. That leads to different representations in the brain. So it's not isolated to the neuron: the prediction occurs within the neuron, but the network behavior changes. Under different predictions, the same input gets a different representation; what I predict my input will be is different under different contexts. This is a key part of how the theory works.

So, the theory of the thousand brains: if you were to count the number of brains, how would you do it?

The thousand brains theory says that basically every cortical column in your neocortex is a complete modeling system. When I ask where I have a model of something like a coffee cup, it's not in one of those modeling systems; it's in thousands of them. There are thousands of models of coffee cups. That's what the thousand brains is.

And there's a voting mechanism?

There's a voting mechanism, which is the thing you're conscious of, and it leads to your singular perception; that's why you perceive one thing. So that's the thousand brains theory. The details of how we got to it are complicated; we didn't just think of it one day. One of those details is that we had to ask how a model makes predictions, and we just talked about these predictive neurons; that's part of the theory. It sounds like a detail, but it was like a crack in the door: how are we going to figure out how these neurons do this? What is going on here? So we started from prediction, because we know it's ubiquitous; we know that every part of the cortex is making predictions.
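The voting idea, thousands of column-level models converging on one perception, can be sketched as simple tallying. To be clear, this is purely illustrative: the real mechanism is neural, not set arithmetic, and the "columns" and objects below are invented:

```python
# Hedged sketch of thousand-brains-style voting: each cortical column,
# from its own sensory input, keeps a set of objects consistent with
# what it currently senses; "voting" converges on the object the most
# columns agree on.

from collections import Counter

def vote(column_candidates):
    """Return the object(s) the most columns currently agree on."""
    tally = Counter()
    for candidates in column_candidates:
        tally.update(candidates)
    top = max(tally.values())
    return {obj for obj, n in tally.items() if n == top}

# Three columns touching different parts of something cylindrical:
columns = [
    {"coffee cup", "water bottle", "soda can"},  # feels a curved surface
    {"coffee cup", "water bottle"},              # feels a rim
    {"coffee cup"},                              # feels a handle
]
print(vote(columns))  # {'coffee cup'}
```

Each column alone is ambiguous; agreement across columns is what yields the singular perception described above.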
Therefore, whatever the predictive system is, it's going to be everywhere, and we know there are a gazillion predictions happening at once. So we started teasing it apart, asking how neurons could be making these predictions, and that built up to what we now have, the thousand brains theory, which is complex. I can state it simply, but we didn't just think of it; we had to get there step by step. It took years.

And where do reference frames fit in?

Okay, so again, I mentioned earlier the idea of a model of a house, and I said that if you're going to build a model of a house in a computer, it has a reference frame: something you can reference locations against, like Cartesian coordinates, x, y, and z axes. I can say I'm going to design a house; the front door is at this x, y, z location, the roof is at that location, and so on. That's a type of reference frame. It turns out that to make a prediction, and I walk you through this thought experiment in the book, where I was predicting what my finger would feel when I touched a coffee cup, a ceramic coffee cup, but this one will do, the cortex needs to know where the tip of the finger is relative to the coffee cup, and exactly relative to it, because the rim will feel different from the handle or the bottom. To do that, the brain has to have a reference frame for the coffee cup; it has to have a way of representing the location of my finger relative to the coffee cup. Then we realized, of course, every part of your skin has to have a reference frame relative to the things it touches, and then we did the same thing with vision. So the idea is that a reference frame is necessary to make a prediction.
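The house example can be written down directly: features stored at locations in an object-centric reference frame, so that knowing where the finger is relative to the cup is enough to predict the sensation. Coordinates and feature names here are made up for illustration:

```python
# Sketch of a reference frame: features of the cup stored at x, y, z
# locations *relative to the cup*, not to the room or the body.

cup_model = {
    (0.0, 10.0, 0.0): "smooth rim",
    (4.0, 5.0, 0.0): "handle",
    (0.0, 0.0, 0.0): "flat bottom",
}

def predict_touch(model, finger_location):
    """Predict the sensation at a location given in the object's reference frame."""
    return model.get(finger_location, "unknown: not yet learned")

print(predict_touch(cup_model, (4.0, 5.0, 0.0)))  # handle
```

The same dictionary-of-locations structure would serve for the front door and roof of the house model; only the contents differ, which is the sense in which the mechanism is generic.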
When you're touching something, or seeing something while moving your eyes or fingers, it's just a requirement to know what to predict: if I have a structure and I'm going to make a prediction, I have to know where on it I'm looking or touching. So then we asked: how do neurons make reference frames? It's not obvious; x, y, z coordinates don't exist in the brain, that's just not how it works. So we looked at the older part of the brain, the hippocampus and the entorhinal cortex, where we knew there's a reference frame for a room or an environment; remember, I talked earlier about how you can make a map of this room. So we said: they're implementing reference frames there, and reference frames need to exist in every cortical column. That was a deductive step; we deduced it has to be so.

So you take the old mammalian ability to know where you are in a particular space, and you apply it to higher and higher levels.

Yeah. First you apply it to something physical, like where your finger is. Here's how I think about it: the old part of the brain says, where is my body in this room? The new part of the brain says, where is my finger relative to this object? Where is this section of my retina relative to this object? I'm looking at one little corner; where is that relative to this patch of my retina? And then we take the same thing and apply it to concepts: mathematics, physics, humanity, whatever you want to think about.

Eventually you're pondering your own mortality.

Well, whatever. The point is, when we think about the world, when we have knowledge about the world, how is that knowledge organized, Lex? Where is it in your head? The answer is that it's in reference frames. The way I learn the structure of this water bottle, where the features are relative to each other; when I think about history or democracy or mathematics, the same basic underlying structure is at work. There are reference frames to which you assign the knowledge. In the book I go through examples like mathematics, language, and politics, but the evidence in the neuroscience is very clear: the same mechanism we use to model this coffee cup is going to be used to model high-level thoughts, the demise of humanity, whatever you want to think about.

It's interesting to think about how different the representations of those higher-level concepts are, in terms of reference frames, compared with spatial ones.

It's a different application, but it's the exact same mechanism.

But isn't there some aspect of higher-level concepts that seems hierarchical? They seem to integrate a lot of information.

So do physical objects. Take this water bottle. I'm not partial to this brand, but this is a Fiji water bottle, and it has a logo; I use this example in my book with our company's coffee cup, which has a logo on it. This object is hierarchical: it's got a cylinder and a cap, but then it has this logo on it, and the logo has a word, the word has letters, and the letters have different features. I don't have to think about this. If I say there's a Fiji logo on this water bottle, I don't have to go through and say: what is the Fiji logo? It's the F and the i and the j and the i, and there's a hibiscus flower with the stamen on it. I don't have to do that; I just incorporate all of that in some sort of hierarchical representation. I say, put this logo on this water bottle, and the logo has a word and the word has letters. It's all hierarchical.

It's amazing that the brain instantly does all that. The idea that there's water, that it's liquid, that you can drink it when you're thirsty, that there are brands: all of that information is instantly built into the whole thing once you perceive it.

I want to get back to your point about hierarchical representation. The world itself is hierarchical, right? Take this microphone in front of me: I know that inside there's going to be some electronics, some wires, a little diaphragm that moves back and forth. I don't see that, but I know it. Everything in the world is hierarchical. You go into a room and it's composed of components: the kitchen has a refrigerator, the refrigerator has a door, the door has a hinge, the hinge has screws and a pin. So the modeling system that exists in every cortical column learns the hierarchical structure of objects. It's a very sophisticated modeling system in this grain of rice. It's hard to imagine, but this grain-of-rice-sized piece of cortex can do really sophisticated things; it's got a hundred thousand neurons in it. So the same mechanism that can model a water bottle or a coffee cup can model conceptual objects as well. That's the beauty of the discovery that Vernon Mountcastle made many years ago: there's a single cortical algorithm underlying everything we're doing.

So common-sense concepts and higher-level concepts are represented in the same way, with the same mechanisms.

Yeah. It's a little like computers: all computers are universal Turing machines, even the teeny one in my toaster and the big one running some cloud server someplace. They're all running on the same principle, and they can be applied to different things. The brain is all built on the same principle: learning structured models using movement and reference frames, and it can be applied to something as simple as a water bottle or a coffee cup, or to thinking about the future of humanity.
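The bottle-logo-word-letters hierarchy described above is compositional: the bottle's model references the logo's model rather than re-learning its contents. A minimal sketch, with an invented parts table (only the Fiji example's parts come from the conversation):

```python
# Sketch of hierarchical (compositional) object models: each composite
# model lists parts, and parts may themselves be models. The bottle's
# model never has to spell out the letters; it reuses the logo model.

models = {
    "water bottle": ["cylinder", "cap", "logo"],
    "logo": ["word", "flower"],
    "word": ["F", "i", "j", "i"],
}

def expand(name):
    """Unfold a model into its primitive features by following references."""
    parts = models.get(name)
    if parts is None:          # a primitive feature, not a composite model
        return [name]
    out = []
    for p in parts:
        out.extend(expand(p))
    return out

print(expand("water bottle"))
# ['cylinder', 'cap', 'F', 'i', 'j', 'i', 'flower']
```

Reuse is the payoff: learning a new bottle with the same logo only requires a new top-level entry, which mirrors why the brain can "just incorporate" the logo without re-deriving its letters.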
Why do you have a hedgehog on your desk?

I don't know. Nobody knows.

I think it's... that's right, it's "Hedgehog in the Fog." It's a Russian reference.

Does it give you any indication, or hope, about how difficult it is to engineer common-sense reasoning? How complicated is this whole process? Looking at the brain, is it a marvel of engineering, or is it pretty dumb stuff stacked on top of itself over and over?

Can it be both?

I don't know if it can be both, because if it's an incredible engineering job, that means evolution did a lot of work.

It did, but then it just copied that. As I said earlier, figuring out how to model something like a space is really hard, and evolution had to go through a lot of trials. These cells I was talking about, the grid cells and place cells, are really complicated; this is not simple stuff. This neural tissue works on really unexpected, weird mechanisms. But evolution figured it out, and now you can just make lots of copies of it.

So it's a very interesting idea, that the brain is a lot of copies of a basic mini brain. But the question is how difficult it is to find that mini brain that you can copy and paste effectively.

Today we know enough to build this. I'm sitting here telling you: I know the steps we have to go through. There are still some engineering problems to solve, but we know enough. This is not a case of "this is an interesting idea, we have to go think about it for another few decades." No, we actually understand it in pretty good detail; not all the details, but most of them. It's complicated, but it is an engineering problem, and in my company we're working on it. We have basically a roadmap for how to do this. It's not going to take decades; it's more like a few years, optimistically, but I think that's possible. Complex things, if you understand them, you can build them.

So in which domain do you think it's best to build them? Are we talking about robotics, entities that operate in the physical world and interact with it? Entities that operate in the digital world? Or something more specific, like what's done in the machine learning community with natural language or computer vision? Where do you think it's easiest?

The first two more than the third, I would say. Again, let's use computers as an analogy. The pioneers of computing, people like John von Neumann and Turing, created this thing we now call the universal Turing machine, which became the computer. Did they know how it was going to be applied, where it was going to be used? Could they envision any of the future? No. They just said: this is a really interesting computational idea about algorithms and how you can implement them in a machine. We're doing something similar today: we're building a sort of universal learning principle that can be applied to many, many different things.

But the robotics piece of that, the interactive piece?

Okay, let's be specific. You can think of this cortical column as what we call a sensory-motor learning system: there's a sensor, and it's moving. That sensor can be physical, like my finger moving in the world, or my eye physically moving. It can also be virtual. An example would be a system that lives on the internet, that samples information on the internet and moves by following links. That's a sensory-motor system.

So something that echoes the process of a finger moving along a cup, but in a very, very loose sense.

It's like, again, learning is inherently about discovering the structure in the world.
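The virtual sensory-motor system Hawkins describes, sampling the internet and moving by following links, can be sketched with a toy web. The graph and function below are invented for illustration; nothing here is a real web client:

```python
# Sketch of a non-physical sensorimotor loop: the "sensor" samples the
# current page of a toy web, and the "movement" is following a link.

toy_web = {
    "home": {"content": "welcome", "links": ["about", "docs"]},
    "about": {"content": "company history", "links": ["home"]},
    "docs": {"content": "api reference", "links": ["home", "about"]},
}

def explore(start, moves):
    """Alternate sensing the current page with 'moving' along its first link."""
    page, sensed = start, []
    for _ in range(moves):
        sensed.append(toy_web[page]["content"])  # sensory input
        page = toy_web[page]["links"][0]         # motor output: follow a link
    return sensed

print(explore("home", 3))  # ['welcome', 'company history', 'welcome']
```

The structure of the site only becomes learnable through the sequence of (move, sense) pairs, which is the loose sense in which it echoes a finger moving over a cup.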
inherently about the subbing the structure in the world and discover the structure of the world you have to move through the world even if it's a virtual world even if it's a conceptual world you have to move through it you don't it doesn't exist in one it has some structure to it so here's here's a couple of predictions that getting what you're talking about in humans the same algorithm is does robotics right it moves my arms my eyes my body right um and so in my in the future to me robotics and ai will merge they're not going to be separate fields because they're going to the the the algorithms to really controlling robots are going to be the same algorithms we have in our brand the brain at these sensory motor algorithms i today we're not there but i think that's going to happen and and then so but not all ai systems will have b robotics you can have systems that have very different types of embodiments some will have physical movements some will have non-physical movements it's a very generic learning system again it's like computers the turing machine is it's like it doesn't say how it's supposed to be implemented it doesn't tell how big it is doesn't tell you what you apply it to but it's an interesting it's a computational principle cortical column equivalent is a computational principle is about learning it's about how you learn and it can be applied to a gazillion things this is what i think this is i think this impact of ai is going to be as large if not larger than computing has been in the last century by far because it's it's getting at a fundamental thing it's not a vision system or a learning system it's a it's not a vision system or a hearing system it is a learning system it's a fundamental principle how you learn the structure in the world how you can gain knowledge and be intelligent and that's what the thousand brain says what's going on and we have a particular implementation in our head but doesn't have to be like that at all do you think 
there's going to be some kind of impact okay let me ask it another way what do uh increasingly intelligent ai systems do with us humans in the following way like how hard is the human in the loop problem how hard is it to to interact the finger on the coffee cup equivalent of having a conversation with a human being so how hard is it to fit into our little human world uh i don't i think it's a lot of engineering problems i don't think it's a fundamental problem i could ask you the same question how hard is for computers to fit into a human world right that i mean that's essentially what i'm asking like how um much are we uh elitist are we as humans like we try to keep out uh systems i don't know i i sure i think i'm not sure that's the right question let's let's look at computers as an analogy computers are million times faster than us they do things we can't understand most people have no idea what's going on when they use computers right how do we integrate them in our society um well they're that we don't think of them as their own entities they're not living things um we don't afford them rights um we uh we rely on them our survival as a seven billion people or something like that is relying on computers now um don't you think that's a fundamental problem that we see them as something we can't we don't give rights to so computers so yeah computers so uh robots computers intelligence systems it feels like for them to operate successfully they would need to have a lot of the elements that we would start having to think about like should this entity have rights i i don't think so i i think it's tempting to think that way personally i don't think anyone hardly anyone thinks that for computers today no one says oh this thing needs a right i shouldn't be able to turn it off or you know if i throw it in the trash can you know and hit it with a sledgehammer i might perform a criminal act no no one thinks that um and now we think about intelligent machines which is 
where you're going um and and all of a sudden like well now we can't do that i think the basic problem we have here is that people think intelligent machines will be like us they're going to have the same emotions as we do the same feelings as we do what if i can build an intelligent machine that have absolutely could care less about whether it was on or off or destroyed or not it just doesn't care it's just like a map it's just a modeling system it has no desires to live nothing is it possible to create a system that can model the world deeply and not care about whether it lives or dies absolutely no question about it to me that's not 100 percent obvious it's obvious to me so okay we can debate it if you want yeah where does your where does your desire to live come from it's an old evolutionary design i mean we could argue does it really matter if we live or not objectively no right we're all going to die eventually um but evolution makes us want to live evolution makes us want to fight to live evolutionists want to care and love one another and to care for our children and our relatives and our family and and so on and those are all good things but they come about not because we're smart because we're animals that grew up you know the the hummingbird in my backyard cares about its offspring you know the every living thing in some sense cares about you know surviving but when we talk about creating intelligent machines we're not creating life we're not creating evolving creatures we're not creating living things we're just creating a machine that can learn really sophisticated stuff and that machine it may even be able to talk to us but it doesn't it's not going to have a desire to live unless somehow we put it into that system well there's learning right the the thing is but you don't learn to like want to live that's built into you it's wow people like ernest becker argue so okay uh there's the fact the finiteness of life the way we think about it is something 
we learn uh perhaps so okay yeah and some people decide they don't want to live and some people decide you know you can but the desire to live is built in dna right but i think what i'm trying to get to is uh in order to accomplish goals it's useful to have the urgency of mortality is what the stoics talked about is meditating in your mortality yeah it might be a very useful thing to do to die and have the urgency of death and to realize that to uh conceive yourself as an entity that operates in this world that eventually will no longer be a part of this world and actually conceive of yourself as a conscious entity might be very useful for you to be a system that makes sense of the world otherwise you might get lazy well okay we're going to build these machines right and so we're talking about building ais what but we're we're building the uh uh the the the equivalent of the cortical columns the uh the neocortex the neocortex and the the question is where do they arrive at because we're not hard-coding everything in where uh well well in terms of if you build the neocortex equivalent it will not have any of these desires or emotional states now you can argue that that neocortex won't be useful unless i give it some agency unless i give it some desire unless i give it some motivation otherwise you'll be as lazy and do nothing right you could argue that um but on its own it's not going to do those things it's just not it's not going to sit there and say i understand the world therefore i care to live no it's not going to do that it's just going to say i understand the world why is that obvious to you why why why don't do you think it's okay let me ask it this way do you think it's possible it will at least assign to itself agency and perceive itself in this world as being a conscious entity as a useful way to operate in the world and and to make sense of the world i think intelligent machine could be conscious but that doesn't not again imply any of these um these 
desires and goals that you're worried about.

Lex Fridman: We can talk about what it means for the machine to be conscious. And by the way, not worry about it, but get excited about it. It's not necessarily something we should worry about.

Jeff Hawkins: So I think there's a legitimate problem, or not a problem, a question to ask: if you build this modeling system, what's it going to model?

Lex Fridman: Yes, right. What's its desire, what's its goal, what are we applying it to?

Jeff Hawkins: Right, so that's an interesting question. It depends on the application. It's not something inherent to the modeling system; it's something we apply to the modeling system in a particular way. So if I wanted to make a really smart car, it would have to know about driving and cars, and what's important in driving. It's not going to figure that out on its own. It's not going to sit there and say, "You know, I've understood the world and I've decided..." No, no, no. We have to tell it. So imagine I make this car really smart. It learns about your driving habits; it learns about the world. Is it one day going to wake up and say, "You know what, I'm tired of driving and doing what you want. I think I have better ideas about how to spend my time"? No, it's not going to do that.

Lex Fridman: Part of me is playing a little bit of devil's advocate, but part of me is also trying to think through this, because I've studied cars quite a bit, and I've studied pedestrians and cyclists quite a bit, and there's part of me that thinks there needs to be more intelligence than we realize in order to drive successfully. That game theory of human interaction seems to require some deep understanding of human nature. When a pedestrian crosses the street, there's some sense in which they look at a car, usually, and then they look away. There's some sense in which they say, "I believe you're not going to murder me. You don't have the guts to murder me." This is the little
dance of pedestrian-car interaction: saying, "I'm going to look away, and I'm going to put my life in your hands, because I think you're human. You're not going to kill me." And then the car, in order to successfully operate on Manhattan streets, has to say, "No, no, no, I am going to kill you," a little bit. There's this weird inkling of mutual murder, and that's a dance, and somehow you successfully operate through it. Do you think that's something we were born with, or did we learn that social interaction? I think it might have a lot of the same elements we've been talking about: we're leveraging things we were born with and applying them in this context.

Jeff Hawkins: All right, I would answer that. I would have said that kind of interaction is learned, because people in different cultures have different interactions like that. If you cross the street in different cities around the world, people have different ways of interacting. I would say that's learned, and I would say an intelligent system could learn that too. And the intelligent system can understand humans. Just like I can study an animal and learn something about that animal, I could study apes and learn something about their culture, and so on. I don't have to be an ape to know that. I may not understand it completely, but I can understand something. So an intelligent machine can model that; it's just part of the world, part of the interactions. The question we're trying to get at is: will the intelligent machine have its own personal agency beyond what we assign to it, its own personal goals? Will it evolve and create these things? My confidence comes from understanding the mechanisms I'm talking about creating. This is not hand-wavy stuff; it's down in the details. I'm going to build it, and I know what it's going to look like, and I know how it's going to behave. I know the kinds of things it
could do and the kinds of things it can't do. Just like when I build a computer, I know it's not going to decide on its own to put another register inside of it. It can't do that. No way, no matter what your software does, it can't add a register to the computer. So in this way, when we build AI systems, we have to make choices about how we embed them. I talked about this in the book. An intelligent system is not just the neocortex equivalent. You have to have that, but it has to have some kind of embodiment, physical or virtual; it has to have some sort of goals; it has to have some sort of ideas about dangers, about things it shouldn't do. We build safeguards into systems. We have them in our bodies; we've put them into cars. My car follows my directions until the day it sees I'm about to hit something, and then it ignores my directions and puts the brakes on. So we can build those things in. That's a very interesting problem, how to build those in. My differing opinion about the risks of AI from most people's is that people assume those things will just appear automatically, that it'll evolve, that intelligence itself begets that stuff or requires it. But it doesn't. The neocortex equivalent doesn't require this. The neocortex equivalent just says, "I'm a learning system. Tell me what you want me to learn. Ask me questions and I'll tell you the answers." Again, it's like a map. A map has no intent about things, but you can use it to solve problems.

Lex Fridman: Okay, so building, engineering the neocortex in itself is just creating an intelligent prediction system...

Jeff Hawkins: Modeling system.

Lex Fridman: Sorry, modeling system. You can use it to make predictions, but you can also put it inside a thing that's actually acting in this world.

Jeff Hawkins: You have to put it inside something. Again, think of the map analogy. A map on its own
doesn't do anything. It's just inert. It can learn, but we have to embed it somehow in something that does something.

Lex Fridman: So what's your intuition here? You had a conversation with Sam Harris recently where you had a bit of a disagreement, and you're sticking on this point. Elon Musk and Stuart Russell kind of worry about the existential threat of AI. What's your intuition for why, if we engineer an increasingly intelligent neocortex-type system in a computer, that shouldn't be a thing we worry about?

Jeff Hawkins: It was interesting. You used the word "intuition," and Sam Harris used the word "intuition" too. And when he used that word, I immediately stopped and said, "Oh, that's the problem. He's using intuition." I'm not speaking about my intuition. I'm speaking about something I understand, something I'm going to build, something I am building, something I understand completely, or at least well enough to know... I'm not guessing. I know what this thing is going to do. And I think most people who are worried have trouble separating things out. They don't have the knowledge or the understanding about what intelligence is, how it's manifest in the brain, and how it's separate from these other functions in the brain. So they imagine it's going to be human-like or animal-like, that it's going to have the same sorts of drives and emotions we have. But there's no reason for that. That's just because of the unknown: "Oh my god, I don't know what this is going to do. We have to be careful. It could be like us, but really smarter." I'm saying no, it won't be like us. It'll be really smart, but it won't be like us at all. And I'm coming from that not because I'm just guessing, not using intuition. I'm basically saying: I understand how this thing works, this is what it does, let me explain it to you.

Lex Fridman: Okay, but to push back: I also disagree with
the intuitions that Sam has, but I also disagree with what you just said. What's a good analogy? If you look at the Twitter algorithm in the early days, just recommender systems: you could understand how recommender systems work. What you couldn't understand in the early days is how, when you apply that recommender system at scale to thousands and millions of people, it can change societies. So the question is, yes, you're saying this is how an engineered neocortex works, but when you have a very useful TikTok-type service that goes viral, when your neocortex goes viral and millions of people start using it, can it not destroy the world?

Jeff Hawkins: No. Well, first of all, let me back up. One thing I want to say is that AI is a dangerous technology. I'm not denying that. All technology is dangerous.

Lex Fridman: Well, AI maybe particularly so.

Jeff Hawkins: Yeah, okay. So am I worried about it? Yeah, I'm totally worried about it. The narrow component we're talking about now is the existential risk of AI. I want to make that distinction, because I think AI can be applied poorly. It can be applied in ways where people don't understand the consequences. These are all potentially very bad things, but they're not the AI system creating an existential risk on its own, and that's the only place I disagree with other people. So I think on the existential-risk thing: humans are really damn good at surviving. To kill off the human race would be very, very difficult.

Lex Fridman: Yes, but you can even...

Jeff Hawkins: I'll go further. I don't think AI systems are ever going to try to. I don't think AI systems are ever going to say, "I'm going to ignore you. I'm going to do what I think is best." I don't think that's going to happen, at least not in the way I'm talking about it. So the Twitter recommendation algorithm, this interesting example: let's use the computer as an analogy again. I build a computer. It's a
universal computing machine. I can't predict what people are going to use it for. They can build all kinds of things; they can even create computer viruses. So there's some unknown about its utility, about where it's going to go. But on the other hand, I pointed out that once I build a computer, it's not going to fundamentally change how it computes. I used the example of a register, an internal part of a computer. Computers don't evolve. They don't replicate. The physical manifestation of the computer itself is not going to change. There are certain things it can't do. So we can break this into things that are possible to happen but that we can't predict, and things that are just impossible to happen unless we go out of our way to make them happen. They're not going to happen unless somebody makes them happen.

Lex Fridman: So there's a bunch of things to say. One is the physical aspect, where you're absolutely right: we have to build a thing for it to operate in the physical world, and you can just stop building them the moment they're not doing what you want them to do...

Jeff Hawkins: Or just change the design.

Lex Fridman: Or change the design. The question is, and this is probably longer term: it's possible in the physical world to automate the building. It makes a lot of sense to automate the building. There are a lot of factories doing more and more automation, going from raw resources to the final product. It's possible to imagine, and obviously much more efficient, to create a factory that's creating robots that do something extremely useful for society. It could be personal assistants; it could be your toaster, but a toaster that has deeper knowledge of your culinary preferences.

Jeff Hawkins: Yeah. And I think now you've hit on the right thing. The real
thing we need to be worried about, Lex, is self-replication.

Lex Fridman: Right.

Jeff Hawkins: That is the thing, in the physical world, or even the virtual world: self-replication. Because self-replication is dangerous. You're probably more likely to be killed by a virus, or a human-engineered virus. The technology is getting to the point where almost anybody, well, not anybody, but a lot of people, could create a human-engineered virus that could wipe out humanity. That is really dangerous. No intelligence required, just self-replication. So we need to be careful about that. When I think about AI, I'm not thinking about robots building robots. Don't do that. Don't build a...

Lex Fridman: Well, that's because you're interested in creating intelligence. It seems like self-replication is a good way to make a lot of money.

Jeff Hawkins: Well, fine, but so is, you know, maybe editing viruses is a good way to... I don't know. The point is, as a society, when we want to look at existential risks, the existential risks we face that we can control almost all revolve around self-replication.

Lex Fridman: Yes. The question is... I don't see a good way to make a lot of money by engineering viruses and deploying them in the world.

Jeff Hawkins: There could be. There will be applications that are useful. But let's separate things out. You don't need that; you only need some terrorist who wants to do it, because it doesn't take a lot of money to make viruses. Let's just separate out what's risky and what's not risky. I'm arguing that the intelligence side of this equation is not risky. It's not risky at all. It's the self-replication side of the equation that's risky. And I'm not dismissing that. I'm scared as hell.

Lex Fridman: It's like the paperclip-maximizer thing. Those are often talked about in the same conversation. I think you're right: creating ultra-intelligent, super-intelligent systems is not necessarily coupled with arbitrarily self-replicating systems.

Jeff Hawkins: Yeah,
and you don't get evolution unless you're self-replicating. So I think that's the gist of this argument: people have trouble separating those two out. They just think, "Oh, intelligence is like us, and look at the damage we've done to this planet, how we've destroyed all these other species." Well, we replicate. There are eight billion of us, or seven billion of us, now.

Lex Fridman: I think the idea is that the more intelligent the systems we're able to build, the more tempting it becomes, from a capitalist perspective of creating products, to create self-reproducing systems.

Jeff Hawkins: All right, let's say that's true. Does that mean we don't build intelligent systems? No. It means we understand the risks and we regulate them. Look, there are a lot of things we could do as a society that would have some sort of financial benefit to someone but could do a lot of harm, and we have to learn how to regulate those things, how to deal with those things. I would argue the opposite, actually: having intelligent machines at our disposal will actually help us in the end, because they'll help us understand these risks better and help us mitigate these risks. There might be ways of answering, "How do we solve climate-change problems? How do we do this, or how do we do that?" Just like computers are dangerous in the hands of the wrong people but have been so great for so many other things: we live with those dangers, and I think we have to do the same with intelligent machines. But we have to be constantly vigilant about, (a), bad actors doing bad things with them, and, (b), don't ever, ever create a self-replicating system. And by the way, I don't even know if you could create a self-replicating system that uses a factory. That's really dangerous. Nature's way of self-replicating is so amazing. It doesn't require anything; it
just needs, you know, the thing and resources, and it goes.

Lex Fridman: Right, yeah.

Jeff Hawkins: If I said to you, "Our goal is to build a factory that builds new factories," it has to be end to end: the supply chain, mining the resources, getting the energy. I mean, that's really hard. No one's doing that in the next hundred years.

Lex Fridman: I've been extremely impressed by the efforts of Elon Musk and Tesla to try to do exactly that. Not from raw resources... well, actually, I think he states the goal as going from raw resources to the final car in one factory. That's the main goal. Of course it's not currently possible, but they're taking huge leaps.

Jeff Hawkins: Well, he's not the only one to do that. This has been a goal for many industries for a long, long time. It's difficult to do, so what a lot of people do instead is have, like, a million suppliers, and they co-locate them and tie the systems together. It's fundamentally distributed. But even that, I think, is not getting at the issue I was just talking about, which is self-replication. Self-replication means there's no entity involved other than the entity that's replicating. If there are humans in the loop, that's not really self-replicating, unless somehow we're duped.

Lex Fridman: But I also don't necessarily agree with you, because you've kind of mentioned that AI will not say no to us. I just think it will. I think it's a useful feature to build in. I'm just trying to put myself in the mind of engineers: sometimes you want it to say no.

Jeff Hawkins: Well, I gave an example earlier, the example of my car. My car turns the wheel and applies the accelerator and the brake as I say, until it decides there's something dangerous, and then it doesn't do that. Now, that wasn't something it decided to do; it's something we programmed into
the car. And good, it was a good idea. The question, again, isn't whether an intelligent system will ever ignore our commands. Of course it will, sometimes. It's whether it will do that because it came up with its own goals that serve its purposes and it doesn't care about our purposes. No, I don't think that's going to happen.

Lex Fridman: Okay, so let me ask you about these superintelligent cortical systems that we engineer, and us humans. With these entities operating out there in the world, what does the most promising future look like? Is it us merging with them? How do we keep us humans around when you have increasingly intelligent beings? One of the dreams is to upload our minds into the digital space. Can we just give our minds to these systems so they can operate on them? Is there some kind of more interesting merger, or is there more...

Jeff Hawkins: In the third part of my book I talked about all these scenarios. Let me just walk through them.

Lex Fridman: Sure.

Jeff Hawkins: The uploading-the-mind one: extremely, really difficult to do. We have no idea how to do this even remotely right now, so it would be a very long way away. But I make the argument that you wouldn't like the result, and you wouldn't be pleased with the result. It's really not what you think it's going to be. Imagine I could upload your brain into a computer right now, and now the computer is sitting there going, "Hey, I'm over here. Great, get rid of that old bio person. I don't need him." You're still sitting here.

Lex Fridman: Yeah, what are you going to do?

Jeff Hawkins: You'd say, "No, no, that's not me. I'm here." Are you going to feel satisfied then? People imagine, "Look, I'm on my deathbed and I'm about to expire, and I push the button and now I'm uploaded." But think about it a little differently. I don't think it's going to be a thing, because by the time we're able to do this, if ever, because you have to replicate the entire body, not just the brain, it's
really substantial. I walk through the issues in the book.

Lex Fridman: Do you have a sense of what makes us us? Is there a shortcut, so you could save only the certain part that makes us truly us?

Jeff Hawkins: No, and I think that machine would feel like it's you too, right? It's like having a child. I have two daughters. They're independent people. I created them, well, partly. And just because they're somewhat like me, I don't feel I'm them, and they don't feel they're me. If you split it apart, you have two people. We can come back to what makes up consciousness, we can talk about that, but we don't have a remote consciousness. I'm not sitting here going, "Oh, I'm conscious of that system over there." So let's stay on our topic. One was uploading a brain. Ain't gonna happen in a hundred years, maybe a thousand, and I don't think people are going to want to do it. Then there's merging your mind with, you know, the Neuralink thing. Again, really, really difficult. It's one thing to make progress controlling a prosthetic arm; it's another to have a billion, or several billion, connections and to understand what those signals mean. It's one thing to say, "Okay, I can learn to think in some patterns to make something happen"; it's quite another to have a computer that actually knows exactly which cells it's talking to, how it's talking to them, and that interacts with them in a way like that. Very, very difficult. We're not getting anywhere close to that.

Lex Fridman: Interesting. Can I ask a question here? For me, what makes that merger very difficult practically in the next 10, 20, 50 years is literally the biology side of it: it's just hard to do that kind of surgery in a safe way. But your intuition is that even the machine-learning part of it, where the machine has to learn what the heck it's talking to, is hard?

Jeff Hawkins: I think it's even harder,
and it's easy to do when you're talking about hundreds of signals; it's a totally different thing when you're talking about billions of signals.

Lex Fridman: So you don't think it's a raw machine-learning problem? You don't think it could be learned?

Jeff Hawkins: No, I think you'd have to have detailed knowledge. You'd have to know exactly what types of neurons you're connecting to. In the brain there are neurons that do all different types of things. It's not like a neural network; it's a very complex organic system up here. We talked about the grid cells and the place cells. You have to know what kind of cells you're talking to, what they're doing, how their timing works, all this stuff, which you can't do today. There's no way of doing that. But you're right that the biological aspect, who wants to have surgery and have this stuff inserted in their brain, is a problem. Even when we solve that problem, though, I think the information-coding aspect is much worse. It's not like what they're doing today. Today it's simple machine-learning stuff, because you're doing simple things. But if you want to merge your brain, like, "I'm thinking on the internet," merging my brain with the machine and we're both thinking together, that's a totally different issue.

Lex Fridman: That's interesting. I tend to think that if you have a super-clean signal from a bunch of neurons, even if you don't know what those neurons are, that's much easier than getting the clean signal in the first place.

Jeff Hawkins: If you think about today's machine learning, that's what you would conclude. I'm thinking about what's going on in the brain, and I don't reach that conclusion. So we'll have to see. But even then, I think there's kind of a sad future there. Do I have to plug my brain into a computer? I'm still a biological organism. I assume I'm still
going to die. So what have I achieved? What have I achieved by doing some sort of...

Lex Fridman: Oh, I disagree. We don't know what those achievements are, but it seems like there could be a lot of different applications. It's like virtual reality: to expand your brain's capability, to, say, read Wikipedia.

Jeff Hawkins: Yeah, but fine, you're still a biological organism. You're still mortal. So what are you accomplishing? You're making your life in this short period of time better, right? Just like having the internet made our lives better.

Lex Fridman: Yeah.

Jeff Hawkins: Okay, but if I think about all the possible gains we can have here, that's a marginal one. It's individual: "Hey, I'm better, I'm smarter." Fine, I'm not against it. I just don't think it's earth-changing.

Lex Fridman: But the same is true of the internet: when each of us individuals is smarter, we get a chance to share our smartness, and we get smarter and smarter together, as a collective, kind of like an ant colony.

Jeff Hawkins: But why don't I just create an intelligent machine that doesn't have any of this biological nonsense? It's all the same, everything, except don't burden it with my brain. It has a brain. It is smart. It's like my child, but it's much, much smarter than me. So I have a choice between doing some implant, some hybrid, weird biological thing with bleeding and all these problems, limited by my brain, or creating a system that is super smart, that I can talk to, that helps me understand the world, that can read Wikipedia and talk to me.

Lex Fridman: I guess my open questions there are about what the manifestation of superintelligence looks like. You talked about why I would want to merge with AI: what's the actual marginal benefit here? If we have a superintelligent system, how will it make our life better?

Jeff Hawkins: That's a
great question, but let's break it down into little pieces. On the one hand, it can make our life better in lots of simple ways. You mentioned a care robot or something that helps me do things, that cooks, whatever it does. Little things like that. We can have smarter cars. We can have better agents and aides helping us in our work environment, and things like that. To me, that's the easy stuff, the simple stuff in the beginning. In the same way that computers made our lives better in many, many ways, we'll have those kinds of things. To me, the really exciting thing about AI is its sort of transcendent quality. As humanity, we're still biological organisms. We're still stuck here on Earth, and it's going to be hard for us to live anywhere else. I don't think you and I are going to want to live on Mars anytime soon. And we're flawed. We may end up destroying ourselves. It's totally possible. If not completely, we could destroy our civilizations. Let's face the fact: we have issues here. But we can create intelligent machines that can help us in various ways. For example, and this sounds a little sci-fi, but I believe it: if we really wanted to live on Mars, we'd have to have intelligent systems that go there and build the habitat for us. Not humans. Humans are never going to do this; it's just too hard. But could we have a thousand or ten thousand engineer-workers up there doing this stuff, building things, terraforming Mars? Sure. Maybe then we can move to Mars. And if we want to go around the universe, should I send my children around the universe, or should I send some intelligent machine, which is like a child that represents me and understands our needs here on Earth, and that could travel through space? So in some sense, intelligence allows us to transcend the limitations of our biology. And don't think of it as a negative
thing. In some sense, my children transcend my biology too, because they live beyond me. And in part they represent me, and they also have their own knowledge, and I can impart knowledge to them. Intelligent machines will be like that too, but not limited like us.

Lex Fridman: But there are so many ways that transcendence can happen, and the merger of AI and humans is one of those ways. So you're talking about intelligent beings or systems propagating throughout the universe, representing us humans?

Jeff Hawkins: They represent us humans in the sense that they represent our knowledge and our history, not us individually.

Lex Fridman: Right. But the question is: is it just a database with a really damn good model?

Jeff Hawkins: No, they're conscious. Conscious just like us, but different. They're different, just like my children are different. They're like me, but they're different. These would be more different. I take a very broad view of our life here on Earth. I ask: why are we living here? Are we just living because we live? Are we surviving because we can survive? Are we fighting just because we want to keep going? What's the point of it? To me, if I ask myself what the point of life is, what transcends that ephemeral biological experience, my answer is the acquisition of knowledge: to understand more about the universe and to explore. And that's partly to learn more. I don't view it as a terrible thing if the ultimate outcome of humanity is that we create systems that are intelligent, that are our offspring but are not like us at all, while we stay here and live on Earth as long as we can, which won't be forever, but as long as we can. That would be a great thing to do. It's not a negative thing.

Lex Fridman: Would you be okay, then, if the human species vanishes, but our knowledge is preserved and keeps being expanded by intelligent systems?

Jeff Hawkins: I want
our knowledge to be preserved and expanded. Am I okay with humans dying? No, I don't want that to happen. But if it does happen... What if we were sitting here as the last two people on Earth, saying, "Lex, we blew it. It's all over"? Wouldn't I feel better if I knew that our knowledge was preserved, and that we had agents that knew about it, that had left Earth? I would want that. It's better than not having it. I make the analogy of the dinosaurs. The poor dinosaurs: they lived for tens of millions of years. They raised their kids. They fought to survive. They were hungry. They did everything we do. And then they're all gone. And if we hadn't discovered their bones, nobody would ever know that they had existed. Do we want to be like that? I don't want to be like that.

Lex Fridman: There's a sad aspect to it, and it's kind of jarring to think about, but it's possible that a human-like intelligent civilization has previously existed on Earth.

Jeff Hawkins: Oh, yeah.

Lex Fridman: The reason I say this is that it is jarring to think that after they went extinct, we wouldn't be able to find evidence of them after a sufficient amount of time. Basically, if we destroyed ourselves now, if human civilization destroyed itself, then after a sufficient amount of time we would find the evidence of the dinosaurs, but we would not find evidence of those humans.

Jeff Hawkins: Yeah, that's kind of an odd thing to think about. Although I'm not sure we have enough knowledge about species going back billions of years; we might be able to eliminate that possibility. But it's an interesting question. Of course, this is a similar question to whether there were lots of intelligent species throughout our galaxy that have all disappeared.

Lex Fridman: Yeah, that's super sad, that there...
exactly, that there may have been much more intelligent alien civilizations in our galaxy that are no longer there.

Jeff Hawkins: Yeah.

Lex Fridman: You actually talked about this: that humans might destroy ourselves, and how we might preserve our knowledge and advertise that knowledge to others. "Advertise" is a funny word to use, from a PR perspective; there's no financial gain in this. You know, like, make it interesting, from a tourism perspective. Can you describe how?

Jeff Hawkins: Well, there are a couple of things. I broke it down into two parts, actually three parts. One is: there's a lot of things we know. What if our civilization collapsed? I'm not talking tomorrow. It could be a thousand years from now, Lex. We don't really know. But historically it would be likely at some point in time.

Lex Fridman: Time flies when you're having fun.

Jeff Hawkins: That's a good way to put it. So what if intelligent life then evolved again on this planet? Wouldn't they want to know a lot about us and what we knew? But they wouldn't be able to ask us questions. So one very simple thing I said is: how would we archive what we know? It was a very simple idea. It wouldn't be that hard: a few satellites going around the sun, and we upload Wikipedia every day, that kind of thing. So if we end up killing ourselves, well, it's up there, and the next intelligent species will find it and learn something. They would like that. They would appreciate it. So that's one thing. The next thing I said: what about outside of our solar system? We have the SETI program; we're looking for intelligent signals from everybody. And if you do a little bit of math, which I did in the book, and you ask, what if intelligent species only live for 10,000 years, technologically intelligent species, ones that are really able to do this kind of thing, which we're just starting to be able to do, well, then the
chances are we wouldn't be able to see any of them, because they would all have disappeared by now. They lived for 10,000 years and now they're gone, so we're not going to find these signals being sent from these people. So I asked: what kind of signal could you create that would last a million years or a billion years, so that someone would say, "Damn, someone smart lived there"? We know that figuring that out would be a life-changing event for us. What we're looking for today in the SETI program isn't that; we're looking for very coded signals, in some sense. So I asked myself what a different type of signal might be. I've thought about this throughout my life, and in the book I gave one possible suggestion. We now detect planets going around other stars, and we do that by seeing the slight dimming of the light as the planets move in front of them. That's how we detect planets elsewhere in our galaxy. What if we created something like that, something that just rotated around the sun and blocked out a little bit of light in a particular pattern, so that someone would say, "Hey, that's not a planet. That's a sign that someone was once there"? You could have it beat out pi, 3.14159, whatever. The idea is a signal that broadcasts broadly, is visible from a distance, and takes no continued activation on our part. This is the key: no one has to be here running a computer and supplying it with power. It just goes on continuously. And I argue that part of the SETI program should be looking for signals like that. And to look for them, you ought to figure out how we would create such a signal: something that would persist for millions of years, that would broadcast broadly, that you could see from a distance, and that unequivocally came from an intelligent species. So I gave that one example.
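Hawkins doesn't spell out an encoding scheme in the conversation, but the idea of an occulter "beating out pi" can be sketched as a toy simulation. Everything below (the run-length encoding, the two-slot digit separator) is a hypothetical illustration, not anything from the book:

```python
# Toy sketch of a transit-dimming signal that encodes the digits of pi.
# Each digit d is encoded as d consecutive occlusions ("dips") followed by
# a gap, so a distant observer counting runs of dips in the light curve
# could recover 3, 1, 4, 1, 5, ... and conclude the pattern is artificial.

PI_DIGITS = [3, 1, 4, 1, 5, 9, 2, 6]

def dimming_schedule(digits):
    """Return a list of samples: 1 marks an occlusion, 0 full brightness."""
    schedule = []
    for d in digits:
        schedule.extend([1] * d)  # d consecutive dips encode the digit
        schedule.extend([0] * 2)  # a double gap separates digits
    return schedule

def decode(schedule):
    """Recover the digits by counting runs of dips between gaps."""
    digits, run = [], 0
    for sample in schedule:
        if sample:
            run += 1
        elif run:
            digits.append(run)
            run = 0
    if run:
        digits.append(run)
    return digits

sched = dimming_schedule(PI_DIGITS)
assert decode(sched) == PI_DIGITS
```

The point of the design, matching what Hawkins describes, is that the transmitter is passive: an orbiting occulter produces this pattern indefinitely with no power source or maintenance.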
Because that's the only one I know of, actually. And then finally, our solar system will ultimately die at some point. How do we go beyond that? I think, if it's at all possible, we'll have to create intelligent machines that travel throughout the solar system or throughout the galaxy, and I don't think that's going to be humans, I don't think it's going to be biological organisms. So these are just things to think about. I don't want to be like the dinosaurs. I don't want it to just be: we lived, okay, that was it, we're done.

Well, there is a kind of presumption that we're going to live forever, and I think it's a bit sad to imagine that the message we send, as you talk about it, is that we were once here, instead of we are here.

Well, it could be "we are still here." It's more of an insurance policy in case we're not.

There's something I think about that we as humans don't often think about. Whenever I record a video, and I've done this a couple of times in my life, I've recorded a video for my future self, just for fun, and it's always fascinating to think about preserving yourself for future civilizations. For me it was preserving myself for future me, but that's a little fun example of archival.

These podcasts are preserving you and me, in a way, for the future, hopefully well after we're gone.

But sitting here talking about this, you don't often think about the fact that you and I are going to die, and ten years after we're gone somebody will be watching this.

In some sense I do. I'm here because I want to talk about ideas, and these ideas transcend me, and they transcend this time on our planet. We're talking about ideas that could be around a thousand years from now or a
million years from now. When I wrote my book, I had an audience in mind, and one of the clearest audiences was...

Aliens?

No, people reading this a hundred years from now. I said to myself, how do I make this book relevant to someone reading it a hundred years from now? What would they want to know about what we were thinking back then? What would make it still an interesting book? I'm not sure I achieved that, but that was how I thought about it. Because these ideas, especially in the third part of the book, the ones we were just talking about, these crazy-sounding ideas about storing our knowledge and merging our brains with computers and sending our machines out into space, are not going to happen in my lifetime. They may not happen in the next hundred years; they may not happen for a thousand years. Who knows? But we have the unique opportunity right now, you, me, and other people, to at least propose the agenda that might impact the future like that.

That's a fascinating way to think about writing or creating: try to create ideas, create things, that hold up over time.

Yeah. Understanding how the brain works, we're going to figure that out once. That's it. It's going to be figured out once, and after that, that's the answer, and people will study it thousands of years from now. We still venerate Newton and Einstein, because ideas are exciting even well into the future.

Well, the interesting thing is that big ideas, even if they're wrong, are still useful, especially if they're not completely wrong. Newton's laws are not wrong; Einstein's are just better. But with Newton and Einstein we're talking about physics. I wonder if we'll ever achieve that kind of clarity in understanding complex systems, and this particular
manifestation of complex systems which is the human brain.

I'm totally optimistic we can do that. I mean, we're making progress at it. I don't see any reason why we can't completely understand it, in the sense that, you know, we don't really understand what every molecule in this water bottle is doing, but we have laws that capture it pretty well. We'll have that kind of understanding. It's not like you're going to have to know what every neuron in your brain is doing. But enough to, first of all, build it, and second of all, do what physics does, which is have concrete experiments where we can validate it. This is happening right now; it's not some future thing. I'm very optimistic, because I know about our work and what we're doing. I still have to prove it to people. I consider myself a rational person, and until fairly recently I wouldn't have said this, but sitting right here right now I'm saying: this is going to happen. There are no big obstacles to it. We finally have a framework for understanding what's going on in the cortex, and that's liberating. It's like, oh, it's happening. So I can't see why we wouldn't be able to understand it. I just can't.

So on that topic, let me ask you to play devil's advocate. Is it possible for you to imagine, looking at your book a hundred years from now, in which ways your ideas might be wrong?

Oh, I worry about this all the time.

It's still useful, though.

Yeah, yeah. I can best relate it to the things I'm worried about right now. We talked about this voting idea. It's happening; there's no question that's happening. But there are enough things I don't know about it that it might be working differently from how I'm thinking about it: exactly what's voting, who's
voting, where the representations are. I talked about how you have a thousand models of a coffee cup. That could turn out to be wrong, because maybe there are a thousand sub-models, but not really a thousand complete models of the coffee cup. These are all things on the edges, things that I present as simple and clean, and it's always going to be more complex than that, and there are parts of the theory whose complexity I don't understand well. I think the idea that the brain is a distributed modeling system is not controversial at all; that's well understood by many people. The question then is whether each cortical column is an independent modeling system. I could be wrong about that. I don't think so, but I worry about it.

My intuition, without even thinking about why you could be wrong, is the same intuition I have about physics theories like string theory: that we as humans desire a clean explanation, and a hundred years from now intelligent systems might look back at us and laugh at how we tried to get rid of the whole mess by having a simple explanation, when the reality is way messier, and in fact impossible to understand. You can only build it. It's like the idea of complex systems and cellular automata: you can only launch the thing; you cannot understand it.

I think the history of science suggests that's not likely to occur. The history of science suggests that, as a theorist, and we're theorists, you look for simple explanations, fully knowing that whatever simple explanation you come up with is not going to be completely correct. It can't be; there's just more complexity. But that's the role theorists play. They give you a framework in which you can now talk about a problem and figure out, okay, now we can start digging into more details. The best frameworks stick around while the details change.
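As a toy illustration of the voting idea discussed above, here is a minimal sketch, my own formulation rather than Numenta's actual algorithm, of cortical columns combining their beliefs about which object they are sensing:

```python
# Minimal sketch of "voting" between column models (hypothetical toy code):
# each column maintains a belief distribution over candidate objects, and
# the columns vote by multiplying their beliefs and renormalizing, so the
# ambiguity in any single column is resolved by the consensus.

def vote(column_beliefs):
    """Combine per-column beliefs (dicts of object -> probability)."""
    objects = set().union(*column_beliefs)
    combined = {}
    for obj in objects:
        p = 1.0
        for belief in column_beliefs:
            p *= belief.get(obj, 0.0)  # a column ruling an object out vetoes it
        combined[obj] = p
    total = sum(combined.values())
    return {o: p / total for o, p in combined.items()} if total else combined

# Three columns touch a cup; each one alone is ambiguous between objects.
columns = [
    {"cup": 0.5, "bowl": 0.5},
    {"cup": 0.6, "can": 0.4},
    {"cup": 0.7, "bowl": 0.3},
]
consensus = vote(columns)
assert max(consensus, key=consensus.get) == "cup"
```

The design choice here, multiplying distributions, is just one way to combine evidence; the open questions Hawkins mentions (what exactly votes, and where the representations live) are precisely the parts this sketch glosses over.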
Again, the classic example is Newton and Einstein. Newton's theories are still used. They're still valuable, they're still practical, they're not wrong; they've just been refined.

Yeah, but that's physics. By the way, it's not obvious for physics either that the universe should be amenable to these simple laws.

But so far it appears to be, as far as we can tell.

As far as we can tell. But it's also an open question whether the brain is amenable to such clean theories. Not the brain, actually: intelligence.

Well, I don't know. I would take intelligence out of it. The evidence we have suggests that the human brain is at the same time extremely messy and complex, but there are some parts that are very regular and structured. That's why we started with the neocortex. It's extremely regular in its structure, unbelievably so. And then, as I mentioned earlier, the other thing is its universal ability. It is so flexible; it learns so many things. We haven't figured out what it can't learn yet. We don't know, but we haven't figured it out yet, and it learns things that it never evolved to learn. Those give us hope. That's why I went into this field, because I said, this regular structure is doing an amazing number of things; there have to be some underlying principles that are common. Other scientists have come to the same conclusion. So it's promising, and whether the theories play out exactly this way or not, that is the role theorists play. And so far it's worked out well, even if maybe we don't understand all the laws of physics; so far the theories we have have been pretty damn useful.

You mentioned that we should not necessarily be, at least to the degree that we are, worried about the
existential risks of artificial intelligence relative to the existential risks stemming from human nature. What aspect of human nature worries you the most in terms of the survival of the human species?

I'm disappointed in humanity, in us as humans, all of us. I'm one, so I'm disappointed in myself too. It's kind of a sad state. There are two things that disappoint me. One is how difficult it is for us to separate the rational component of ourselves from our evolutionary heritage, which is not always pretty. Rape is an evolutionarily good strategy for reproduction. Murder can be at times, too. Making other people miserable at times is a good strategy for reproduction. And now that we know that, and yet you and I can have this very rational discussion about intelligence and brains and life and so on, it seems so hard, such a big transition, to get all humans to say: let's pay no attention to all that ugly stuff over here, let's focus on what's unique about humanity, our knowledge and our intellect.

But the fact that we're striving is in itself amazing, right? The fact that we're able to overcome that part, and it seems like we are more and more succeeding at overcoming it. That's the optimistic view, and I agree with you.

Yeah, but I worry about it. Maybe that was your question: I still worry about it. We could end tomorrow because some terrorists could get nuclear bombs and blow us all up. Who knows? The other thing I'm disappointed by, and I understand it, so I guess you can't really be disappointed, it's just a fact, is that we're so prone to false beliefs. We have a model in our head of the things we can interact with directly, physical objects, people, and that model is pretty good, and
we can test it all the time. I touch something, I look at it, I talk to you and see whether my model is correct. But so much of what we know is stuff we can't directly interact with. We only know about it because someone told us. So we're inherently prone to false beliefs, because if I'm told something, how am I going to know whether it's right or wrong? And so then we have the scientific process, which says we are inherently flawed, so the only way we can get closer to the truth is by looking for contrary evidence.

Yeah, like this conspiracy theory, this theory that scientists keep telling me about, that the earth is round. As far as I can tell, when I look out, it looks pretty flat. So there's a tension there. But I also tend to believe that we haven't figured out most of this thing. Most of nature around us is a mystery.

Does that worry you? Or is it more like a pleasure, something more to figure out?

Yeah, that's exciting. But I'm saying there are going to be a lot of quote-unquote wrong ideas. I've been thinking a lot about engineered systems like social networks, and I've been worried about censorship and thinking through all that kind of stuff, because there are a lot of wrong ideas, a lot of dangerous ideas. But then I also read history and see what happens when you censor ideas that are wrong. Now, this could be small-scale censorship, like a young grad student who raises their hand and offers some crazy idea, and a form of censorship, maybe I shouldn't use the word censorship, is to just disincentivize them: no, no, no, this is the way it's always been done.

Yeah, you're a foolish kid, don't do it.

Those wrong ideas most of the time end up being wrong, but sometimes they're not.

I agree with you, and I don't like the word censorship. At the very end of the book I
ended with a sort of plea, a recommended course of action. The best way I know how to deal with the issue you bring up is this: if everybody understood, as part of their upbringing, something about how their brain works, that it builds a model of the world, how it basically builds that model, that the model is not the real world, it's just a model, that it's never going to reflect the entire world, that it can be wrong and it's easy for it to be wrong, and all the ways you can end up with a wrong model in your head. It doesn't prescribe what's right or wrong; just understand the process. If we all understood the process, then when you and I get together and you say, "I disagree with you, Jeff," and I say, "Lex, I disagree with you," at least we understand that we're both trying to model something, that we have different information, which leads to our different models, and therefore I shouldn't hold it against you and you shouldn't hold it against me. And we can at least agree on what we can look for as common ground to test our beliefs. As opposed to how we raise our kids now, on dogma: this is a fact, and this is a fact, and these people are bad. If everyone knew to be skeptical of every belief, and why and how their brains form them, I think we might have a better world.

Do you think the human mind is able to comprehend reality? You talk about creating models that are better and better; how close do you think we get to reality? One of the wildest ideas is Donald Hoffman saying we're very far away from reality. Do you think we're getting close?

Well, I guess it depends on how you define reality. We have a model of the world that's very useful for our survival and our pleasure. So it's useful. I mean, it's really useful. We can build planes, we can build computers, we can do
these things. I don't know the answer to that question. I think that's part of what we're trying to figure out. Obviously, if you end up with a theory of everything that really is a theory of everything, and all of a sudden everything falls into place and there's no room for something else, then you might feel like we have a good model of the world.

Yeah, but even if we have a theory of everything, and somehow, first of all, you'll never be able to conclusively say it's a theory of everything, but say somehow we were very damn sure it was, and we understood what happened at the Big Bang and the entirety of the physical process, I'm still not sure that gives us an understanding of the next many layers of the hierarchy of abstractions that form.

Well, also, what if string theory turns out to be true? Then you'd say, well, we have no model of what's going on in those other dimensions that are wrapped into each other. Or the multiverse. I honestly don't know how it helps us, for human interaction, for ideas, for intelligence, to understand that we're made up of vibrating strings that are ten to the whatever times smaller than us. You could probably build better weapons and better rockets, but you're not going to be able to understand intelligence.

Maybe better computers.

No, you won't. I think it's just more pure knowledge. It might lead to a better understanding of the beginning of the universe.

Right, it might lead to a better understanding of, I don't know. I guess I think the acquisition of knowledge has always been something you pursue for its own pleasure, and you don't always know what is going to make a difference. You're pleasantly surprised by the weird things you find.

Do you think, for the neocortex in general, there's a lot of innovation to be
done on the machine side? You use the computer as a metaphor quite a bit. Are there different types of computers that would help us build intelligent machines, different manifestations of them?

Oh, we have no idea how this is going to play out yet. But you can already see it today. Of course, we model these things on traditional computers, and now GPUs are really popular for neural networks and so on. But there are companies coming up with fundamentally new physical substrates that are just really cool. I don't know if they're going to work or not, but I think there will be decades of innovation here.

Do you think the final thing will be messy, like our biology is messy? Or is it the old birds-versus-airplanes question: could we just build airplanes that fly way better than birds?

Can I riff on the bird thing a bit? Because I think people misunderstand this. The problem the Wright brothers were trying to solve was controlled flight: how to turn an airplane, not how to propel one. They weren't worried about that.

Interesting.

At that time there were already wing shapes, which came from studying birds, and there were already gliders that could carry people. The problem was that if you put a rudder on the back of a glider and turn it, the plane falls out of the sky. So the problem was how to control flight. They studied birds. They actually had birds in captivity, they watched birds in wind tunnels, they observed them in the wild, and they discovered that the secret was that birds twist their wings when they turn. So that's what they did on the Wright brothers' flyer: they had these sticks that would twist the wing, and that was their innovation, not the propeller. And today, airplanes
still twist their wings. We don't twist the entire wing, just the tail end of it, the flaps, but it's the same thing. So today's airplanes fly on the same principles as birds, principles discovered by observing them. Everyone gets that analogy wrong. But let's step back from that. Once you understand the principles of flight, you can choose how to implement them.

Yeah, no one's going to use bones and feathers and muscles.

But airplanes do have wings. We just don't flap them; we have propellers. So when we understand the principles of computation that go into modeling the world in the brain, and we understand those principles very clearly, we'll have choices about how to implement them, and some of those choices will be biology-like and some won't. But I do think there's going to be a huge amount of innovation here. Just think about the innovation that went into computers: they had to invent the transistor, they invented the silicon chip, they had to invent software, memory systems, zillions of things. What we do will be similar.

Well, it's interesting that the effectiveness of deep learning for specific tasks is driving a lot of innovation in hardware, which may have the effect of actually allowing us to discover intelligent systems that operate very differently from, or are much bigger than, deep learning.

Yeah. So ultimately it's good to have an application that's making our life better now, because of the capitalist process: if you can make money.

Yeah, that works. The other way, and Neil deGrasse Tyson writes about this, the other way we fund science, of course, is through the military, through conquest.

So here's an interesting thing we're doing in this regard. We had this set of biological principles and we could see how to build intelligent machines with them, but we've decided to apply some of these principles to today's machine learning techniques.
One principle we didn't talk about is sparsity in the brain: only a small fraction of the neurons are active at any point in time, and the connectivity is sparse. That's different from deep learning networks. We've already shown that we can speed up existing deep learning networks anywhere from a factor of 10 to a factor of 100, literally 100, and make them more robust at the same time. So this is commercially very valuable. And if we can prove this in the larger systems that are commercially applied today, there's a big commercial desire to do it. Sparsity is something that doesn't run really well on existing hardware; it doesn't run well on GPUs or CPUs. So this would be a way of bringing more and more brain principles into existing systems on a commercially valuable basis. Another thing we think we can do is use the dendrite models, the prediction occurring inside a neuron that I talked about earlier. That basic property can be applied to existing neural networks to let them learn continuously, which is something they don't do today. We wouldn't model the spikes, but today's neural networks use something called the point neuron, a very simple model of a neuron, and by adding dendrites to it, just one more level of the complexity that's in biological systems, you can solve problems in continuous learning and rapid learning. So we're trying to bring the existing field of machine learning commercially along with us, and we'll see if we can do it.

You brought up this idea of paying for it commercially as we move towards the ultimate goal of a true AI system. Even small innovations on neural networks are really exciting, because it seems like such a trivial model of the brain.
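The sparsity idea can be illustrated with a k-winners-take-all activation, which keeps only the k most active units in a layer and zeroes the rest, so downstream computation only touches a small fraction of the units. This is a minimal sketch of the mechanism, not Numenta's actual implementation (their published work pairs sparse activations with sparse weights):

```python
# Sketch of k-winners-take-all sparsity: keep the k largest activations,
# zero out everything else. With k much smaller than the layer size, most
# downstream multiply-accumulates can be skipped entirely, which is where
# the claimed speedups come from on hardware that exploits sparsity.

def k_winners(activations, k):
    """Return a copy with only the k largest values kept, rest zeroed."""
    if k >= len(activations):
        return list(activations)
    threshold = sorted(activations, reverse=True)[k - 1]
    out, kept = [], 0
    for a in activations:
        if a >= threshold and kept < k:
            out.append(a)
            kept += 1  # the kept counter breaks ties at the threshold
        else:
            out.append(0.0)
    return out

dense = [0.1, 0.9, 0.2, 0.8, 0.05, 0.7, 0.3, 0.4]
sparse = k_winners(dense, k=2)
assert sparse == [0.0, 0.9, 0.0, 0.8, 0.0, 0.0, 0.0, 0.0]
```

In a real network this would be applied per layer during both training and inference; the robustness benefit Hawkins mentions comes from the fact that noise on the zeroed units simply never propagates.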
And applying different insights, like you said, continuous learning, or making it more asynchronous, or more dynamic, or incentivizing sparsity somehow.

Yeah, well, if you can make things a hundred times faster, there's plenty of incentive. People are spending millions of dollars just training some of these networks now, these transformer networks.

Let me ask you a big question. For young people listening to this today, in high school and college, what advice would you give them in terms of which career path to take, and maybe about life in general?

Well, in my case, I didn't start life with any kind of goals. When I was going to college, I was like, oh, maybe I'll do electrical engineering. It wasn't like today, where you see some of these young kids who are so motivated to change the world. I was like, whatever. But then I did fall in love with something, besides my wife. I fell in love with this idea: oh my god, it would be so cool to understand how the brain works. And then I said to myself, that's the most important thing I could work on. I can't imagine anything more important, because if we understand how brains work, we could build intelligent machines, and they could help figure out all the other big questions of the world. And I said, I want to understand how I work. So I fell in love with this idea and became passionate about it. And this is a trope, people say this, but it's true: because I was passionate about it, I was able to put up with so much crap. I was a graduate student at Berkeley when they said, you can't study this problem, no one's going to solve it, you can't get funded for it. Then I went into mobile
computing, and people said, you can't do that, you can't build a cell phone. But all along I kept being motivated, because I wanted to work on this problem. I said, I want to understand how the brain works, and I've got one lifetime, so I'm going to figure it out, do the best I can. Because, as you point out, Lex, it's really hard to do these things. There are so many downers along the way, so many obstacles in your way. I'm sitting here happy all the time, but trust me, it's not always like that.

That's where, I guess, the passion is a prerequisite for surviving the whole thing.

Yeah, I think that's right. So I don't want to say to someone, you need to find a passion and do it. No, maybe you won't. But if you do find something you're passionate about, then you can follow it as far as your passion will let you put up with it.

Do you remember how you found it, how the spark happened?

Why, specifically, for me?

Yeah. Because you said it in such an interesting way. It was almost later in life, and by later I mean not when you were five. You didn't really know, and then all of a sudden you fell in love with the idea.

Yeah, there were two separate events that compounded one another. One was when I was a teenager, maybe 17 or 18:
I made a list of the most interesting problems I could think of. First was: why does the universe exist? It seems like not existing is more likely. The second was: given that it exists, why does it behave the way it does? The laws of physics, why is E equal to mc squared and not mc cubed? Interesting question. The third was: what's the origin of life? And the fourth was: what's intelligence? And I stopped there. I said, well, that's probably the most interesting one, and I put it aside as a teenager. But then when I was 22, it was 1979, I was reading the September issue of Scientific American, which was all about the brain, and the final essay was by Francis Crick, of DNA fame, who had by then turned his interest to studying the brain. And he said, there's something wrong here. We've got all this data, all these facts about the brain, tons and tons of facts, and this was 1979. Do we need more facts, or do we just need to think about a way of rearranging the facts we have? Maybe we're just not thinking about the problem correctly. So I read that and I said, wow, I don't have to become an experimental neuroscientist. I could just look at all those facts, become a theoretician, and try to figure it out. And I felt it was something I would be good at. I wouldn't be a good experimentalist; I don't have the patience for it. But I'm a good thinker, and I love puzzles, and this is like the biggest puzzle in the world, the biggest puzzle of all time, and all the puzzle pieces are in front of me. Damn, that was exciting.

And there's something, obviously, you can't quite convert it into words, that just sparked this passion. I've had that a few times in my life, just something
that just grabs you.

Yeah, I felt it was something that was both important and that I could make a contribution to. So all of a sudden it gave me purpose in life.

You know, I honestly don't think it has to be as big as one of those four questions.

No, no. I think you can find those things in the smallest of pursuits.

Oh, absolutely. I'm with David Foster Wallace: the key to life is to be unborable. I think it's very possible to find that intensity of joy in the smallest things.

Absolutely. You asked me my story, that's all.

Yeah, and I'm also speaking to the audience: it doesn't have to be those four. You happened to get excited by one of the bigger questions in the universe, but even the smallest things, like watching the Olympics now, just giving your life over to the study and mastery of a particular sport, is fascinating. If it sparks joy and passion, you're able, in the case of the Olympics, to basically suffer for a couple of decades to achieve it.

And you can find joy and passion just being a parent.

Yeah, the parenting one is funny. For a long time I've wanted kids and marriage, especially because I've seen a lot of people I respect get a whole other level of joy from kids. At first you think, well, I don't have enough time in the day, right? If I have this passion, if I want to solve intelligence, how is this kid situation going to help me? But then you realize that, like you said, it's the things that spark joy, and it's very possible that kids can provide an even greater, deeper, more meaningful joy than those bigger questions, and that they enrich each other. When I was younger that was probably a counterintuitive notion, because there are only so many hours in the day, but then life is
finite, and you have to pick the things that give you joy.

Yeah, but you can also be patient. Life is finite, but we have, whatever, fifty years or so; it's not so short. In my case, I had to give up on my dream of neuroscience for a while. I was a graduate student at Berkeley, and they told me I couldn't do this and I couldn't get funded. So I went back into the computing industry for a number of years. I thought it would be four, but it turned out to be more. But I said, I'll come back. I'm definitely going to come back. I know I'm going to do this computer stuff for a while, but I'm definitely coming back, and everyone knew that. And it's the same with raising kids. You have to spend a lot of time with your kids, which is fun and enjoyable, but that doesn't mean you have to give up on other dreams. It just means you may have to wait a week or two to work on that next idea.

Well, you talked about the darker, disappointing sides of human nature that we're hoping to overcome so that we don't destroy ourselves. I tend to put a lot of value in the broad, general concept of love: the human capacity for compassion toward each other, for kindness, that longing for human-to-human connection. It connects back to our initial discussion; I tend to see a lot of value in the collective-intelligence aspect. I think some of the magic of human civilization happens there. A party is not as fun when you're alone.

Yeah, I totally agree with you on these issues.

From a neocortex perspective, what role does love play in the human condition?

Well, those are two separate things. From a neocortex point of view, it doesn't impact our thinking about the neocortex. From a human condition point of view, I think it's core. I mean, we get so much
pleasure out of loving people and helping people. You can chalk it up to old-brain stuff, and maybe throw it under the bus of evolution if you want; that's fine. It doesn't impact how I think about how we model the world, but from a humanity point of view I think it's essential.

Well, I tend to give it to the new brain, and I also tend to think that some aspects of it need to be engineered into AI systems, both in their ability to have compassion for humans and in their ability to maximize love in the world between humans. I'm thinking about social networks, about any deep integration between AI systems and humans, specific applications where it's AI and humans together. That's something that's often not talked about in terms of metrics: which metric a system should try to maximize. It seems like one of the most powerful things in societies is the capacity to love.

It's fascinating. I think it's a great way of thinking about it. I have been thinking more about the fundamental mechanisms in the brain, as opposed to the social interaction between humans and AI systems in the future, and if you think about that, you're absolutely right. But that's a complex system. I can have intelligent systems that don't have that component, because they're not interacting with people; they're just running something, or building a building someplace, I don't know. But if you think about interacting with humans, yes, it has to be engineered in there. I don't think it's going to appear on its own.

Well, from a reinforcement learning perspective, do the darker sides of human nature or the better angels of our nature win out?

That's a good question. Statistically speaking, I don't know. I tend to be optimistic and hope that love wins out in the end.
You've done a lot of incredible stuff, and your book is driving toward that fourth question you started with, on the nature of intelligence. What do you hope your legacy is for people reading a hundred years from now? How do you hope they remember your work and this book?

Well, I think as an entrepreneur or a scientist, or any human trying to accomplish things, really all you can do is accelerate the inevitable. If we didn't study the brain, someone else would study the brain. If Elon didn't make electric cars, someone else would do it eventually. And it's not as if, had Thomas Edison not invented the light bulb, we'd still be using candles today. What you can do as an individual is accelerate something beneficial and make it happen sooner than it otherwise would. That's really it; that's all you can do. You can't create a new reality that wasn't going to happen anyway. So from that perspective, I would hope that our work, not just mine but our work in general, would make people look back and say: they really helped make this better future happen sooner. They helped us understand the nature of false beliefs sooner than we might have. Now we're so happy that we have these intelligent machines doing these things, helping us, maybe solving the climate change problem, and they made it happen sooner. I think that's the best I would hope for. Some would say those guys just moved the needle forward a little bit in time.

Well, it feels like the progress of human civilization has a lot of possible trajectories, and individuals who accelerate toward one direction help steer human civilization. In the long stretch of time, all trajectories may be traveled, but it's nice for this particular civilization on Earth to travel down a good one.

Yeah, well, I think you're
right. I mean, look at the whole period of World War II and Nazism. That was a bad sidestep; we went over there for a while. But there is an optimistic view about life: that ultimately it converges in a positive way, that it progresses, even if we have years of darkness. So accelerating the positive could also mean eliminating some bad missteps along the way. I'm an optimist in that way. Despite the fact that we talked about the end of civilization, I think we're going to live for a long time; I hope we are. I think our society in the future is going to be better. We're going to have less discord, fewer people killing each other. We'll live in some way that's compatible with the carrying capacity of the Earth. I'm optimistic these things will happen, and all we can do is try to get there sooner. And at the very least, if we do destroy ourselves, we'll have a few satellites that will tell an alien civilization, or maybe future inhabitants of Earth, that we were once here. Imagine a Planet of the Apes scenario: we kill ourselves, and a million years from now, or a billion years from now, there's another species on the planet wondering about the curious creatures that were once here.

Yeah. Jeff, thank you so much for your work, and thank you so much for talking to me once again.

Well, it's great. I love what you do. I love your podcast. You have the most interesting people, me aside, so it's a real service I think you do, in a very broad sense, for humanity.

Thanks, Jeff.

All right, a pleasure.

Thanks for listening to this conversation with Jeff Hawkins, and thank you to Codecademy, BiOptimizers, ExpressVPN, Eight Sleep, and Blinkist. Check them out in the description to support this podcast. And now, let me leave you with some words from Albert Camus:
"An intellectual is someone whose mind watches itself. I like this, because I'm happy to be both halves: the watcher and the watched. Can they be brought together? This is a practical question we must try to answer."

Thank you for listening, and hope to see you next time.

The following is a conversation with Jeff Hawkins, a neuroscientist seeking to understand the structure, function, and origin of intelligence in the human brain. He previously wrote the seminal book on the subject, titled On Intelligence, and recently a new book called A Thousand Brains, which presents a new theory of intelligence that Richard Dawkins, for example, has been raving about, calling the book, quote, "brilliant and exhilarating." I can't read those two words and not think of him saying it in his British accent. Quick mention of our sponsors: Codecademy, BiOptimizers, ExpressVPN, Eight Sleep, and Blinkist. Check them out in the description to support this podcast.

As a side note, let me say that one small but powerful idea Jeff Hawkins mentions in his new book is that if human civilization were to destroy itself, all of knowledge, all our creations, would go with us. He proposes that we should think about how to save that knowledge in a way that long outlives us, whether that's on Earth, in orbit around Earth, or in deep space, and then to send messages that advertise this backup of human knowledge to other intelligent alien civilizations. The main message of this advertisement is not that we are here, but that we were once here. That little difference somehow was deeply humbling to me: that we may, with some non-zero likelihood, destroy ourselves, and that an alien civilization thousands or millions of years from now may come across this knowledge store, and they would, with only some low probability, even notice it, not to mention be able to interpret it. And the deeper question here, for me, is: what information in all of human knowledge is even essential? Does Wikipedia capture it, or not at all? This thought experiment forces me to
wonder what the things are that we've accomplished, and are hoping to still accomplish, that will outlive us. Is it things like complex buildings, bridges, cars, rockets? Is it ideas like science, physics, and mathematics? Is it music and art? Is it computers, computational systems, or even artificial intelligence systems? I personally can't imagine that aliens wouldn't already have all of these things; in fact, much more, and much better. To me, the only unique thing we may have is consciousness itself, the actual subjective experience of suffering, of happiness, of hatred, of love. If we could record these experiences in the highest resolution, directly from the human brain, such that aliens would be able to replay them, that is what we should store and send as a message. Not Wikipedia, but the extremes of conscious experience, the most important of which, of course, is love. This is the Lex Fridman Podcast, and here is my conversation with Jeff Hawkins.

We previously talked over two years ago. Do you think there are still neurons in your brain that remember that conversation, that remember me and got excited? Like there's a Lex neuron in your brain that finally has a purpose?

I do remember our conversation, or I have some memories of it, and I've formed additional memories of you in the meantime. I wouldn't say there's a neuron or neurons in my brain that know you. There are synapses in my brain that have formed that reflect my knowledge of you and the model I have of you in the world. Whether the exact same synapses were formed two years ago is hard to say, because these things come and go all the time. One thing to know about brains is that when you think of things, you often erase the memory and rewrite it again. So yes, I have a memory of you, and it's instantiated in synapses. There's a simpler way to think about it: you have a model of the world in your head, and that model is continually being updated. I updated it this morning: you offered me
this water; you said it was from the refrigerator. I remember these things. The model includes where we live, the places we know, the words, the objects in the world. It's a monstrous model, and it's constantly being updated, and people are just part of that model, like animals and other physical objects, like events we've experienced. In my mind, there's no special place for memories of humans. Obviously I know a lot about my wife, and friends, and so on, but it's not like there's a special place for humans over here. We model everything, and we model other people's behaviors too. If I have a copy of your mind in my mind, it's just because I've learned how humans behave, and I've learned some things about you, and that's part of my world model.

Well, I also mean the collective intelligence of the human species. I wonder if there's something fundamental to the brain that enables that: modeling other humans and their ideas.

You're jumping into a lot of big topics. Collective intelligence is a separate topic that a lot of people like to talk about, and we can talk about it. It's interesting: we're not just individuals; we live in society and so on. But from our research point of view, we study the neocortex. It's a sheet of neural tissue, about 75% of your brain, and it runs a very repetitive algorithm on a very repetitive circuit. You can apply that algorithm to lots of different problems, but underneath it's all the same thing; we're just building this model. So from our point of view, we wouldn't look for some special circuit buried someplace in your brain that might be related to understanding other humans. It's more like: how do we build a model of anything? How do we understand anything in the world? Humans are just another part of the things we understand.

So there's nothing
special in the brain for the emergent phenomenon of collective intelligence?

Well, I certainly know about collective intelligence; I've heard the terms, I've read about it. As an idea, sure. And I think we have language, which is sort of built into our brains, and that's a key part of collective intelligence. So there are some prior assumptions about the world we're going to live in that are there when we're born; we're not just a blank slate. Did we evolve to take advantage of those situations? Yes. But again, we study only part of the brain, the neocortex. Other parts of the brain are very much involved in societal interactions and human emotions: how we interact, even societal questions about how we treat other people, when we support them, when we're greedy, and things like that.

Certainly the brain is a great place to study intelligence. I wonder if it's the fundamental atom of intelligence.

Well, I would say it's absolutely an essential component, even if you believe in collective intelligence as "that's where it's all happening, that's what we need to study," which I don't believe, by the way. I think it's really important, but I don't think that is the thing. Even if you do believe that, you still have to understand how the brain works. It's more like: we are intelligent individuals, and together our intelligence is much magnified; we can do things we couldn't do individually. But even as individuals we're pretty damn smart. We can model things, understand the world, and interact with it. So to me, if you're going to start someplace, you need to start with the brain. Then you can ask: how do brains interact with each other? What is the nature of language? How do we share models? I've learned something about the world; how do I share it with you? Which is really what you
know as communal intelligence. I know something, you know something, we've had different experiences in the world. I've learned something about brains; maybe I can impart that to you. You've learned something about, say, physics, and you can impart that to me.

But it also comes down to the epistemological question of what knowledge is and how you represent it in the brain. That's where it's going to reside, right? In the brain, or in our writings. It's obvious that human collaboration, human interaction, is how we build societies. But some of the things you talk about and work on, some of those elements of what makes up an intelligent entity, are there within a single person.

Oh, absolutely. We can't deny that the brain is the core element here. At least I can't; I think it's obvious. The brain is the core element in all theories of intelligence. It's where knowledge is represented, it's where knowledge is created. We interact, we share, we build upon each other's work, but without a brain you'd have nothing. There would be no intelligence without brains. So that's where we start. I got into this field because I was curious about who I am: how do I think, what's going on in my head when I'm thinking, what does it mean to know something? I can ask what it means for me to know something independent of how I learned it, from you, or from someone else, or from society. What does it mean for me to know that I have a model of you in my head? What does it mean to know what this microphone does and how it works physically, even though I can't see it right now? How do I know that? What does it mean? How do the neurons do that, at the fundamental level of neurons and synapses and so on? Those are really fascinating questions, and I'd be happy just to understand those if I could.

So in your new book, you talk about our brain, our mind, as being made up of many
brains. The book is called A Thousand Brains: A New Theory of Intelligence. What is the key idea of this book?

The book has three sections, and maybe three big ideas. The first section is all about what we've learned about the neocortex; that's the thousand brains theory. The second section is all about AI, and the third section is about the future of humanity. The big idea of the thousand brains theory, if I had to summarize it in one sentence, is this: we think of the neocortex as learning a model of the world, but what we learned is that there are actually tens of thousands of independent modeling systems going on. Each of what we call a column in the cortex, and there are about 150,000 of them, is a complete modeling system. So it's a collective intelligence in your head, in some sense. The thousand brains theory asks: where do I have knowledge about this coffee cup? Where is the model of this cell phone? It's not in one place. It's in thousands of separate models that are complementary and that communicate with each other through voting. We feel like we're one person; that's our experience, and we can explain that. But in reality, there are lots of these little brains, sophisticated modeling systems, about 150,000 of them in each human brain. And that's a totally different way of thinking about how the neocortex is structured than we, or anyone else, thought of even five years ago.

You mentioned you started this journey by just looking in the mirror, trying to understand who you are. So if you have many brains, who are you, then?

It's interesting: we have a singular perception. We think, I'm just here, I'm looking at you. But it's composed of all these things: there are sounds, there's vision, there's touch, all kinds of inputs. Yet we have this singular
perception. What the thousand brains theory says is that we have all these models, visual models, auditory models, tactile models, and so on, but they vote. In the cortex, you can think of these columns as little grains of rice, 150,000 of them stacked next to each other, and each one is its own little modeling system. But they have long-range connections that go between them, and we call those voting connections, or voting neurons. The different columns try to reach a consensus: what am I looking at? Each one has some ambiguity, but they come to a consensus: oh, it's a water bottle I'm looking at. We are only consciously able to perceive the voting; we're not able to perceive anything that goes on under the hood.

So the voting is what we're aware of: the results of the vote.

Yes. You can imagine it this way. We were just talking about eye movements a moment ago. As I'm looking at something, my eyes are moving about three times a second, and with each movement a completely new input comes into the brain. It's not repetitive; it's not just shifting the image around; it's completely new. I'm totally unaware of it; I can't perceive it. Yet if you looked at the neurons in my brain, they're going on and off with each movement. But the voting neurons are not. The voting neurons are saying: we all agree that, even though I'm looking at different parts of it, this is a water bottle right now, and that's not changing, and it's in some position and pose relative to me. So I have this perception of the water bottle about two feet away from me, at a certain pose. That is not changing, and it's the only part I'm aware of. I can't be aware of the fact that the inputs from the eyes are moving and changing and all this else is happening. These long-range connections are the part we can be conscious of. The individual activity in each column doesn't go anywhere else; it doesn't get
shared anywhere else. There's no way to extract it and talk about it, or even remember it, to say, oh yes, I can recall that. But these long-range connections are the things that are accessible to language and to, say, the hippocampus, our short-term memory systems, and so on. So we're not aware of 95%, maybe even 98%, of what's going on in the brain. We're only aware of this somewhat stable voting outcome of all these things going on under the hood.

So what would you say is the basic element in the thousand brains theory of intelligence? What's the atom of intelligence, when you think about it? Is it the individual brains, and then, what is a brain?

Well, can we just talk about what intelligence is first, and then we can talk about the elements? In my book, intelligence is the ability to learn a model of the world: to build, internal to your head, a model that represents the structure of everything. To know that this is a table, and that's a coffee cup, and this is a gooseneck lamp; to know these things, I have to have a model in my head. I don't just look at them and go, what is that? I already have internal representations of these things in my head, and I had to learn them. I wasn't born with any of that knowledge. We have some lights in this room; that's not part of my evolutionary heritage, it's not in my genes. So we have this incredible model, and the model includes not only what things look like and feel like, but where they are relative to each other and how they behave. I've never picked up this water bottle before, but I know that if I put my hand on that blue thing and turn it, it'll probably make a funny little sound as the little plastic things detach, and then it'll rotate, and it'll look a certain way, and it'll come off. How do I know that? Because I have this model
in my head. So the essence of intelligence is our ability to learn a model, and the more sophisticated our model is, the smarter we are. Not that there's a single scale of intelligence: you can know a lot about things I don't know, and I know about things you don't know, and we can both be very smart. But we both learn a model of the world through interacting with it. That is the essence of intelligence. Then we can ask: what are the mechanisms in the brain that allow us to do that, and what are the mechanisms of learning? Not just the neural mechanisms, but the general process by which we learn a model. That was a big insight for us: how do you actually learn this stuff? It turns out you have to learn it through movement. That's how we learn. You build up this model by observing things, touching them, moving them, walking around the world, and so on.

So either you move or the thing moves, somehow.

Yes. Obviously you can learn some things just by reading a book, but think about it: if I said, here's a new house, go learn it, what do you do? You have to walk from room to room. You have to open the doors, look around, see what's on the left, what's on the right. As you do this, you're building a model in your head. That's just what you're doing. You can't sit there and say, I'm going to grok the house. No. You don't even want to just read some description of it. You literally, physically interact with it. It's the same with a smartphone: if I want to learn a new app, I touch it, I move things around, I see what happens when I do things with it. That's the basic way we learn in the world.

And by the way, when you say "model," you mean something that can be used for predicting the future?

It's used for prediction and for
behavior and planning, and it does a pretty good job at those. Here's the way to think about the model; a lot of people get hung up on this. You can imagine an architect making a physical model of a house, a small one. Why do they do that? Because you can imagine what the house would look like from different angles: look at it from here, look in there. You can also ask, how far is it from the garage to the swimming pool? You can look at the model and say, what would be the view from this location? We build these physical models so we can imagine the future and imagine behaviors. Now we can take that same model and put it in a computer. Today people build models of houses in a computer, and they do that using, and we'll come back to this term in a moment, reference frames. You assign a reference frame for the house, and you assign different parts of the house to different locations, and then the computer can generate an image and say, okay, this is what it looks like from this direction. The brain is doing something remarkably, surprisingly similar to this. It's using reference frames; it's building something similar to a model in a computer, which has the same benefits as building a physical model. It allows me to say, what would this thing look like in a different orientation? What would likely happen if I pushed this button, which I've never pushed before? Or, how would I accomplish something? Say I want to convey a new idea I've learned; how would I do that? I can imagine it in my head: I could talk about it, I could write a book, I could do some podcasts, I could tell my neighbor, and I can imagine the outcomes of all these things before I do any of them. That's what the model lets you do. It lets us plan the future and imagine the consequences of our actions.
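The house analogy above can be sketched in a few lines of Python. This is only a toy illustration, not Numenta's actual cortical algorithm; the class and method names (`ReferenceFrameModel`, `learn`, `imagine`) are invented for this example. The idea it shows: a model is a set of features stored at locations in a reference frame, and querying it lets you "imagine" what you would sense at a location without going there.

```python
# Toy sketch of a reference-frame model (illustrative only; this is not
# the actual cortical algorithm, and all names here are invented).
# A model stores features at locations; querying it "imagines" what
# would be sensed at a location, i.e. it makes a checkable prediction.

class ReferenceFrameModel:
    def __init__(self, name):
        self.name = name
        self.features = {}          # location -> feature sensed there

    def learn(self, location, feature):
        """Learn through movement: visit a location, store what you sense."""
        self.features[location] = feature

    def imagine(self, location):
        """Predict the sensation at a location without moving there."""
        return self.features.get(location)

# Learn a house the way described above: walk around and look.
house = ReferenceFrameModel("house")
house.learn((0, 0), "front door")
house.learn((5, 0), "garage")
house.learn((5, 9), "swimming pool")

# Planning: imagine a view without physically being there.
print(house.imagine((5, 9)))    # prints: swimming pool
print(house.imagine((9, 9)))    # prints: None (no stored prediction)
```

The same structure supports using prediction as a test of the model: when `imagine` returns something that contradicts what is actually sensed, that mismatch is the signal to update `features`.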
You asked about prediction. Prediction is not the goal of the model; prediction is an inherent property of it, and it's how the model corrects itself.

So prediction is fundamental to intelligence?

It's fundamental to building a model, and the model is the intelligence. Let me go back and be very precise about this. You can think of prediction two ways. One is, hey, what would happen if I did this? That type of prediction is a key part of intelligence. But there are also predictions like, what is this water bottle going to feel like when I pick it up? That doesn't seem very intelligent. One way to think about prediction is that it's a way for us to learn where our model is wrong. If I picked up this water bottle and it felt hot, I'd be very surprised. If it was very light, I'd be surprised. If I turned the top and found I had to turn it the other way, I'd be surprised. Right now I have a prediction: I'm going to open it and drink some water. Okay, there it is; it opens as I expected. But what if I had to turn it the other way, or it split in two? Then I'd say, oh my gosh, I misunderstood this; I didn't have the right model of this thing. My attention would be drawn to it. I'd be looking at it, going, how did that happen? Why did it open that way? And I would update my model, just by looking at it and playing around with it, and say, this is a new type of water bottle.

You're talking about somewhat complicated things like a water bottle, but this also applies to basic vision, to just seeing things. It's almost like a precondition of perceiving the world is predicting it. Everything you see is first passed through your prediction.

Everything you see and feel. In fact, this is the insight I had back in the early 80s, and other people reached the
same idea: for every sensory input you get, not just vision but touch and hearing, you have an expectation about it, a prediction. Sometimes you can predict very accurately, and sometimes you can't. I can't predict the next word that's going to come out of your mouth, but as you talk, I make better and better predictions, and if you suddenly started talking about some strange topic, I'd be very surprised. So I have this background prediction going on all the time, for all my senses. Again, the way I think about it is that this is how we learn. It's the test of our understanding. Our predictions are our test: is this really a water bottle? If it is, I shouldn't see a little finger sticking out the side, and if I did see a little finger sticking out, I'd say, what the hell is going on? That's not normal.

That's fascinating; let me linger on this for a second. It really, honestly feels like prediction is fundamental to everything: to the way our mind operates, to intelligence. It's a different way to see intelligence, where everything starts at prediction, and prediction requires a model.

You can't predict something unless you have a model of it.

Right. But the action is prediction; the thing the model does is prediction.

Yes, and you can extend it to things like, what would happen if I went and did this today? What would that be like? Or you can extend predictions to, I want to get a promotion at work; what action should I take? You can say, if I did this, I predict what might happen; if I spoke to this person, I predict what might happen. So it's not just low-level predictions.

It's all prediction. Like a black box, you can ask it basically any question, low-level or high-level.

So we started with that observation, this non-stop prediction, and I write about this in the book. Then we asked, how do neurons
actually make predictions, physically? What does a neuron do when it makes a prediction, and what does the neural tissue do when it makes predictions? And then we asked, what are the mechanisms by which we build a model that allows us to make predictions? So we started with prediction as the fundamental research agenda, in the sense that if we understand how the brain makes predictions, we'll understand how it builds these models and how it learns, and that's the core of intelligence. Prediction was the key that got us in the door; that was our research agenda: understand prediction.

So in this whole process, where would you say intelligence originates? If we look at things that are much less intelligent than humans, and we follow the process of evolution up to humans, is there a point where this magic thing appears, a model that's able to predict, something that starts to look a lot more like intelligence? Richard Dawkins wrote an introduction to your book, an excellent introduction that puts a lot of things into context, and it's fun to look at the parallels between your book and Darwin's On the Origin of Species. Darwin wrote about the origin of species; so what is the origin of intelligence?

Well, we have a theory about it, and it's just that, a theory. The theory goes as follows. As soon as living things started to move, not just floating in the sea, not just a plant rooted someplace, there was an advantage to moving intelligently, to moving in certain ways. There are some very simple things you can do: bacteria or single-cell organisms can move up a gradient toward a source of food, or something like that. But an animal that knows where it is and where it's been, and how to get back to a place; an animal that can say, there was a source of food someplace, how do I get to it, or there was a danger over there, or there was a mate,
how do I get to them? There was a big evolutionary advantage to that. So early on there was pressure to start understanding your environment: where am I, where have I been, and what happened in those different places? We still have this neural mechanism in our brains. In mammals it's in the hippocampus and entorhinal cortex, older parts of the brain, and these are very well studied. We build a map of our environment; neurons in these parts of the brain know where I am in this room, where the door is, and so on.

So a lot of other mammals have this? All mammals have this, and almost any animal that knows where it is and can get around must have some mapping system, some way of saying, I've learned a map of my environment. I have hummingbirds in my backyard, and they go to the same places all the time. They must know where they are; they're not just randomly flying around, they know particular flowers they come back to. So we all have this, and it turns out it's very tricky to get neurons to do this, to build a map of an environment. There are these famous studies, still a very active area, about place cells and grid cells and other types of cells in the older parts of the brain, and how they build these maps of the world. It's really clever; it's obviously been under a lot of evolutionary pressure over a long period of time to get good at it, so animals know where they are.

What we think happened, and there's a lot of evidence suggesting this, is that this mechanism for learning a map of a space was repackaged: the same types of neurons were repackaged into a more compact form, and that became the cortical column. It was, in some sense, genericized, if that's a word. It was turned from a very specific thing about learning maps of environments into learning maps of anything, learning a
model of anything, not just your space but coffee cups and so on. It got repackaged into a more compact, more universal version, and then replicated. The reason we're so flexible is that we have a very generic version of this mapping algorithm, and we have about 150,000 copies of it.

Sounds a lot like the progress of deep learning. How so? Take neural networks that seem to work well for a specific task, compress them, multiply them by a lot, and stack them; it's like the story of transformers. Yeah, but in those networks you end up replicating an element, yet you still need the entire network to do anything. Here, each individual element is a complete learning system. This is why you can take a human brain, cut it in half, and it still works. It's pretty amazing. It's fundamentally distributed? Fundamentally distributed, complete modeling systems.

That's the story we like to tell, and I would guess it's largely right; there's a lot of evidence supporting this evolutionary story. The thing that brought me to this idea is that the human brain got big very quickly. That led to the proposal, a long time ago, that there's a common element: instead of creating new things, evolution just replicated something. We're also extremely flexible; we can learn things we had no history with. That tells you the learning algorithm is very generic, very universal, because it doesn't assume any prior knowledge about what it's learning. You combine those things and ask, where did that universal algorithm come from? It had to come from something that wasn't universal, something more specific. Anyway, this led to our hypothesis that you would find grid cell and place cell equivalents in the neocortex. When we first published our
first papers on this theory, we didn't know of evidence for that. It turns out there was some, but we didn't know about it. Since then we became aware of evidence for grid cells in parts of the neocortex, and now new evidence keeps coming out; there were some interesting papers just this January. One of our predictions was that if this evolutionary hypothesis is correct, we would see grid cell and place cell equivalents, cells that work like them, in every column in the neocortex, and that's starting to be seen.

Why is it important that they're present? Because it tells us about the evolutionary origin of intelligence. Our theory is that these columns in the cortex all work on the same principles; they're modeling systems. And it's hard to imagine how neurons learn these models of things; we can talk about the details if you want. But there's this other part of the brain that we know learns models of environments. So we asked: could the mechanism that learns to model this room be used to learn a model of the water bottle? Is it the same mechanism? We said it's much more likely the brain is using the same mechanism, in which case it would have these equivalent cell types. The whole theory is built on the idea that these columns have reference frames and are learning these models, and these grid cells create the reference frames. The major predictive part of the theory, in some sense, is that we will find these equivalent mechanisms in each column in the neocortex, which tells us that's what they're doing: learning sensorimotor models of the world. We were pretty confident that would happen, but now we're seeing the evidence.

So the evolutionary process, nature, does a lot of copy and paste and sees what
happens. Yeah, there's no direction to it, but it just found that if you take these elements and make more of them, and hook them up to the eyes and the ears, that seems to work pretty well.

To take a quick step back to our conversation about collective intelligence: do you sometimes see that as just another copy-and-paste step, copying these brains across many humans and creating social structures that almost operate as a single brain? I wouldn't have said it, but you said it; it sounded pretty good. So to you, the brain is fundamentally its own thing? Our goal is to understand how the neocortex works. You can argue how essential that is to understanding the human brain, because it's not the entire brain; you can argue how essential it is to understanding human intelligence, or to a sort of communal intelligence. Our goal was to understand the neocortex.

So what is the neocortex, and where does it fit among the various things the brain does? How important is it to you? As I mentioned at the beginning, it's about 70 to 75 percent of the volume of the human brain, so it dominates our brain in terms of size, though not in terms of number of neurons. Size isn't everything, Jeff. I know, but it's not nothing. We know that all high-level vision, hearing, and touch happens in the neocortex. We know that all language occurs and is understood in the neocortex, whether that's spoken language, written language, sign language, the language of mathematics, the language of physics, music. We know that all high-level planning and thinking occurs in the neocortex. If I were to ask what part of your brain designed a computer and
understands programming and creates music, it's all the neocortex. That's an undeniable fact. But other parts of the brain are important too: our emotional states, regulating our body. The way I like to look at it is, can you understand the neocortex without the rest of the brain? Some people say you can't, and I think you absolutely can. It's not that they aren't interacting, but you can understand them separately. Can you understand the neocortex without understanding the emotion of fear? Yes, you can understand how the system works; it's just a modeling system. I make the analogy in the book that it's like a map of the world, and how that map is used depends on who's using it. How the map of the world in our neocortex manifests in us as humans depends on the rest of the brain: what our motivations and desires are, whether I'm a nice guy or not, a cheater or not, how important different things are in my life. But the neocortex can be understood on its own. I say that as a neuroscientist who knows there are all these interactions; I don't want to say we don't know or think about them, but from a layperson's point of view, you can say it's a modeling system. I don't tend to think much about the communal aspect of intelligence, which you've brought up a number of times; that's not really been my concern.

I just wonder if there's a continuum from the origin of the universe, these pockets of complexity that form, living organisms. If you look at humans, we feel like we're at the top, but I wonder if every living pocket of complexity probably thinks they're, pardon the French, the shit. Well, if they're thinking. Well then, what is thinking?
All right, in their sense of the world, their sense is that they're at the top of it. But you're bringing up the problems of complexity and complexity theory, which is a huge, interesting problem in science, and I think we've made surprisingly little progress in understanding complex systems in general. The Santa Fe Institute was founded to study this, and even the scientists there will say it's really hard; that science hasn't really congealed yet. We're still trying to figure out the basic elements of it: where complexity comes from, what it is, how you define it. Whether it's DNA creating bodies, or phenotypes, or individuals creating societies, or ants, or markets, it's a very complex thing. I'm not a complexity theorist. The brain itself is a complex system, and can we understand that? I think we've made a lot of progress understanding how the brain works, but I haven't stepped back to ask where we are on the complexity spectrum.

That's a great question. I'd prefer the answer to be that we're not special. If we're honest, most likely we're not special; if there is a spectrum, we're probably not in some kind of significant place. There's one way we could say we are special, and again, only here on earth. If we think about knowledge, what we know, human brains are clearly the only brains that have certain types of knowledge. We're the only brains on this earth that understand what the earth is, how old it is, what the universe is as a whole. We're the only organisms that understand DNA and the origins of species. No other species on this planet has that
knowledge. If we think about one of the endeavors of humanity as understanding the universe as much as we can, our species is undeniably further along. Whether our theories are right or wrong we can debate, but at least we have theories. We know what the sun is and how its fusion works, what black holes are; we know the general theory of relativity, and no other animal has any of this knowledge. In that sense we're special. Are we special in terms of the hierarchy of complexity in the universe? Probably not.

Can we look at a neuron? You say that prediction happens in the neuron; what does that mean? The neuron is traditionally seen as the basic element of the brain. I mentioned earlier that prediction was our research agenda. We said, okay, how does the brain make a prediction? I'm about to grab this water bottle, and my brain is predicting what I'm going to feel on all parts of my fingers; if I felt something really odd on any part, I'd notice it. So my brain is predicting what it's going to feel as I grab this thing. How does that manifest itself in neural tissue? We've got brains made of neurons; there are chemicals, there are spikes and connections. Where is the prediction going on? One argument could be that when I'm predicting something, a neuron must be firing in advance: this neuron represents what you're going to feel, and it's sending a spike. Certainly that happens to some extent. But our predictions are so ubiquitous, we're making so many of them that we're totally unaware of, the vast majority, that we had to figure out where these could be happening. I won't walk you through the whole story unless you insist, but we came to the
realization that most of your predictions are occurring inside individual neurons, especially the pyramidal cells. There's a property of neurons: most people know that a neuron is a cell that has a spike called an action potential, which sends information. But we now know there are spikes internal to the neuron, called dendritic spikes. They travel along the branches of the neuron and they don't leave it; they're internal only. There are far more dendritic spikes than there are action potentials, far more, and they're happening all the time. What we came to understand is that those dendritic spikes are actually a form of prediction. An internal spike is the neuron saying, I expect I might become active shortly; I predict you're going to start generating external spikes soon. We wrote a paper in 2016 explaining how this manifests itself in neural tissue and how it all works together, and there's a lot of evidence supporting it. So we think most of these predictions are internal; that's why you can't perceive them, they're internal to the neuron.

From understanding the prediction mechanism of a single neuron, do you think there are deep insights to be gained about the prediction capabilities of the mini brains within the bigger brain? Oh yeah. Having a prediction inside an individual neuron is not, by itself, that useful; so what? The way it manifests in neural tissue is this: a neuron's spikes are very singular events, and if a neuron is predicting that it's going to be active, it spikes a little bit sooner, just a few milliseconds sooner, than it would have otherwise. I give
the analogy in the book of a sprinter on the starting blocks in a race. When someone says ready, set, you get up and you're ready to go, and then when the race starts you get a slightly earlier start. That ready-set is like the prediction; the neuron is ready to go quicker. What happens is that when you have a whole bunch of neurons together, all getting these inputs, the ones in the predictive state, the ones anticipating becoming active, fire sooner if they do become active, and they disable everything else. It leads to different representations in the brain. So it's not isolated to the neuron: the prediction occurs within the neuron, but the network behavior changes. Under different predictions, under different contexts, the same input gets different representations. This is a key part of how the theory works.

So, the thousand brains theory: if you were to count the number of brains, how would you do it? The thousand brains theory says that basically every cortical column in your neocortex is a complete modeling system, and when I ask where I have a model of something like a coffee cup, it's not in one of those columns; it's in thousands of them. There are thousands of models of coffee cups; that's the thousand brains. There's a voting mechanism, which leads to the thing you're conscious of, your singular perception. That's the thousand brains theory. The details of how we got to that theory are complicated; we didn't just think of it one day. One of those details was asking how a model makes predictions, and the predictive neurons we just talked about are part of this theory.
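The network effect described here, where neurons in a predictive state fire a few milliseconds sooner and suppress their neighbors, yielding context-dependent sparse representations, can be sketched as a toy simulation. This is a loose illustration in Python, not Numenta's actual implementation; the grouping of cells into minicolumns, the cell counts, and the burst-on-surprise rule are simplifying assumptions made for the example.

```python
# Toy sketch of context-dependent representations via predictive states.
# Each minicolumn contains several cells; a feed-forward input activates a
# whole minicolumn. If context has put a cell into a "predictive"
# (depolarized) state, it fires first and inhibits its siblings; if nothing
# was predicted, every cell in the column fires (a "burst" for surprise).

def activate(active_columns, predictive_cells, cells_per_column=4):
    """Return the set of (column, cell) pairs that fire."""
    fired = set()
    for col in active_columns:
        predicted = [c for c in range(cells_per_column)
                     if (col, c) in predictive_cells]
        if predicted:
            # Predicted cells spike a bit sooner and suppress the rest.
            fired.update((col, c) for c in predicted)
        else:
            # Unanticipated input: the whole column bursts.
            fired.update((col, c) for c in range(cells_per_column))
    return fired

# The same feed-forward input (columns 0 and 1) under two different contexts:
context_a = {(0, 2), (1, 0)}   # context A predicts specific cells
context_b = {(0, 3)}           # context B predicts a different cell
print(activate([0, 1], context_a))  # sparse, context-A-specific code
print(activate([0, 1], context_b))  # a different code for the same input
```

The point of the sketch is the one made in the conversation: the prediction lives inside individual cells, but because predicted cells win the race and inhibit the others, the network-level representation of an identical input differs by context.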
It's like saying, oh, it's just a detail, but it was a crack in the door: how are we going to figure out what these neurons are doing, what is going on here? We looked at prediction because we know it's ubiquitous; we know every part of the cortex is making predictions, so whatever the predictive system is, it's going to be everywhere. We know there are a gazillion predictions happening at once, so let's start teasing apart how neurons could be making them. That built up, step by step, over years, to what we now have, the thousand brains theory, which is complex even though I can state it simply.

And where do reference frames fit in? I mentioned earlier the model of a house: if you're going to build a model of a house in a computer, it has a reference frame, like Cartesian coordinates with x, y, and z axes, so I can say the front door is at this x-y-z location, the roof is at that location, and so on. That's a type of reference frame. It turns out that to make a prediction, and I walk through this thought experiment in the book, where I was predicting what my finger would feel when it touched a coffee cup, a ceramic coffee cup, though this one will do. What I realized is that to predict what my finger is going to feel, the rim feels different from the side, different from the handle or the bottom, the cortex needs to know where the tip of the finger is relative to the coffee cup, exactly relative to it. And to do that, it has to have a reference frame for the coffee cup; it has to have a way of representing the
location of my finger relative to the coffee cup. Then we realized, of course, every part of your skin has to have a reference frame relative to the things it touches, and we did the same thing with vision. The idea is that a reference frame is necessary to make a prediction when you're touching or seeing something while moving your eyes or fingers; it's a requirement for knowing what to predict. If I have a structure and I'm going to make a prediction, I have to know where on it I'm looking or touching. Then we asked, how do neurons make reference frames? It's not obvious; x-y-z coordinates don't exist in the brain, that's just not how it works. That's when we looked at the older part of the brain, the hippocampus and the entorhinal cortex, where we knew there's a reference frame for a room, a reference frame for an environment; remember I talked earlier about how you can make a map of this room. So we said, they are implementing reference frames there, and we knew reference frames needed to exist in every cortical column. That was a deductive step; we deduced it has to be so.

So you take the old mammalian ability to know where you are in a particular space and you start applying it to higher and higher levels. You first apply it to the physical, like where your finger is. Here's how I think about it: the old part of the brain asks where my body is in this room; the new part of the brain asks where my finger is relative to this object, or where this patch of my retina is relative to the little corner of the object I'm looking at. And then we take the same thing and apply it to concepts: mathematics, physics, humanity, whatever you want to think about, eventually pondering your own mortality. The point is, when we think about the world, when we
have knowledge about the world, how is that knowledge organized, Lex? Where is it in your head? The answer is: it's in reference frames. The way I learn the structure of this water bottle is where its features are relative to each other; when I think about history or democracy or mathematics, the same basic underlying structure is at work. There are reference frames to which you assign the knowledge. In the book I go through examples like mathematics, language, and politics, but the evidence in the neuroscience is very clear: the same mechanism we use to model this coffee cup is used to model high-level thoughts, the demise of humanity, whatever you want to think about.

It's interesting to consider how different the representations of those higher-level concepts are, in terms of reference frames, compared with spatial ones. It's a different application, but it's the exact same mechanism. But isn't there some aspect of higher-level concepts that seems hierarchical? They seem to integrate a lot of information. So do physical objects. Take this water bottle; I'm not partial to this brand, but it's a Fiji water bottle and it has a logo. I use this example in my book with our company's coffee cup, which has a logo on it. This object is hierarchical: it's got a cylinder and a cap, but then it has this logo on it, and the logo has a word, the word has letters, and the letters have different features. And I don't have to think about this. When I note there's a Fiji logo on this water bottle, I don't have to go through and say, oh, the Fiji logo has an F and an I and a J and an I, and there's a hibiscus flower with stamens. I just incorporate all of that in some sort of hierarchical representation, saying,
put this logo on this water bottle; the logo has a word, and the word has letters. It's all hierarchical, and it's amazing that the brain instantly does all of it. The idea that there's water, a liquid you can drink when you're thirsty; the idea that there are brands; all of that information is instantly built in once you perceive the thing.

I want to get back to your point about hierarchical representation. The world itself is hierarchical. I can take this microphone in front of me: I know inside there are electronics, some wires, a little diaphragm that moves back and forth. I don't see that, but I know it. Everything in the world is hierarchical. Walk into a room: it's composed of components. The kitchen has a refrigerator, the refrigerator has a door, the door has a hinge, the hinge has screws and a pin. The modeling system that exists in every cortical column learns the hierarchical structure of objects, so it's a very sophisticated modeling system. In something the size of a grain of rice, it's hard to imagine, but that grain of rice has a hundred thousand neurons in it; it's very sophisticated. That same mechanism that can model a water bottle or a coffee cup can model conceptual objects as well. That's the beauty of the discovery Vernon Mountcastle made many years ago: there's a single cortical algorithm underlying everything we're doing.

So common-sense concepts and higher-level concepts are all represented in the same way, with the same mechanisms? Yeah, it's a little like computers. All computers are universal Turing machines, even the teeny one in my toaster and the big one running some cloud server
someplace; they're all running on the same principle, just applied to different things. The brain is built on the same principle too: it's all about learning structured models using movement and reference frames, and it can be applied to something as simple as a water bottle or a coffee cup, or to thinking about the future of humanity. And why do you have a hedgehog on your desk? I don't know; nobody knows. I think it's a hedgehog, that's right. It's the hedgehog in the fog; it's a Russian reference.

Does this give you any inclination, or hope, about how difficult it is to engineer common-sense reasoning? Looking at the brain, is it a marvel of engineering, or is it pretty dumb stuff stacked on top of itself over and over? Can it be both? I don't know if it can be both, because if it's an incredible engineering job, that means evolution did a lot of work. Yeah, but then it just copied that. As I said earlier, figuring out how to model something like a space is really hard, and evolution had to go through a lot of tricks. These cells I was talking about, the grid cells and place cells, are really complicated; this is not simple stuff. This neural tissue works on really unexpected, weird mechanisms, but evolution figured it out, and now you can just make lots of copies of it.

So it's an interesting idea, a lot of copies of a basic mini brain, but the question is how difficult it is to find that mini brain you can copy and paste effectively. Today we know enough to build this. I know the steps we have to go through; there are still some engineering problems to solve, but we know enough. This is not, oh, it's an interesting idea, we have to think about it for another few decades. We actually understand it in pretty good detail, not
all the details, but most of them. It's complicated, but it is an engineering problem. In my company we are working on that; we basically have a road map for how to do it. It's not going to take decades; it's more like a few years, optimistically, but I think that's possible. Complex things, if you understand them, you can build them.

In which domain do you think it's best to build them? Are we talking about robotics, entities that operate in and interact with the physical world? Entities that operate in the digital world? Or something more specific, like what's done in the machine learning community with natural language or computer vision? Where do you think is easiest? The first two more than the third, I would say. Again, let's use computers as an analogy. The pioneers of computing, people like John von Neumann and Turing, created this thing we now call the universal Turing machine. Did they know how it was going to be applied, where it would be used? Could they envision any of the future? No. They just said, this is a really interesting computational idea about algorithms and how you can implement them in a machine. We're doing something similar today: we are building a sort of universal learning principle that can be applied to many, many different things.

But the robotics piece of that, the interactive piece? All right, let's be specific. You can think of a cortical column as what we call a sensorimotor learning system: there's a sensor, and it's moving. That sensor can be physical, like my finger moving in the world, or my eye physically moving. It can also be virtual. An example would be a system that lives on the internet and
actually samples information on the internet and moves by following links; that's a sensorimotor system. Something that echoes, in a very loose sense, the process of a finger moving along a cup. Learning is inherently about discovering the structure of the world, and to discover that structure you have to move through the world, even if it's a virtual world, even a conceptual world; it doesn't exist in one place, it has structure to it.

Here are a couple of predictions that get at what you're talking about. In humans, the same algorithm does robotics: it moves my arms, my eyes, my body. So in the future, to me, robotics and AI will merge; they're not going to be separate fields, because the algorithms for really controlling robots are going to be the same algorithms we have in our brain, these sensorimotor algorithms. We're not there today, but I think that's going to happen. And not all AI systems will be robots: you can have systems with very different types of embodiment, some with physical movements, some with non-physical movements. It's a very generic learning system. Again, it's like computers: the Turing machine doesn't say how it's supposed to be implemented, doesn't tell you how big it is, doesn't tell you what to apply it to; it's a computational principle. The cortical column equivalent is a computational principle about learning, how you learn, and it can be applied to a gazillion things. This is why I think the impact of AI is going to be as large as, if not larger than, computing has been in the last century, by far, because it's getting at a fundamental thing. It's not a vision system or a hearing system; it is a learning system, a fundamental principle of how
you learn the structure of the world, how you gain knowledge and become intelligent. That's what the thousand brains theory says is going on. We have a particular implementation in our heads, but it doesn't have to be like that at all.

Let me ask it another way: what do increasingly intelligent AI systems do with us humans? How hard is the human-in-the-loop problem, the finger-on-the-coffee-cup equivalent of having a conversation with a human being? How hard is it to fit into our little human world? I think those are engineering problems; I don't think it's a fundamental problem. I could ask you the same question: how hard is it for computers to fit into a human world? That's essentially what I'm asking. How elitist are we as humans; do we try to keep such systems out? I'm not sure that's the right question. Look at computers as an analogy: computers are a million times faster than us, they do things we can't understand, and most people have no idea what's going on when they use them. How do we integrate them into our society? Well, we don't think of them as their own entities; they're not living things; we don't afford them rights; and yet we rely on them. Our survival as seven billion people relies on computers.

Don't you think that's a fundamental problem, that we see them as something we can't give rights to? For robots, computers, intelligent systems, it feels like for them to operate successfully we would need to start thinking about questions like whether such an entity should have rights. I don't think so. I think it's tempting to think that way, but personally, hardly anyone thinks that for computers today. No one says, oh, this thing
needs a right i shouldn't be able to turn it off or you know if i throw it in the trash can you know and hit it with a sledgehammer i might perform a criminal act no no one thinks that um and now we think about intelligent machines which is where you're going um and and all of a sudden like well now we can't do that i think the basic problem we have here is that people think intelligent machines will be like us they're going to have the same emotions as we do the same feelings as we do what if i can build an intelligent machine that have absolutely could care less about whether it was on or off or destroyed or not it just doesn't care it's just like a map it's just a modeling system it has no desires to live nothing is it possible to create a system that can model the world deeply and not care about whether it lives or dies absolutely no question about it to me that's not 100 percent obvious it's obvious to me so okay we can debate it if you want yeah where does your where does your desire to live come from it's an old evolutionary design i mean we could argue does it really matter if we live or not objectively no right we're all going to die eventually um but evolution makes us want to live evolution makes us want to fight to live evolutionists want to care and love one another and to care for our children and our relatives and our family and and so on and those are all good things but they come about not because we're smart because we're animals that grew up you know the the hummingbird in my backyard cares about its offspring you know the every living thing in some sense cares about you know surviving but when we talk about creating intelligent machines we're not creating life we're not creating evolving creatures we're not creating living things we're just creating a machine that can learn really sophisticated stuff and that machine it may even be able to talk to us but it doesn't it's not going to have a desire to live unless somehow we put it into that system 
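Hawkins's claim — that a system can model the world deeply without any drive for self-preservation — can be illustrated with a deliberately minimal sketch. This is a toy sequence model invented for the example (the class and method names are hypothetical, not from Hawkins's work): it only accumulates transition statistics and answers prediction queries, and nothing in its state encodes a preference about being on or off.

```python
from collections import defaultdict

class WorldModel:
    """A toy predictive model: it learns transition statistics from
    observations and answers prediction queries. It has no reward,
    no objective, and no representation of its own continued existence."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, state):
        # Learn: count how often `state` follows the previous state.
        if self.prev is not None:
            self.counts[self.prev][state] += 1
        self.prev = state

    def predict(self, state):
        # Answer a query: the most frequently observed successor of `state`.
        nexts = self.counts[state]
        return max(nexts, key=nexts.get) if nexts else None

# The model just models; "staying on" is not a concept it has.
m = WorldModel()
for s in ["red", "green", "red", "green", "red", "green"]:
    m.observe(s)
print(m.predict("red"))  # prints: green
```

The point of the sketch is the separation Hawkins keeps drawing: learning and prediction live entirely in `observe`/`predict`, while any goal or drive would have to be a separate component that a designer deliberately bolts on.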
Well, there's learning, right? The thing is—

But you don't learn to want to live. That's built into you.

People like Ernest Becker argue — okay, the finiteness of life, the way we think about it, is something we learn.

Perhaaps, yes — and some people decide they don't want to live. But the desire to live is built in. DNA.

What I'm trying to get at is: in order to accomplish goals, it's useful to have the urgency of mortality. It's what the Stoics talked about — meditating on your mortality. It might be a very useful thing: to have the urgency of death, to conceive of yourself as an entity that operates in this world and will eventually no longer be part of it. Conceiving of yourself as a conscious entity might be very useful for a system that makes sense of the world. Otherwise you might get lazy.

Well, okay — we're going to build these machines, right? We're talking about building AIs.

But we're building the equivalent of the cortical columns — the neocortex — and the question is where they arrive at, because we're not hard-coding everything in.

Well, if you build the neocortex equivalent, it will not have any of these desires or emotional states. Now, you can argue that the neocortex won't be useful unless I give it some agency, some desire, some motivation — otherwise it'll be lazy and do nothing. You could argue that. But on its own, it's not going to do those things. It's not going to sit there and say, "I understand the world, therefore I care to live." No — it's just going to say, "I understand the world."

Why is that obvious to you? Okay, let me ask it this way: do you think it's possible it will at least assign agency to itself, and perceive itself in this world as a conscious entity, as a useful way to operate in the world and make sense of it?

I think an intelligent machine could be conscious, but that does not, again, imply any of these desires and goals you're worried about.

We can talk about what it means for a machine to be conscious — and, by the way, not worry about it but get excited about it. It's not necessarily something we should worry about.

I think there's a legitimate question to ask: if you build this modeling system, what's it going to model? What's its desire, what's its goal — what are we applying it to? That's an interesting question, and it depends on the application. It's not something inherent to the modeling system; it's something we apply to the modeling system in a particular way. If I wanted to make a really smart car, it would have to know about driving cars and what's important in driving. It's not going to figure that out on its own. It's not going to sit there and say, "I've understood the world and I've decided—" No, no, no: we have to tell it. So imagine I make this car really smart. It learns your driving habits, it learns about the world. Is it one day going to wake up and say, "you know what, I'm tired of driving and doing what you want — I think I have better ideas about how to spend my time"? No, it's not going to do that.

Part of me is playing a little bit of devil's advocate here, but part of me is also trying to think this through, because I've studied cars quite a bit, and I've studied pedestrians and cyclists quite a bit, and there's part of me that thinks there needs to be more intelligence than we realize in order to drive successfully. The game theory of human interaction seems to require some deep understanding of human nature. When a
pedestrian crosses the street, there's some sense in which they look at a car, usually, and then look away. There's some sense in which they're saying, "I believe you're not going to murder me — you don't have the guts to murder me." That's the little dance of pedestrian-car interaction: "I'm going to look away, and I'm going to put my life in your hands, because I think you're human and you're not going to kill me." And the car, in order to successfully operate on Manhattan streets, has to say, "no, no, I am going to kill you" — a little bit. There's this weird inkling of mutual murder, and that's a dance, and somehow they successfully operate through it.

Do you think you were born with that, or did you learn that social interaction?

I think it might have a lot of the same elements you're talking about: we're leveraging things we were born with and applying them in context.

All right — I would have said that kind of interaction is learned, because people in different cultures have different interactions like that. If you cross the street in different cities around the world, people have different ways of interacting. I would say that's learned, and I would say an intelligent system could learn that too. An intelligent system can understand humans — just like I can study an animal and learn something about it. I could study apes and learn something about their culture, and so on. I don't have to be an ape to know that; maybe not completely, but I can understand something. So an intelligent machine can model that — that part of the world, those interactions. The question we're trying to get at is: will the intelligent machine have its own personal agency beyond what we assign to it, its own personal goals? Will it evolve and create these things? My confidence comes from understanding the mechanisms I'm talking about creating. This is not hand-wavy stuff; it's down in the details. I'm going to build it, and I know what it's going to look like, how it's going to behave, the kinds of things it can do and the kinds of things it can't do. Just like when I build a computer, I know it's not going to decide, on its own, to put another register inside itself. It can't do that — no way, no matter what your software does, it can't add a register to the computer.

So when we build AI systems, we have to make choices about how we embed them. I talk about this in the book: an intelligent system is not just the neocortex equivalent. You have to have that, but it also has to have some kind of embodiment, physical or virtual; it has to have some sort of goals; it has to have some ideas about dangers, about things it shouldn't do. We build safeguards into systems. We have them in our bodies; we've put them into cars. My car follows my directions until the moment it sees I'm about to hit something — then it ignores my directions and puts the brakes on. We can build those things in, and how to build them in is a very interesting problem. My differing opinion about the risks of AI from most people is that people assume those things will just appear automatically — that it'll evolve, that intelligence itself begets that stuff or requires it. But it doesn't. The neocortex equivalent doesn't require this. The neocortex equivalent just says: "I'm a learning system. Tell me what you want me to learn, ask me questions, and I'll tell you the answers." Again, it's like a map. A map has no intent, but you can use it to solve problems.

Okay, so engineering the neocortex in itself is just creating an intelligent prediction system—

Modeling system, sorry.
Modeling system, yeah. You can use it to make predictions, but you can also put it inside a thing that's actually acting in this world.

You have to put it inside something. Again, think of the map analogy: a map on its own doesn't do anything. It's just inert. It can learn, but we have to embed it somehow in something that does something.

So what's your intuition here? You had a conversation with Sam Harris recently where you had a bit of a disagreement, and you're sticking on this point. Elon Musk and Stuart Russell kind of have us worry about existential threats of AI. What's your intuition for why, if we engineer an increasingly intelligent neocortex-type system in a computer, that shouldn't be a thing we worry about?

It was interesting — you used the word "intuition," and Sam Harris used the word "intuition" too. When he used that word, I immediately stopped and said, "oh, that's the problem: he's using intuition." I'm not speaking about my intuition. I'm speaking about something I understand, something I'm going to build — something I am building, something I understand completely, or at least well enough to know that I'm not guessing. I know what this thing is going to do. And I think most people who are worried have trouble separating things out. They don't have the knowledge or the understanding of what intelligence is, how it's manifest in the brain, how it's separate from the other functions in the brain, and so they imagine it's going to be human-like or animal-like — that it's going to have the same sorts of drives and emotions we have. But there's no reason for that. That's just the unknown talking: "oh my god, I don't know what this thing is going to do — we have to be careful, it could be like us but really smarter." I'm saying no: it'll be really smart, but it won't be like us at all. And I'm coming from that not because I'm guessing, not from intuition. I'm basically saying: I understand how this thing works, this is what it does, let me explain it to you.

Okay, but to push back: I also disagree with the intuitions Sam has, but I also disagree with what you just said. What's a good analogy — if you look at the Twitter algorithm in the early days, just recommender systems, you can understand how a recommender system works. What you can't understand, in the early days, is what happens when you apply that recommender system at scale to thousands and millions of people — how that can change societies. So yes, you're saying "this is how an engineered neocortex works," but when you have a very useful TikTok-type service that goes viral — when your neocortex goes viral and millions of people start using it — can it not destroy the world?

No — well, first of all, let me back up. One thing I want to say is that AI is a dangerous technology. I'm not denying that. All technology is dangerous, and AI maybe particularly so. Am I worried about it? Yeah, I'm totally worried about it. But the narrow component we're talking about now is the existential risk of AI, and I want to make that distinction, because AI can be applied poorly. It can be applied in ways where people won't understand the consequences. These are all potentially very bad things — but they're not the AI system creating an existential risk on its own, and that's the only place I disagree with other people. I think on the existential-risk question, humans are really damn good at surviving. To kill off the human race would be very, very difficult.

Yes, but you can even—

I'll go further. I don't think AI systems are ever going to try to. I don't think AI systems are ever going to say, "I'm going to ignore you; I'm going to do what I think
is best." I don't think that's going to happen — at least not in the way I'm talking about.

So the Twitter recommendation algorithm is an interesting example. Let's use the computer as an analogy again. I build a computer; it's a universal computing machine. I can't predict what people are going to use it for. They can build all kinds of things; they can even create computer viruses. So there's some unknown about its utility, about where it's going to go. But on the other hand, I pointed out that once I build a computer, it's not going to fundamentally change how it computes. I used the example of a register, an internal part of a computer. Computers don't evolve. They don't replicate. The physical manifestation of the computer itself is not going to change. There are certain things it can't do. So we can break this into: things that are possible to happen which we can't predict, and things that are just impossible to happen — unless we go out of our way to make them happen, they're not going to happen.

So there are a bunch of things to say. One is the physical aspect, where you're absolutely right: we have to build a thing for it to operate in the physical world, and you can just stop building them the moment they're not doing what you want, or change the design.

Or change the design.

The question is — and this is probably longer term — it's possible in the physical world to automate the building. It makes a lot of sense to automate the building. There are a lot of factories doing more and more automation, going from raw resources to the final product. It's possible to imagine — it's obviously much more efficient — creating a factory that creates robots that do something extremely useful for society. It could be personal assistants; it could be your toaster — but a toaster that has much deeper knowledge of your culinary preferences.

I think now you've hit on the right thing. The real thing we need to be worried about, Lex, is self-replication.

Right.

That is the thing. In the physical world — or even the virtual world — self-replication is dangerous. You're probably more likely to be killed by a virus, or a human-engineered virus. The technology is getting to the point where almost anybody — well, not anybody, but a lot of people — could create a human-engineered virus that could wipe out humanity. That is really dangerous, and no intelligence is required: just self-replication. So we need to be careful about that. When I think about AI, I'm not thinking about robots building robots. Don't do that.

Well, that's because you're interested in creating intelligence. It seems like self-replication is a good way to make a lot of money.

Fine, but so is — maybe editing viruses is too, I don't know. The point is: as a society, when we look at the existential risks we face that we can control, almost all revolve around self-replication.

Yes. The question is — I don't see a good way to make a lot of money by engineering viruses and deploying them in the world.

There could be — there will be applications that are useful — but let's separate things out. You don't even need the money; you only need some terrorist who wants to do it, because it doesn't take a lot of money to make viruses. Let's just separate out what's risky and what's not risky. I'm arguing that the intelligence side of this equation is not risky — not risky at all. It's the self-replication side of the equation that's risky. And I'm not dismissing that; I'm scared as hell.

It's like the paperclip maximizer thing.

Yeah.
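The embedding Hawkins described earlier — a learning system deliberately wrapped with goals and engineered safeguards, like a car that follows the driver until a collision is imminent — might be sketched like this. It's a toy illustration only; the function name and the threshold are invented for the example, not drawn from any real autopilot system:

```python
BRAKE_DISTANCE_M = 5.0  # invented safeguard threshold for the toy example

def control(driver_throttle: float, obstacle_distance_m: float) -> float:
    """Pass the driver's command through unless an engineered safeguard
    fires. The override is designed in by the builder; nothing here
    evolves its own goals."""
    if obstacle_distance_m < BRAKE_DISTANCE_M:
        return -1.0  # full braking: the safeguard overrides the command
    return driver_throttle

print(control(0.8, obstacle_distance_m=50.0))  # normal driving: 0.8
print(control(0.8, obstacle_distance_m=2.0))   # safeguard fires: -1.0
```

The design choice mirrors Hawkins's point: the system "says no" to the human only where its builders explicitly put that behavior, not because it developed purposes of its own.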
Those are often talked about in the same conversation. I think you're right: creating ultra-intelligent, super-intelligent systems is not necessarily coupled with arbitrarily self-replicating systems.

Yeah. And you don't get evolution unless you're self-replicating. I think that's the gist of this argument: people have trouble separating those two out. They just think, "intelligence is like us — look at the damage we've done to this planet, how we've destroyed all these other species." Well, we replicate. There are seven or eight billion of us now.

I think the idea is that the more intelligent the systems we're able to build, the more tempting it becomes, from the capitalist perspective of creating products, to create self-reproducing systems.

All right, let's say that's true. Does that mean we don't build intelligent systems? No — it means we understand the risks and we regulate them. Look, there are a lot of things we could do as a society that have some financial benefit to someone but could do a lot of harm, and we have to learn how to regulate those things, how to deal with those things. I would argue the opposite: I would say having intelligent machines at our disposal will actually help us more in the end, because they'll help us understand these risks better and help us mitigate them. There might be ways of answering "how do we solve climate-change problems, how do we do this or that?" Just like computers are dangerous in the hands of the wrong people, but they've been so great for so many other things that we live with those dangers — I think we have to do the same with intelligent machines. But we have to be constantly vigilant about (a) bad actors doing bad things with them, and (b) don't ever, ever create a self-replicating system. And by the way, I don't even know if you could create a self-replicating system that uses a factory — that's really dangerous. Nature's way of self-replicating is so amazing: it doesn't require anything but the thing itself and resources, and off it goes. If I said to you, "our goal is to build a factory that builds new factories" — it has to have an end-to-end supply chain, it has to mine the resources, get the energy — that's really hard. No one's doing that in the next hundred years.

I've been extremely impressed by the efforts of Elon Musk and Tesla to try to do exactly that — well, not from raw resources. Actually, I think he states the goal as going from raw resources to the final car in one factory. That's the aspirational goal. Of course it's not currently possible, but they're taking huge leaps.

He's not the only one to try that. This has been a goal of many industries for a long, long time, and it's difficult to do. What a lot of people do instead is have a million suppliers, co-locate them all, and tie the systems together — it's fundamentally distributed. But I think that also isn't getting at the issue I was just talking about, which is self-replication. Self-replication means there's no entity involved other than the entity that's replicating. If there are humans in the loop, that's not really self-replicating, right? Unless somehow we're duped.

But I also don't necessarily agree with you, because you've kind of said that AI will not say no to us. I just think it will.

Yeah—

I think it's a useful feature to build in. I'm just trying to put myself in the mind of engineers: it's sometimes useful to say no.

Well, I gave an example earlier, right? The example of my car. My car
turns the wheel and applies the accelerator and the brake as I command — until it decides there's something dangerous, and then it doesn't do that. Now, that wasn't something it decided to do; it's something we programmed into the car — and it was a good idea. The question, again, isn't whether an intelligent system will ever ignore our commands. Of course it will, sometimes. The question is whether it will do so because it came up with its own goals that serve its purposes and it doesn't care about ours. No, I don't think that's going to happen.

Okay, so let me ask you about these super-intelligent cortical systems that we engineer, and us humans. With these entities operating out there in the world, what does the most promising future look like? Is it us merging with them? How do we keep us humans around when you have increasingly intelligent beings? One of the dreams is to upload our minds into the digital space — can we just give our minds to these systems so they can operate on them? Is there some kind of more interesting merger, or is there more—

In the third part of my book I talk about all these scenarios, so let me just walk through them.

Sure.

The uploading-the-mind one: extremely, really difficult to do. We have no idea how to do this even remotely right now, so it would be a very long way away. But I make the argument that you wouldn't like the result — you wouldn't be pleased with it. It's really not what you think it's going to be. Imagine I could upload your brain into a computer right now, and the computer is sitting there going, "hey, I'm over here — great, get rid of that old bio-person, I don't need them." You're still sitting here. What are you going to do? "No, no, that's not me — I'm here." Are you going to feel satisfied then? People imagine: "I'm on my deathbed, I'm about to expire, I push the button and now I'm uploaded." But think about it a little differently. I don't think it's going to be a thing, because by the time we're able to do this, if ever — and you'd have to replicate the entire body, not just the brain — it's really substantial. I walk through the issues in the book.

Do you have a sense of what makes us us? Is there a shortcut — can you save only the certain part that makes us truly us?

No, and I think that machine would feel like it's you too. Look, I have two daughters. They're independent people. I created them — well, partly. Just because they're somewhat like me, I don't feel I'm them, and they don't feel they're me. If you split yourself apart, you have two people. We can come back to what consciousness is, but we don't have remote consciousness — I'm not sitting here conscious of that system over there. So let's stay on topic. One scenario was uploading a brain: not going to happen in a hundred years, maybe a thousand, and I don't think people are going to want to do it. Then there's merging your mind with, you know, the Neuralink thing. Again: really, really difficult. It's one thing to make progress controlling a prosthetic arm; it's another to have several billion connections and understand what those signals mean. It's one thing to say, "okay, I can learn to think in certain patterns to make something happen." It's quite another to have a computer that actually knows exactly which cells it's talking to, and how it's talking to them, and interacts in a way like that. Very, very difficult. We're not getting anywhere close to that.

Interesting. Can I ask a question here? For me, what makes that merger very difficult practically in the next ten, twenty, fifty years is literally the biology side of it,
which is just that it's hard to do that kind of surgery in a safe way. But your intuition is that even the machine-learning part of it — where the machine has to learn what the heck it's talking to — is hard?

I think it's even harder. It's easy to do when you're talking about hundreds of signals; it's a totally different thing when you're talking about billions of signals.

So you don't think it's a raw machine-learning problem — you don't think it could be learned?

No, I think you'd have to have detailed knowledge. You'd have to know exactly the types of neurons you're connecting to. In the brain there are neurons that do all different types of things. It's not like a neural network; it's a very complex organic system up here. We talked about the grid cells and the place cells: you have to know what kinds of cells you're talking to, what they're doing, how their timing works — all this stuff, which you can't do today. There's no way of doing that. You're right that the biological aspect — who wants to have surgery and have this stuff inserted in their brain — is a problem. But once we solve that problem, I think the information-coding aspect is much worse. It's not like what they're doing today. Today it's simple machine-learning stuff, because you're doing simple things. But if you want to merge your brain — "I'm thinking on the internet, I'm merging my brain with the machine, and we're both doing it" — that's a totally different issue.

That's interesting. I tend to think that if you have a super-clean signal from a bunch of neurons, even if you don't know what those neurons are, that's much easier than getting the clean signal in the first place.

If you think about today's machine learning, that's what you would conclude. I'm thinking about what's going on in the brain, and I don't reach that conclusion. So we'll have to see.

Sure.

But even then, I think there's kind of a sad future there. Do I have to plug my brain into a computer? I'm still a biological organism; I assume I'm still going to die. So what have I achieved?

Oh, I disagree. We don't know what those achievements are, but it seems like there could be a lot of different applications. It's like virtual reality: to expand your brain's capability — to, like, read Wikipedia.

Fine, but you're still a biological organism. You're still mortal. So what are you accomplishing? You're making your life in this short period of time better — just like having the internet made our lives better. Okay, but if I think about all the possible gains we could have here, that's a marginal one. It's individual: "hey, I'm better, I'm smarter." Fine — I'm not against it; I just don't think it's earth-changing.

But the same was true of the internet: when each of us individuals is smarter, we get a chance to share our smartness, and we get smarter and smarter together, as a collective — kind of like an ant colony.

But why don't I just create an intelligent machine that doesn't have any of this biological nonsense? It does all the same things — everything — except it isn't burdened with my brain. It has a brain; it is smart. It's like my child, but much, much smarter than me. So I have a choice between doing some implant — some hybrid, weird biological thing, with bleeding and all these problems, limited by my brain — or creating a system that is super smart, that I can talk to, that helps me understand the world, that can read Wikipedia and talk to me.

I guess my open questions there are: what does the manifestation of superintelligence look
like so like what are we going to you talked about why do i want to merge with ai like what what's the actual marginal benefit here if i if we have a super intelligent system yeah how will it make our life better so let's let's that's a great question but let's break it down to little pieces all right on the one hand it can make our life better in lots of simple ways you mentioned like a care robot or something that helps me do things it cooks i don't know what it does right little things like that we have soup better smarter cars we can have you know better agents and aids helping us in our work environment and things like that to me that's like the easy stuff the simple stuff in the beginning um and so in the same way that computers made our lives better in ways many many ways i will have those kind of things to me the really exciting thing about ai is sort of its transcendent transcendent quality in terms of humanity we're still biological organisms we're still stuck here on earth it's going to be hard for us to live anywhere else i don't think you and i are going to want to live on mars anytime soon and um and we're flawed you know we may end up destroying ourselves it's totally possible uh we if not completely we could destroy our civilizations you know it's let's face the fact we have issues here but we can create intelligent machines that can help us in various ways for example one example i gave another sounds a little sci-fi but i believe this if we really wanted to live on mars we'd have to have intelligent systems that go there and build the habitat for us not humans humans are never going to do this it's just too hard um but could we have a thousand or ten thousand you know engineer workers up there doing this stuff building things terraforming mars sure maybe we can move to mars but then if we want to if we want to go around the universe should i send my children around the universe or should i send some intelligent machine which is like a child that 
represents me and understands our needs here on earth that could travel through space so it's sort of it in some sense intelligence allows us to transcend our the limitations of our biology uh with and and don't think of it as a negative thing it's in some sense my children transcend my the my biology too because they they live beyond me yeah um and we impart they represent me and they also have their own knowledge and i can impart knowledge to them so intelligent machines will be like that too but not limited like us but the question is um there's so many ways that transcendence can happen and the merger with ai and humans is one of those ways so you said intelligent basically beings or systems propagating throughout the universe representing us humans they represent us humans in the sense they represent our knowledge and our history not us individually right right but i mean the question is is it just the database with uh with the really damn good uh model no they're conscious conscious just like us okay but just different they're different just like my children are different they're like me but they're different um these are more different i guess maybe i've already i kind of i take a very broad view of our life here on on earth i say you know why are we living here are we just living because we live is are we surviving because we can survive are we fighting just because we want to just keep going what's the point of it yeah right so to me the point if i ask myself what's the point of life is what transcends that ephemeral sort of biological experience is to me this is my answer is the acquisition of knowledge to understand more about the universe and to explore and that's partly to learn more right i don't view it as a terrible thing if the ultimate outcome of humanity is we create systems that are intelligent that are our offspring but are not like us at all and we stay we stay here and live on earth as long as we can which won't be forever but as long as we 
can, which won't be forever, but as long as we can. That would be a great thing to do; it's not a negative thing.

Well, would you be okay, then, if the human species vanishes, but our knowledge is preserved and keeps being expanded by intelligent systems?

I want our knowledge to be preserved and expanded. Am I okay with humans dying? No, I don't want that to happen. But if it does happen... what if we were sitting here and we were the last two people on earth, and we're saying, "Lex, we blew it, it's all over"? Wouldn't I feel better if I knew that our knowledge was preserved, that we had agents that knew about it, that left earth? I would want that; it's better than not having it. I make the analogy of the dinosaurs. The poor dinosaurs: they lived for tens of millions of years, they raised their kids, they fought to survive, they were hungry, they did everything we do, and then they're all gone. And if we hadn't discovered their bones, nobody would ever know that they ever existed. Do we want to be like that? I don't want to be like that.

There's a sad aspect to it. And it's kind of jarring to think that it's possible a human-like intelligent civilization has previously existed on earth.

Oh yeah.

The reason I say this is, it is jarring to think that if one went extinct, we wouldn't be able to find evidence of it after a sufficient amount of time. Basically, if human civilization destroyed itself now, then after a sufficient amount of time, future beings would find the evidence of the dinosaurs but would not find evidence of those humans.

Yeah.

That's kind of an odd thing to think about. Although I'm not sure we have enough knowledge about species going back billions of years; we might be able to eliminate
that possibility. But it's an interesting question. And of course this is similar to the question of whether there were lots of intelligent species throughout our galaxy that have all disappeared.

Yeah, that's super sad, that there may have been much more intelligent alien civilizations in our galaxy that are no longer there.

Yeah. You actually talked about this, that humans might destroy ourselves, and how we might preserve our knowledge and advertise that knowledge to others. "Advertise" is a funny word to use; like, from a PR perspective, there's no financial gain in this. Like, from a tourism perspective, make it interesting. Can you describe how?

Well, there's a couple of things. I broke it down into three parts, actually. One is, there's a lot of things we know. What if our civilization collapsed? I'm not talking tomorrow; it could be a thousand years from now, Lex. We don't really know, but historically it would be likely at some point.

Time flies when you're having fun.

Yeah, that's a good way to put it. And then intelligent life evolved again on this planet. Wouldn't they want to know a lot about us and what we knew? But they wouldn't be able to ask us questions. So one very simple thing I said was: how would we archive what we know? That wouldn't be that hard. A few satellites going around the sun, and we upload Wikipedia every day, that kind of thing. So if we end up killing ourselves, well, it's up there, and the next intelligent species will find it and learn something. They would appreciate that. So that's one thing. The next thing I said was: outside of our solar system, we have the SETI program, looking for intelligent signals from everybody. And if you do a little bit of math, which I
did in the book, you say: well, what if technologically intelligent species, ones that are really able to do this, which we're just starting to be able to do, only live for 10,000 years? Well, chances are we wouldn't be able to see any of them, because they would have all disappeared by now. They lived for 10,000 years, and now they're gone. So we're not going to find these signals being sent from these people. So I asked: what kind of signal could you create that would last a million years, or a billion years, so that someone would say, "damn it, someone smart lived there"? We know that would be a life-changing event for us, to figure that out. What we're looking for today in the SETI program isn't that; we're looking for very coded signals, in some sense. So I asked myself what a different type of signal one could create would be. I've thought about this throughout my life, and in the book I gave one possible suggestion. We now detect planets going around other stars by seeing the slight dimming of the light as the planets move in front of them; that's how we detect planets elsewhere in our galaxy. What if we created something like that, which just rotated around the sun and blocked out a little bit of light in a particular pattern, so that someone would say, "hey, that's not a planet; that is a sign that someone was once there"? It could be beating out pi, you know, 3.14 and so on. So from a distance you could broadcast broadly, and it takes no continuing activation on our part. This is the key: no one has to be sitting here running a computer and supplying it with power. It just goes on; it continues. And I argue that part of the SETI program should be looking for signals like that. And to look for signals like that, you ought to first figure out what we could create that would be like that.
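The beacon idea can be sketched in a few lines. This is a toy illustration of my own, not anything from the book or the conversation beyond the idea itself, and all names and numbers are made up: encode digits of pi as the gaps between dimming events, and note that a lone transiting planet produces evenly spaced dips, so uneven, digit-patterned gaps would read as artificial.

```python
# Toy sketch (my own illustration): an orbiting occulter that dims its star
# in a pattern encoding digits of pi. A natural planet transits at a fixed
# period; gaps proportional to 3, 1, 4, 1, 5, ... would be unmistakably
# artificial, and the beacon needs no power or upkeep once it is in orbit.

PI_DIGITS = [3, 1, 4, 1, 5, 9, 2, 6]

def transit_times(digits, unit=1.0, start=0.0):
    """Times of successive dimming events; gap i is proportional to digit i."""
    times, t = [start], start
    for d in digits:
        t += (d + 1) * unit          # +1 so a 0 digit still advances time
        times.append(t)
    return times

def looks_artificial(times, tol=1e-6):
    """A lone planet gives equal gaps; unequal gaps suggest design."""
    gaps = [b - a for a, b in zip(times, times[1:])]
    return max(gaps) - min(gaps) > tol

beacon = transit_times(PI_DIGITS)
planet = transit_times([4] * 8)       # constant-period transits

print(looks_artificial(beacon))   # True  -> candidate artificial signal
print(looks_artificial(planet))   # False -> ordinary planet
```

The design point is the one made above: the signal is passive. Nothing has to stay powered on for the pattern to keep repeating.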
What could we create that would persist for millions of years, that would be broadcast broadly, that you could see from a distance, that was unequivocally created by an intelligent species? So I gave that one example, because there aren't any that I know of, actually. And then finally: ultimately our solar system will die at some point in time. How do we go beyond that? I think, if it's at all possible, we'll have to create intelligent machines that travel throughout the solar system, or throughout the galaxy, and I don't think that's going to be humans; I don't think it's going to be biological organisms. So these are just things to think about. I don't want to be like the dinosaurs, you know, where we just lived, and okay, that was it, we're done.

Well, there is a kind of presumption that we're going to live forever, which... I think it is a bit sad to imagine that the message we send, as you talk about, is that we were once here, instead of: we are here.

Well, it could be "we are still here." But it's more of an insurance policy, in case we're not here.

Well, I don't know. But there's something I think about; we as humans don't often think about this. It's like, whenever I record a video... I've done this a couple of times in my life, recorded a video for my future self, just for fun... it's always fascinating to think about preserving yourself for future civilizations. For me, it was preserving myself for future me. But that's a little fun example of archival.

These podcasts are preserving you and I, in a way, for the future, hopefully well after we're gone.

But you don't often... we're sitting here talking about this, but you're not thinking about the fact that you and I are going to die, and there will be, like, ten years after, somebody watching this when we're no longer alive.

You
know, in some sense I do. I'm here because I want to talk about ideas, and these ideas transcend me, and they transcend this time on our planet. We're talking here about ideas that could be around a thousand years from now, or a million years from now. When I wrote my book, I had an audience in mind, and one of the clearest audiences was...

Aliens?

No: people reading this 100 years from now. I said to myself, how do I make this book relevant to someone reading it 100 years from now? What would they want to know that we were thinking back then? What would make it still an interesting book? I'm not sure I can achieve that, but that was how I thought about it. Because these ideas, especially in the third part of the book, the ones we're just talking about, these crazy-sounding ideas about storing our knowledge, and merging our brains with computers, and sending our machines out into space, are not going to happen in my lifetime. They may not happen in the next 100 years; they may not happen for a thousand years, who knows. But we have a unique opportunity right now, you and me and other people like us, to at least propose the agenda that might impact the future like that.

That's a fascinating way to think, both about writing and creating: try to create ideas, try to create things, that hold up in time.

Yeah. You know, understanding how the brain works: we're going to figure that out once. That's it. It's going to be figured out once, and after that, that's the answer, and people will study it thousands of years from now. We still venerate Newton and Einstein, because ideas are exciting even well into the future.

Well, the interesting thing is, big ideas, even if they're wrong, are still useful.

Yeah, especially if they're not completely wrong. Right, Newton's laws are not wrong; it's just that Einstein's
are better.

Yeah, but with Newton and Einstein we're talking about physics. I wonder if we'll ever achieve that kind of clarity in understanding complex systems, and this particular manifestation of complex systems, which is the human brain.

I'm totally optimistic we can do that. I mean, we're making progress at it. I don't see any reason why we can't completely understand it, in the sense that, you know, we don't really completely understand what all the molecules in this water bottle are doing, but we have laws that capture it pretty well. So we'll have that kind of understanding. It's not like you're going to have to know what every neuron in your brain is doing. But enough to, first of all, build it, and second of all, do what physics does, which is to have concrete experiments where we can validate it. This is happening right now; it's not some future thing. I'm very optimistic about it, because I know about our work and what we're doing. I have to prove it to people. But I consider myself a rational person, and until fairly recently I wouldn't have said this, but right now, where I'm sitting, I'm saying: this is going to happen. There are no big obstacles to it. We finally have a framework for understanding what's going on in the cortex, and that's liberating. It's like, oh, it's happening. So I can't see why we wouldn't be able to understand it. I just can't.

Okay, so on that topic, let me ask you to play devil's advocate. Is it possible for you to imagine, looking a hundred years from now at your book, in which ways your ideas might be wrong?

Oh, I worry about this all the time.

Yeah. It's still useful, though.

Yeah, yeah. Well, I can best relate it to the things I'm worried about right now. So we talk about this voting idea,
right? It's happening; there's no question that's happening. But there are enough things I don't know about it that it might be working in ways somewhat different from how I'm thinking about it: the details of what's voting, who's voting, where the representations are. I talked about how you have a thousand models of a coffee cup. That could turn out to be wrong, because maybe there are a thousand models that are sub-models, but not really each a full model of the coffee cup. These are all things on the edges that I present as, "oh, it's so simple and clean." Well, it's not; it's always going to be more complex, and there are parts of the theory whose complexity I don't understand well. I think the idea that the brain is a distributed modeling system is not controversial at all; that's well understood by many people. The question then is whether each cortical column is an independent modeling system. I could be wrong about that. I don't think so, but I worry about it.

My intuition, not even thinking about why you could be wrong, is the same intuition I have about any physics theory, like string theory: that we as humans have a desire for a clean explanation. And a hundred years from now, intelligent systems might look back at us and laugh at how we tried to get rid of the whole mess with a simple explanation, when the reality is way messier, and in fact it's impossible to understand; you can only build it. It's like the idea in complex systems and cellular automata: you can only launch the thing; you cannot understand it.

Yeah, I think the history of science suggests that's not likely to occur. The history of science suggests that, as a theorist, and we're theorists, you look for simple explanations, fully knowing that whatever simple explanation you come up with is not going to be completely correct. I mean, it can't be; there's just more complexity.
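As an aside, the "voting" idea mentioned above, from the Thousand Brains theory, can be sketched in code. This is my own toy simplification, not Numenta's formulation, and the objects and probabilities are invented: each cortical column holds its own belief about which object it is sensing, and voting combines those beliefs so the population settles on the object consistent with every column's evidence.

```python
# Toy sketch of "voting" across cortical columns (my own simplification):
# each column holds a belief distribution over candidate objects given its
# local sensory input; voting multiplies the beliefs together, so the
# population converges on the object plausible to every column.

OBJECTS = ["coffee_cup", "bowl", "stapler"]

def normalize(belief):
    total = sum(belief.values())
    return {k: v / total for k, v in belief.items()}

def vote(column_beliefs):
    """Combine per-column beliefs into a single consensus distribution."""
    consensus = {obj: 1.0 for obj in OBJECTS}
    for belief in column_beliefs:
        for obj in OBJECTS:
            consensus[obj] *= belief[obj]
    return normalize(consensus)

# Three columns touching different parts of one object: each alone is
# ambiguous, but only "coffee_cup" is plausible to all of them.
columns = [
    {"coffee_cup": 0.5, "bowl": 0.4, "stapler": 0.1},  # a curved wall
    {"coffee_cup": 0.5, "bowl": 0.1, "stapler": 0.4},  # a handle-like edge
    {"coffee_cup": 0.6, "bowl": 0.3, "stapler": 0.1},  # a rim
]

result = vote(columns)
print(max(result, key=result.get))   # coffee_cup
```

The open questions raised above, what exactly votes and where the representations live, are precisely the parts this toy glosses over.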
But that's the role theorists play. They give you a framework in which you can talk about a problem and figure out, okay, now we can start digging into the details. The best frameworks stick around while the details change. Again, the classic example is Newton and Einstein. Newton's theories are still used; they're still valuable, they're still practical, they're not wrong. They've just been refined.

Yeah, but that's physics. And by the way, it's not obvious for physics either that the universe should be amenable to these simple theories.

But so far it appears to be, as far as we can tell.

Yeah, as far as we can tell. But it's also an open question whether the brain is amenable to such clean theories. Well, not the brain: intelligence.

Well, I would take intelligence out of it and just say, okay: the evidence we have suggests that the human brain is at once extremely messy and complex, but there are some parts that are very regular and structured. That's why we study the neocortex: it's extremely regular in its structure, unbelievably so. And then there's the other thing I mentioned earlier, its universal ability. It is so flexible; it learns so many things. We haven't figured out what it can't learn yet; we don't know. But it learns things it never evolved to learn. So those give us hope. That's why I went into this field. I said, you know, this regular structure is doing an amazing number of things; there have got to be some underlying principles that are common. And other scientists have come to the same conclusion. So it's promising. And whether the theories play out exactly this way or not, that is the role theorists play. And so far it's worked out well, even though, you know, maybe we don't understand all
the laws of physics. But so far, the theories we have are pretty damn useful.

You mentioned that we should not necessarily be worried, at least not to the degree that we are, about the existential risks of artificial intelligence, relative to the existential risks from human nature. What aspect of human nature worries you the most, in terms of the survival of the human species?

I mean, I'm disappointed in humanity, in us humans. I'm one, so I'm disappointed in myself too, in a sense. There are two things that disappoint me. One is how difficult it is for us to separate our rational component from our evolutionary heritage, which is, you know, not always pretty. Rape is an evolutionarily successful strategy for reproduction; murder can be at times too; making other people miserable at times is a good strategy for reproduction. And now we know that. And yet you and I can have this very rational discussion, talking about intelligence and brains and life and so on, and it seems so hard: it's just a big transition to get all humans to say, let's pay no attention to all that ugly stuff over here; let's just focus on what's unique about humanity, our knowledge and our intellect.

But the fact that we're striving is in itself amazing, right? The fact that we're able to overcome that part. And it seems like we are more and more successfully overcoming it. That is the optimistic view, and I agree with you.

Yeah, but I worry about it. Maybe that was your question; I still worry about it. You know, we could end tomorrow, because some terrorists could get nuclear bombs and blow us all up, who knows. The other thing I'm disappointed in, and I understand it, I guess you can't
really be disappointed, it's just a fact, is that we're so prone to false beliefs. We have a model in our head of the things we can interact with directly, physical objects, people, and that model is pretty good, and we can test it all the time. I touch something, I look at it, I talk to you, and I see whether my model's correct. But so much of what we know is stuff I can't directly interact with; I only know it because someone told me about it. And so we're inherently prone to having false beliefs, because if I'm told something, how am I going to know whether it's right or wrong? So then we have the scientific process, which says: we are inherently flawed, so the only way we can get closer to the truth is by looking for contrary evidence.

Yeah, like this conspiracy theory, this theory that scientists keep telling me about, that the earth is round. As far as I can tell, when I look out, it looks pretty flat.

Yeah, so there's a tension. But I also tend to believe that we haven't figured out most of this thing; most of nature around us is a mystery.

Does that worry you?

I mean, no, it's like a pleasure: more to figure out, right?

Yeah, that's exciting. But I'm saying there are going to be a lot of quote-unquote wrong ideas. I've been thinking a lot about engineered systems like social networks and so on, and I've been worried about censorship, and thinking through all that kind of stuff, because there are a lot of wrong ideas, and there are a lot of dangerous ideas. But then I also read history, and I see what happens when you censor ideas that are wrong. Now, this could be small-scale censorship, like a young grad student who raises their hand and says some crazy idea. A form of censorship could be... I shouldn't use the word censorship, but you might just de-incentivize them: "No, no, no, this is the way it's been done. You're a foolish
kid, don't do it." So in some sense, those wrong ideas most of the time end up being wrong, but sometimes...

I agree with you, so I don't like the word censorship. At the very end of the book, I ended up with a sort of plea, a recommended course of action, and the best way I know how to deal with the issue you bring up is this. If everybody understood, as part of their upbringing in life, something about how their brain works: that it builds a model of the world, how it basically builds that model, and that the model is not the real world, it's just a model; it's never going to reflect the entire world, and it can be wrong, and it's easy for it to be wrong, and here are all the ways you can get a wrong model in your head. It's not prescribing what's right or wrong; just understand that process. If we all understood the process, then when we got together and you said, "I disagree with you, Jeff," and I said, "Lex, I disagree with you," at least we'd understand that we're both trying to model something, we both have different information, which leads to our different models, and therefore I shouldn't hold it against you and you shouldn't hold it against me. And we could at least agree on what we can look for in common ground to test our beliefs. As opposed to how much we raise our kids on dogma: this is a fact, and this is a fact, and these people are bad. If everyone knew to be skeptical of every belief, and why and how their brains do that, I think we might have a better world.

Do you think the human mind is able to comprehend reality? You talk about creating models that are better and better; how close do you think we get to reality? One of the wildest ideas is Donald Hoffman saying we're very far away from reality. Do you think we're getting close?

Well, I guess it depends on how you define reality. We have a
model of the world that's very useful, right, for our basic needs: for survival, and for our pleasure. And it's really useful. We can build planes, we can build computers, we can do these things. But I don't know the answer to that question; I think that's part of what we're trying to figure out. Obviously, if you end up with a theory of everything that really is a theory of everything, and all of a sudden everything falls into place and there's no room for something else, then you might feel like you have a good model of the world.

Yeah, but even if we have a theory of everything, and first of all, you'll never be able to conclusively say it's a theory of everything, but say somehow we are very damn sure it is, we understand what happened at the big bang, the entirety of the physical process: I'm still not sure that gives us an understanding of the next many layers of the hierarchy of abstractions that form.

Well, also, what if string theory turns out to be true? Then you'd say, well, we have no model of what's going on in those other dimensions that are wrapped into each other. Or the multiverse, you know. I honestly don't know how it helps us, for human interaction, for ideas of intelligence, to understand that we're made up of vibrating strings that are ten to the whatever times smaller than us.

Yeah. You could probably build better weapons and better rockets, but you're not going to be able to understand intelligence.

I guess maybe better computers?

No, you won't. I think it's just purely knowledge. It might lead to a better understanding of the beginning of the universe. It might lead to a better understanding of... I don't know. I think the acquisition of knowledge has always been something you pursue for its own pleasure, and you
don't always know what is going to make a difference. You're pleasantly surprised by the weird things you find.

For the neocortex in general, do you think there's a lot of innovation to be done on the machine side? You use the computer as a metaphor quite a bit; is there a different type of computer that would help us build intelligent machines, the manifestations of intelligent machines?

Oh yeah, it's going to be totally crazy. We have no idea how this is going to look yet, but you can already see it today. Of course, we model these things on traditional computers, and now GPUs are really popular for neural networks and so on. But there are companies coming up with fundamentally new physical substrates that are just really cool. I don't know if they're going to work or not, but I think there will be decades of innovation here.

Totally. Do you think the final thing will be messy, like our biology is messy? Or, it's the old birds-versus-airplanes question, do you think we can just build airplanes that fly way better than birds, in the same way we can build electrical computation systems?

Can I reframe the bird thing a bit? Because I think it's interesting; people often misunderstand this. The Wright brothers: the problem they were trying to solve was controlled flight, how to turn an airplane, not how to propel an airplane. They weren't worried about that.

Interesting.

Yeah. At that time there were already wing shapes, which they had from studying birds, and there were already gliders that could carry people. The problem is, if you put a rudder on the back of a glider and you turn it, the plane falls out of the sky. So the problem was how to control flight. They studied birds; they actually had birds in captivity, they watched birds in wind tunnels, they observed them in the wild, and they discovered the secret was that the
birds twist their wings when they turn. And so that's what they did on the Wright flyer: they had these sticks that would twist the wings, and that was their innovation, not the propeller. And today, airplanes still twist their wings. We don't twist the entire wing, just the tail end of it, the flaps, which is the same thing. So today's airplanes fly on the same principles as birds, principles discovered by observing them. Everyone gets that analogy wrong. But let's step back from that. Once you understand the principles of flight, you can choose how to implement them. No one's going to use bones and feathers and muscles, but airplanes do have wings, and we don't flap them; we have propellers. So when we have the principles of computation that go into modeling the world in the brain, when we understand those principles very clearly, we'll have choices about how to implement them, and some of them will be biology-like and some won't. But I do think there's going to be a huge amount of innovation here. Just think about the innovation that went into computers: they had to invent the transistor, they invented the silicon chip, they had to invent software, memory systems, zillions of things. It's going to be similar.

Well, it's interesting that the effectiveness of deep learning for specific tasks is driving a lot of innovation in the hardware, which may have the effect of actually allowing us to discover intelligent systems that operate very differently, something much bigger than deep learning.

Yeah, interesting. So ultimately it's good to have an application that's making our life better now, because of the capitalist process: if you can make money...

Yeah, that works. I mean, the other way, Neil deGrasse Tyson writes about this, the other way we fund science, of course, is through the military. Through conquest.

So here's an interesting thing we're doing in this regard.
We used to have a series of these biological principles from which we can see how to build intelligent machines, but we've decided to apply some of these principles to today's machine learning techniques. One principle, which we didn't talk about, is sparsity. In the brain, only a small fraction of the neurons are active at any point in time, and the connectivity is sparse, and that's different from deep learning networks. We've already shown that we can speed up existing deep learning networks anywhere from a factor of 10 to a factor of 100, I mean literally 100, and make them more robust at the same time. So this is commercially very, very valuable. And if we can prove this in the larger systems that are commercially applied today, there's a big commercial desire to do this. Now, sparsity is something that doesn't run really well on existing hardware; it doesn't run really well on GPUs or on CPUs. So this would be a way of bringing more and more brain principles into existing systems on a commercially valuable basis. Another thing we think we can do is use models of dendrites. I talked earlier about the prediction occurring inside a neuron; that basic property can be applied to existing neural networks and allow them to learn continuously, which is something they don't do today. We wouldn't model the spikes. But today's neural networks use something called the point neuron, which is a very simple model of a neuron, and by adding dendrites to it, just one more level of complexity that's in biological systems, you can solve problems in continual learning and rapid learning. So we're trying to bring the existing field of machine learning along with us commercially, and we'll see if we can do it.
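The sparsity idea can be sketched roughly like this. This is a minimal NumPy toy of my own, not Numenta's actual code, and the layer sizes and the ten-percent figures are illustrative: prune most of the weights, and after each layer keep only the k largest activations (a k-winners-take-all nonlinearity), so only a small fraction of units and connections participate in any forward pass.

```python
import numpy as np

# Minimal sketch of sparse networks (my own toy version; real
# implementations are more involved): sparse weights plus a
# k-winners-take-all activation, so only a small fraction of units fire
# on any input, mirroring the low activity levels seen in cortex.

rng = np.random.default_rng(0)

def sparse_weights(n_in, n_out, density=0.1):
    """Random weight matrix with most entries pruned to zero."""
    w = rng.standard_normal((n_in, n_out))
    mask = rng.random((n_in, n_out)) < density
    return w * mask

def k_winners(x, k):
    """Keep the k largest activations, zero the rest."""
    out = np.zeros_like(x)
    top = np.argsort(x)[-k:]
    out[top] = x[top]
    return out

w = sparse_weights(64, 128, density=0.1)   # ~90% of connections absent
x = rng.standard_normal(64)
y = k_winners(x @ w, k=13)                 # at most ~10% of units active

print(np.count_nonzero(y) <= 13)           # sparse by construction
```

The dendrites idea mentioned above follows the same pattern: add one small piece of biologically inspired structure to the point neuron, and see what capability, here continual learning, it buys.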
You brought up this idea of paying for it commercially as we move towards the ultimate goal of a true AI system. Even small innovations on neural networks are really, really exciting, because it seems like such a trivial model of the brain, and you're applying different insights, like you said: continuous learning, or making it more asynchronous, or more dynamic, or incentivizing sparsity somehow.

Yeah. Well, if you can make things a hundred times faster, there's plenty of incentive. People are spending millions of dollars just training some of these networks now, these transformer networks.

Let me ask you a big question. For young people listening to this today, in high school and college, what advice would you give them in terms of which career path to take, and maybe about life in general?

Well, in my case, I didn't start life with any kind of goals. When I was going to college, I was like, oh, maybe I'll do electrical engineering stuff, you know. It wasn't like today, where you see some of these young kids who are so motivated they're going to change the world. I was like, whatever. But then I did fall in love with something. Besides my wife, I fell in love with this: oh my god, it would be so cool to understand how the brain works. And then I said to myself, that's the most important thing I could work on. I can't imagine anything more important, because if we understand how brains work, we could build intelligent machines, and they could help figure out all the other big questions of the world. And then I said, I want to understand how I work. So I fell in love with this idea, and I became passionate about it. And this is, you know, a trope, people say this, but it's true: because I was passionate about it, I was able to put up with so much crap. You know, I was like...
people said, "You can't do this." I was a graduate student at Berkeley when they said, "You can't study this problem; no one's going to solve this," or, "You can't get funded for it." Then I went to do mobile computing, and it was like, people said, "You can't do that; you can't build a cell phone." But all along I kept being motivated, because I wanted to work on this problem. I said, I want to understand how the brain works, and I've got one lifetime; I'm going to figure it out, do the best I can. Because, as you point out, Lex, it's really hard to do these things. There are so many downers along the way, so many obstacles getting in your way. I'm sitting here happy all the time, but trust me, it's not always like that.

I guess the happiness, the passion, is a prerequisite for surviving the whole thing.

Yeah, I think that's right. And so I don't want to say to someone, "You need to find a passion and do it." No, maybe you don't. But if you do find something you're passionate about, then you can follow it as far as your passion will let you put up with it.

Do you remember how you found it, how the spark happened?

Why, specifically, for me?

Yeah. Because you said it, and it's such an interesting thing, almost like later in life. By later, I mean not when you were five. You didn't really know, and then all of a sudden you fell in love with it.

Yeah, yeah. There were two separate events that compounded one another. One: when I was a teenager, might have been 17 or 18,
I made a list of the most interesting problems I could think of. First was, why does the universe exist? It seems like not existing is more likely. The second one was, well, given that it exists, why does it behave the way it does? The laws of physics: why is E equal to mc squared and not mc cubed? Interesting question. The third one was, what's the origin of life? And the fourth one was, what's intelligence? I stopped there. I said, well, that's probably the most interesting one, and I put it aside as a teenager. But then, in 1979, when I was 22, I was reading the September issue of Scientific American, which was all about the brain. The final essay was by Francis Crick, of DNA fame, who had by then turned his interest to studying the brain. And he said, there's something wrong here. He said, we've got all this data, all these facts about the brain, tons and tons of facts, this was 1979. Do we need more facts, or do we just need to think about a way of rearranging the facts we have? Maybe we're just not thinking about the problem correctly, because it shouldn't be like this. I read that and I said, wow, I don't have to become an experimental neuroscientist. I could just look at all those facts, become a theoretician, and try to figure it out. And I felt it was something I'd be good at. I wouldn't be a good experimentalist, I don't have the patience for it, but I'm a good thinker and I love puzzles. This is the biggest puzzle in the world, the biggest puzzle of all time, and I've got all the puzzle pieces in front of me. Damn, that was exciting.

And there's something there you obviously can't convey in words. It just kind of sparked this passion. I've had that a few times in my life, just something that,

just like you said, grabs you. Yeah, I thought it was something that was both important and that I could make a contribution to. So all of a sudden it gave me purpose in life.

You know, I honestly don't think it has to be as big as one of those four questions.

No, no, I think you can find those things in the smallest...

Oh, absolutely.

I'm with David Foster Wallace, who said the key to life is to be unborable. I think it's very possible to find that intensity of joy in the smallest things.

Absolutely. You just asked me my story.

Yeah, and I'm actually speaking to the audience: it doesn't have to be those four. You happened to get excited by one of the bigger questions of the universe, but it can be even the smallest things. Watching the Olympics now: just giving your life over to the study and mastery of a particular sport is fascinating, and if it sparks joy and passion, you're able to, in the case of the Olympics, basically suffer for a couple of decades to achieve it.

I mean, you can find joy and passion just being a parent.

Yeah, the parenting one is funny. For a long time I've wanted kids, to get married and so on, especially because I've seen a lot of people I respect get a whole other level of joy from kids. At first your thinking is, well, I don't have enough time in the day if I have this passion, which is true. If I want to solve intelligence, how is this kids situation going to help me? But then you realize that, like you said, it's the things that spark joy, and it's very possible that kids can provide an even greater, deeper, more meaningful joy than those bigger questions, and that they enrich each other. When I was younger that was probably a counterintuitive notion, because there are only so many hours in the day, but then life is finite, and you have to pick the things that give you joy.

Yeah, but you can also be patient. Life is finite, but we do have, whatever, fifty years or so; it's not so short. In my case, I had to give up on my dream of neuroscience, because I was a graduate student at Berkeley and they told me I couldn't do this, that I couldn't get funded. So I went back into the computing industry for a number of years. I thought it would be four, but it turned out to be more. But I said, I'll come back. I'm definitely going to come back. I know I'm going to do this computing stuff for a while, but I'm definitely coming back; everyone knew that. And it's the same with raising kids. You have to spend a lot of time with your kids, and it's fun and enjoyable, but it doesn't mean you have to give up on other dreams. It just means you may have to wait a week or two to work on that next idea.

Well, you talked about the darker, disappointing sides of human nature that we're hoping to overcome so we don't destroy ourselves. I tend to put a lot of value in the broad, general concept of love: the human capacity for compassion towards each other, for kindness, that longing for human-to-human connection. It connects back to our initial discussion; I tend to see a lot of value in this collective intelligence aspect. I think some of the magic of human civilization happens there. A party is not as fun when you're alone.

Yeah, I totally agree with you on these issues.

From a neocortex perspective, what role do you think love plays in the human condition?

Well, those are two separate things. From a neocortex point of view, I don't think it impacts how we think about the neocortex. From a human condition point of view, I think it's core. I mean, we get so much pleasure out of loving people and helping people. So I'll chalk it up to old-brain stuff, and maybe you can throw it under the bus of evolution if you want; that's fine. It doesn't impact how I think about how we model the world, but from a humanity point of view, I think it's essential.

Well, I tend to give it to the new brain, and I also tend to think that some aspects of it need to be engineered into AI systems, both in their ability to have compassion for humans and in their ability to maximize love in the world between humans. I'm thinking more about social networks, about wherever there's a deep integration between AI systems and humans, specific applications of AI and humans together. That's something that's often not talked about in terms of metrics, in terms of which metric to maximize in a system. It seems like one of the most powerful things in societies is that capacity for love.

It's fascinating; I think it's a great way of thinking about it. I have been thinking more about these fundamental mechanisms in the brain, as opposed to the interaction between humans and AI systems in the future, and if you think about that, you're absolutely right. But that's a complex system. I can have intelligent systems that don't have that component if they're not interacting with people; they're just running something, or building a building someplace, I don't know. But if you think about interacting with humans, yeah, it's going to have to be engineered in there. I don't think it's going to appear on its own. That's a good question.

Yeah, we could ask, from a reinforcement learning perspective, whether the darker sides of human nature or the better angels of our nature win out.

Statistically speaking, I don't know, but I tend to be optimistic and hope that love wins out in the end.
You've done a lot of incredible stuff, and your book is driving towards this fourth question you started with, on the nature of intelligence. What do you hope your legacy is for people reading about you a hundred years from now? How do you hope they remember your work? How do you hope they remember this book?

Well, I think, as an entrepreneur or a scientist, or any human trying to accomplish something, I have a view that really all you can do is accelerate the inevitable. It's like, if we didn't study the brain, someone else would study the brain. If Elon didn't make electric cars, someone else would do it eventually. And if Thomas Edison hadn't invented the light bulb, it's not as though we would still be using candles today. So what you can do as an individual is accelerate something beneficial and make it happen sooner than it otherwise would. That's really it; that's all you can do. You can't create a new reality that wasn't going to happen. So from that perspective, I would hope that our work, not just mine but our work in general, would be looked back on, and people would say, hey, they really helped make this better future happen sooner. They helped us understand the nature of false beliefs sooner than we would have otherwise. Now we're so happy that we have these intelligent machines doing these things, helping us, maybe solving the climate change problem, and they made it happen sooner. I think that's the best I could hope for. Someone might say, those guys just moved the needle forward a little bit in time.

Well, it feels like the progress of human civilization has a lot of possible trajectories, and individuals who accelerate in one direction help steer it. Over a long enough stretch of time, maybe all trajectories will be traveled, but I think it's nice for this particular civilization on Earth to travel down one that's not destructive.

Yeah, I think you're right. I mean, look, take the whole period of World War II and Nazism. That was a bad sidestep; we went over there for a while. But there is the optimistic view of life that ultimately it converges in a positive way, that it progresses even if we have years of darkness. So accelerating the positive could also mean eliminating some bad missteps along the way. I'm an optimist in that way. Despite our talk about the end of civilization, I think we're going to live for a long time; I hope we are. I think our society in the future is going to be better. We're going to have less discord, fewer people killing each other. We'll live in some way that's compatible with the carrying capacity of the Earth. I'm optimistic these things will happen, and all we can do is try to get there sooner. And at the very least, if we do destroy ourselves, we'll have a few satellites left that will tell an alien civilization we were once here, or maybe the future inhabitants of Earth. Imagine the Planet of the Apes scenario: we kill ourselves, and a million years from now, or a billion years from now, there's another species on the planet discovering that these curious creatures were once here.

Yeah. Jeff, thank you so much for your work, and thank you so much for talking to me once again.

Well, it's been great. I love what you do, I love your podcast. You have the most interesting people, me aside, so it's a real service, I think, that you do, in a very broad sense, for humanity.

Thanks, Jeff.

All right, a pleasure.

Thanks for listening to this conversation with Jeff Hawkins, and thank you to Codecademy, BiOptimizers, ExpressVPN, Eight Sleep, and Blinkist. Check them out in the description to support this podcast. And now, let me leave you with some words from Albert Camus:
"An intellectual is someone whose mind watches itself. I like this, because I am happy to be both halves: the watcher and the watched. Can they be brought together? This is a practical question. We must try to answer it." Thank you for listening, and I hope to see you next time.