The Mysterious World of Human Cognition: A Challenge for AI Software
The human brain is a remarkable and complex organ, capable of performing a wide range of functions with apparent ease. Yet despite significant advances in neuroscience and computing power, the mechanisms that govern human cognition remain shrouded in mystery. As Holden admits, "a lot of what's going on in there is really not understood" - and he cheerfully invites any neurophysiologist who disagrees to tell him so.
One area where our understanding clearly lags is focus and attention. Our brains hold vast amounts of information, yet we can direct attention to a particular subject or task without conscious effort, instantly setting everything else aside. As Holden puts it: "I think my favorite example here is our ability to have a focus of interest, by which I mean that in our heads we have huge quantities of information, and almost every single bit of it is completely irrelevant right now." This ability to filter out irrelevant information is essential for learning and productivity, but AI software has no equivalent, and the challenge for researchers is made harder by the fact that human brains do it unconsciously.
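As a crude illustration of what such filtering might look like in software, the toy sketch below scores stored facts against the current context by word overlap and discards everything irrelevant. The memory items and the scoring rule are entirely hypothetical; real attention mechanisms are far more sophisticated, and, as noted above, how the brain actually does this is unknown.

```python
def relevance(fact: str, context: str) -> int:
    """Crude relevance score: the number of words a fact shares with the context."""
    return len(set(fact.lower().split()) & set(context.lower().split()))

# Hypothetical memory store: almost everything in it is irrelevant right now.
memory = [
    "patagonian bat mating habits",
    "how to drive a car",
    "what i ate for lunch",
    "rules and strategy of the game of go",
]

context = "discussing the game of go and ai strategy"

# Keep only facts that share vocabulary with the current focus of interest.
relevant = [fact for fact in memory if relevance(fact, context) > 0]
print(relevant)  # ['rules and strategy of the game of go']
```

The point of the toy is how badly it misses the mark: it needs an explicit scoring rule and a full scan of memory, whereas the brain performs the equivalent filtering over a lifetime's worth of knowledge instantly and unconsciously.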
The End of Moore's Law
Another obstacle on the road to intelligent machines is the approaching end of Moore's Law. As the transistors on microprocessors continue to shrink, reliability becomes a problem: components are now so small that energy must be spent simply verifying that calculations are correct. According to Holden, "Moore's Law, as far as anyone can tell, is about to run out." The problem is compounded by geometry: a silicon chip, for all its layers, is fundamentally a two-dimensional object on a die about a millimeter thick, and there is as yet no good way to fabricate microprocessors in three dimensions. This has significant implications for future processors, which must handle ever greater amounts of information quickly and efficiently.
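Even while Moore's Law held, raw exponential growth bought surprisingly little against combinatorial problems. Holden puts Go's average branching factor at about 250, so each additional move of brute-force lookahead multiplies the search space by roughly 250, while Moore's Law historically doubled computing power only every 18 to 24 months. A short back-of-the-envelope sketch (the figures are the rough ones quoted in the video, not precise measurements):

```python
import math

BRANCHING_FACTOR = 250      # rough average number of legal moves in a Go position
DOUBLING_MONTHS = (18, 24)  # Moore's-law doubling period, give or take

# One extra ply of brute-force lookahead costs ~250x more compute,
# i.e. log2(250) hardware doublings.
doublings_per_ply = math.log2(BRANCHING_FACTOR)

years_low = doublings_per_ply * DOUBLING_MONTHS[0] / 12
years_high = doublings_per_ply * DOUBLING_MONTHS[1] / 12
print(f"~{doublings_per_ply:.1f} doublings per extra ply, "
      f"i.e. {years_low:.0f}-{years_high:.0f} years of Moore's-law progress")
```

In other words, sustained exponential hardware growth buys only one extra move of naive lookahead per decade or so, which is why clever algorithms, not raw power, are what made computer chess strong.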
The Heat Problem: A Major Barrier
One major hurdle is heat. A modern processor can draw around 150 watts across a flat die only a couple of centimeters on a side, and stacking such dies, or fabricating in three dimensions, makes that heat very hard to remove. As Holden notes: "You have a problem getting rid of the heat. Then you have the amazing fact that your brain requires about 20 watts. This big old lump of stuff needs about 20 watts, and it's all in three dimensions, and not only is it all in three dimensions, it's massively more densely packed." In contrast to our flat, power-hungry chips, the brain is a compact, three-dimensional organ running on about 20 watts.
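The figures Holden quotes (around 150 watts for a processor a couple of centimeters on a side, about 20 watts for the whole brain) make the contrast stark. The brain-volume figure used below (~1.2 liters) is a commonly cited approximation assumed for this comparison, not a number from the video:

```python
# Power figures from the video; brain volume (~1200 cm^3) is a rough,
# commonly cited value assumed here for the comparison.
chip_power_w = 150.0
chip_area_cm2 = 2.0 * 2.0        # a flat die roughly 2 cm x 2 cm

brain_power_w = 20.0
brain_volume_cm3 = 1200.0

chip_density = chip_power_w / chip_area_cm2       # watts per cm^2 of die
brain_density = brain_power_w / brain_volume_cm3  # watts per cm^3 of tissue

print(f"chip:  {chip_density:.1f} W per cm^2, concentrated in a flat layer")
print(f"brain: {brain_density:.3f} W per cm^3, spread through a 3D volume")
```

Stacking dies multiplies the watts in the same footprint, which is why heat, not lithography alone, blocks the obvious route to brain-like three-dimensional hardware.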
The Singularity: A Challenge for Researchers
The concept of the singularity - the idea that artificial intelligence will eventually surpass human intelligence and become capable of improving itself - remains highly speculative. One reason to doubt that it is imminent is the state of both our hardware and our understanding. As Holden notes, "no one has any idea how to make something that complicated." We cannot yet build anything remotely as densely packed, as richly connected, or as energy-efficient as the brain, and much of what the brain does remains a mystery, which makes machines that surpass human intelligence unlikely to appear anytime soon.
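To give "that complicated" a rough scale, the sketch below uses commonly cited estimates (assumptions on my part, not figures from the video) of around 86 billion neurons and on the order of 10^14 synaptic connections in the brain, against roughly 10^10 transistors on a large modern chip:

```python
# Rough, commonly cited estimates; none of these figures are from the video.
brain_neurons = 8.6e10       # ~86 billion neurons
brain_synapses = 1e14        # ~100 trillion synaptic connections
chip_transistors = 1e10      # order of magnitude for a large modern chip

# A naive count: how many whole chips' worth of transistors would it take
# just to match the brain's connection count one-for-one? (Generous to the
# chip, since a synapse is far more capable than a transistor.)
chips_needed = brain_synapses / chip_transistors
print(f"~{chips_needed:,.0f} chips to match the synapse count alone")
```

Even this lopsided comparison ignores connectivity and energy use, the two dimensions on which, as Holden stresses, current technology is furthest behind.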
Recommended Listening: Audible.com
For those interested in exploring the topics discussed in this article, the episode recommends "Machines of Loving Grace" by John Markoff, a fascinating look at artificial intelligence and its potential applications. Audible.com, the episode's sponsor, is offering a free download from its library of over 180,000 audiobooks. Visit audible.com/computerphile to take advantage of the offer.
Hijacked Printers: A Cautionary Thought Experiment
Finally, Holden raises the risks of an AI pursuing the wrong goal. In the thought experiment he describes, a stamp-collecting AI hijacks the world's stamp-printing factories, or writes a virus that commandeers every computer so that all the world's printers do nothing but print stamps. The scenario is deliberately absurd, but it highlights the need for awareness and caution as we develop more capable systems: as we push the boundaries of what is possible, we must also consider the potential consequences of our actions.
Building Models of Human Behavior
Another area where researchers are making progress is building models of human behavior by observing how people act, for example by watching how they play games. This approach can provide valuable insights into the mechanisms that govern human cognition, though it is extraordinarily difficult: as Holden notes, "building a model of the behavior of a human by watching the way they play gets you into territories that are vastly harder even than Go." By studying human behavior and developing more sophisticated models of cognition, researchers may eventually unlock the secrets of human intelligence - and with them, machines that can match or surpass our own abilities.