The Ethics of Artificial Intelligence: A Looming Threat?
"WEBVTTKind: captionsLanguage: enokay so today we are watching the ted talk from sam harris about building a super intelligent ai and the concerns around that and everything so i guess we just hit play i'm going to talk about a failure of intuition that many of us suffer from it's really a failure to detect a certain kind of danger i'm going to describe a scenario that i think is both terrifying and likely to occur and that's not a good combination as it turns out and yet rather than be scared most of you will feel that what i'm talking about is kind of cool i'm going to describe how the gains we make in artificial intelligence could ultimately destroy us but it's kind of hard to be scared of something you don't really know what is or have experience or maybe you don't even fully understand isn't it in fact i think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves and yet if you're anything like me you'll find that it's fun to think about these things and that that response is part of the problem okay that response should worry you if i were to convince you in this talk that we were likely to suffer a global famine either because of climate change or some other catastrophe and that your grandchildren or their grandchildren are very likely to live like this you wouldn't think interesting i like this ted talk okay famine isn't fun death by science fiction on the other hand is fun yeah this is the this is the typical uh media representation of ai right the killer robots are coming to get you but ex machina i love that movie and one of the things that worries me most about the development of ai at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead that i'm unable to marshal this response and i'm giving this talk it's as though we stand before two doors behind door number one we stop making progress in building intelligent machines our computer hardware and software just stops getting better for some reason now take a moment to consider why this might happen okay for me like stopping the innovation of technology is like giving up on the future of human race isn't it how else are we going to escape the sun burning up and leaving earth can't really stop technology innovation can you can maybe you can make an argument that you can stop innovating and developing ai and artificial general intelligence but not technology like general technology i don't think we can do that but given how valuable intelligence and automation are we will continue to improve our technology if we are all able to what could stop us from doing this a full-scale nuclear war i guess so a global pandemic uh no an asteroid impact justin bieber becoming president of the united states madison the point is something would have to destroy civilization as we know it okay you have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently generation after generation almost by definition this is the worst thing that's ever happened in human history so the only alternative and this is what lies behind door number two is that we continue to improve our intelligent machines yeah year after year after year at a certain point we will build machines that are smarter than we are and once we have machines that are smarter than we are they will begin to improve themselves and then we risk what the mathematician i j good called an intelligence explosion that the the process could get away from us yeah 
Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us.

Yeah, this is exactly how it's always displayed: the killer robots are coming to get you.

But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us. Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them; we just step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm.

Okay, so I think this is a good point, but it's kind of impossible to imagine this situation as a reality while I'm standing here. That doesn't mean it's not plausible, but I can't visually see how it's going to come to that point in the future. Still, I guess the thought is valuable: that we become the ants of this world. I just can't really picture it in my head.

The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard. Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions, and there are only three of them.

The first: intelligence is a matter of information processing in physical systems.

Yeah, 100 percent. Nothing can be intelligent, at least in my eyes, without information processing.

Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called general intelligence, an ability to think flexibly across multiple domains, because our brains have managed it. There are just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, build general intelligence into our machines.

Okay, so what that means is that since the brain is built of atoms, machines built from the same building blocks as our brain can be at least as intelligent, or far more intelligent, because it's the same building blocks.

It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.

The second assumption is that we will keep going. We will continue to improve our intelligent machines.

Yeah, I fully agree. As long as nothing devastating happens to the human race, I can't really disagree with point number two: we will continue to improve.

And given the value of intelligence, and intelligence is either the source of everything we value or we need it to safeguard everything we value, it is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.
Finally, the third assumption: we don't stand on a peak of intelligence. We're not anywhere near the summit of possible intelligence.

Now, I can't really disagree with that, because I can't even imagine the peak of intelligence. What would that even look like? Is artificial general intelligence the peak? It's a tough question, isn't it? But no, I don't think we are near a summit of possible intelligence, or anywhere near it, likely.

And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable. Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. The impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken. Sorry, a chicken. There's no reason for me to make this talk more depressing than it needs to be.

Yeah, and don't forget we share more than 98 percent of our DNA with chimpanzees.

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine. And it's important to recognize that this is true by virtue of speed alone. So imagine we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. You set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
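Just to sanity-check that 20,000-years figure, it's the million-times speedup multiplied out. Here's the arithmetic (mine, using the talk's round numbers):

```python
# Sanity check of the talk's speed argument, using its round numbers:
# electronic circuits ~1,000,000x faster than biochemical ones, so one
# real week of runtime equals a million subjective weeks of work.

SPEEDUP = 1_000_000   # the talk's rough electronics-vs-biology factor
WEEKS_PER_YEAR = 52

subjective_years = (1 * SPEEDUP) / WEEKS_PER_YEAR
print(f"{subjective_years:,.0f} years")  # ~19,231, i.e. roughly 20,000
```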
The other thing that's worrying, frankly, is to imagine the best-case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around.

Hmm. Is that very likely? I guess we could get lucky.

It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work. So what would apes like ourselves do in this circumstance?

Yeah, what would we do? I guess we could live in a utopia where machines produce all the food we need, the water, the energy, and the shelter. Or could that really happen?

Well, we'd be free to play frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man. Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order. It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve. And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI?

Okay, so this is something I have been thinking about: the geopolitical issue with the rise of superintelligent AI. I even think this could be a reason for a third or maybe a fourth world war. If, let's say, one nation reaches general intelligence before the others, I guess the race could just immediately end, because you could use this superintelligent AI to strangle all the other countries. I might be wrong, but I definitely see geopolitical risk in AI.

This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be five hundred thousand years ahead, at a minimum.
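And that 500,000-years line is the same million-fold speed factor again, this time applied to a six-month head start (again my own check of the talk's numbers):

```python
# The "six months ahead = 500,000 years ahead" claim is the same
# million-fold speed assumption applied to a six-month head start.

SPEEDUP = 1_000_000
head_start_years = 0.5                             # six months
print(f"{head_start_years * SPEEDUP:,.0f} years")  # 500,000 years
```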
So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk. Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, worrying about AI safety is like worrying about overpopulation on Mars.

Well, this clip is actually a few years old, so maybe the ethics work was not so hot back then. But I know for a fact that big AI companies, say DeepMind, have staff working on the ethics and the risks of developing AI.

This is the Silicon Valley version of "don't worry your pretty little head about it." No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again: we have no idea how long it will take us to create the conditions to do that safely. And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we've had the iPhone. This is how long The Simpsons has been on television.

Yeah, look at that. Fifty years is not that much time to meet one of the greatest challenges our species will ever face. That's true.

Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming. The computer scientist Stuart Russell has a nice analogy here. He said: imagine that we received a message from an alien civilization which read, "People of Earth, we will arrive on your planet in 50 years. Get ready." And now we're just counting down the months until the mothership lands. We would feel a little more urgency than we do.

Another reason we're told not to worry is that these machines can't help but share our values, because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems.

So the first thing I thought of when he said that was Elon Musk's Neuralink. We don't exactly know what his company is going to deliver, but I know they are working on implanting technology directly into the brain.

Now take a moment to consider that the safest and only prudent path forward recommended is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually the safety concerns about a technology have to be pretty much worked out before you stick it inside your head. The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves to be in a race against all others, given that to win this race is to win the world, and a lot of money, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is the basis of intelligence, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we're in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with. Thank you very much.

Okay, so I enjoyed this. I think it's an easy way to digest how to think about the concerns around artificial general intelligence. I don't fully agree with 100 percent of everything he says; I think it's a bit, how do you say it, on the edge, and I would put the risks in a somewhat more refined way than he does. But I really like the framing of the three points: I believe intelligence is information processing, I don't think we will stop evolving our technology unless something terrible happens, and I don't think we have reached peak intelligence, far from it, to be honest. So yeah, I guess I am a believer that we will finally reach AGI. Of course, I don't know when; I guess no one knows. But as Harris puts it, as long as those three points are true, or are met, we will get there, I think. Okay, so I hope you enjoyed this clip. I did. Hope I see you in the next one.