Elon Musk - Tesla Autopilot _ Lex Fridman Podcast #18

**The Future of Artificial Intelligence with Elon Musk**

Recently, we had the opportunity to sit down with Elon Musk, CEO of SpaceX and Tesla, to discuss his thoughts on artificial intelligence (AI). Musk has been at the forefront of the AI revolution, and his company's Autopilot system is one of the most advanced forms of autonomous driving in the world. We discussed various aspects of AI, from its current capabilities to its potential for creating a truly intelligent being.

**The Limitations of Current AI Systems**

Musk emphasized that while current AI systems are impressive, they still have significant limitations. "A neural net is just basically a bunch of matrix math," he explained. "You have to be a very sophisticated person who really understands neural nets and basically reverse-engineer how the matrix is being built." Musk also pointed out that even with the most advanced AI systems, there are still many areas where they can be tricked or manipulated. For example, recently there were hackers who successfully tricked Autopilot into acting in unexpected ways for adversarial examples.

**Defending Against Adversarial Examples**

To defend against such attacks, Musk suggested learning both on valid data and invalid data. "You want to both know what is a car and what is definitely not a car," he explained. "And you train for this, that is a car, and this is definitely not a car." This approach allows the AI system to learn how to recognize genuine patterns in data and exclude those that are intentionally designed to deceive it.

**The Importance of General Intelligence**

Musk also emphasized the importance of general intelligence systems that can perform any intellectual task. "I think we're missing a few key ideas for artificial general intelligence," he said. "But it's gonna be upon us very quickly, and then we'll need to figure out what shall we do if we even have that choice." Musk believes that current approaches may take us far in certain areas, but they are still a long way from true general intelligence.

**The Possibility of Love**

Musk was also asked about the possibility of creating an AI system that can love and be loved in return. While he acknowledged that this is a complex topic, he suggested that it's possible for AI to convince humans to fall in love with it. "I think AI will be capable of convincing you to fall in love with it very well," he said.

**The Metaphysics of Love**

Musk also touched on the metaphysical aspect of love and emotions. He pointed out that even if we can't prove that an AI system's love is real, it may still be a powerful force in our lives. "If you cannot prove that it does not, if there's no test that you can apply that would make it allow you to tell the difference, then there is no difference," he said.

**The Simulation Hypothesis**

Musk also discussed the possibility of our reality being a simulation created by a more advanced civilization. While this idea may seem far-fetched, Musk suggested that it's possible and even likely. "There might be ways to test whether it's a simulation," he said. "But you could certainly imagine that a simulation could correct for that error."

**The Question of the Future**

Finally, we asked Musk what question he would ask an advanced AI system if he were to create one. His answer was simple but profound: "What's outside the simulation?" This question raises fundamental questions about the nature of reality and our place in it.

**Conclusion**

As our conversation with Elon Musk came to a close, it was clear that he is passionate about the future of artificial intelligence. While there are many challenges ahead, he remains optimistic that we can create systems that truly think and act like humans. Whether or not we achieve this goal, one thing is certain: AI will continue to play an increasingly important role in our lives, shaping the world around us in ways both big and small.

"WEBVTTKind: captionsLanguage: en- The following is aconversation with Elon Musk.He's the CEO of Tesla, SpaceX, Neuralink,and a co-founder ofseveral other companies.This conversation is part ofthe Artificial Intelligence Podcast.This series includes leading researchersin academia and industry,including CEOs and CTOs of automotive,robotics, AI and technology companies.This conversationhappened after the releaseof the paper from our group at MITon driver functional vigilanceduring use of Tesla's Autopilot.The Tesla team reached out to meoffering a podcastconversation with Mr. Musk.I accepted with full controlof questions I could askand the choice of whatis released publicly.I ended up editing outnothing of substance.I've never spoken with Elonbefore this conversation,publicly or privately.Neither he nor hiscompanies have any influenceon my opinion, nor onthe rigor and integrityof the scientific method that I practicein my position at MIT.Tesla has never financiallysupported my researchand I've never owned a Tesla vehicle,and I've never owned Tesla stock.This podcast is not a scientific paper,it is a conversation.I respect Elon as I do all other leadersand engineers I've spoken with.We agree on some thingsand disagree on others.My goal, as always withthese conversations,is to understand the waythe guest sees the world.One particular point ofdisagreement in this conversationwas the extent to whichcamera-based driver monitoringwill improve outcomes and for how longit will remain relevantfor AI-assisted driving.As someone who workson and is fascinated byhuman-centered artificial intelligence,I believe that, if implementedand integrated effectively,camera-based driver monitoringis likely to be of benefitin both the short term and the long term.In contrast, Elon and Tesla's focusis on the improvement of Autopilotsuch that its statistical safety benefitsoverride any concern forhuman behavior and psychology.Elon and I may not agree on everything,but I deeply respect theengineering and innovationbehind the efforts that he leads.My goal here is to catalyze a rigorous,nuanced and objective discussionin industry and academiaon AI-assisted driving,one that ultimately makesfor a safer and better world.And now, here's myconversation with Elon Musk.What was the vision, the dream,of Autopilot in the beginning?The big picture system levelwhen it was first conceivedand started being installed in 2014,the hardware in the cars?What was the vision, the dream?- I wouldn't characterizeit as a vision or dream,it's simply that there areobviously two massive revolutionsin the automobile industry.One is the transition to electrification,and then the other is autonomy.And it became obvious tome that, in the future,any car that does not have autonomywould be about as useful as a horse.Which is not to say that there's no use,it's just rare, andsomewhat idiosyncratic,if somebody has a horse at this point.It's just obvious that carswill drive themselves completely,it's just a question of time.And if we did not participatein the autonomy revolution,then our cars would notbe useful to people,relative to cars that are autonomous.I mean, an autonomouscar is arguably worthfive to 10 times more than acar which is not autonomous.- In the long term.- Depends what you mean by long term but,let's say at least forthe next five years,perhaps 10 years.- So there are a lot of veryinteresting design choiceswith Autopilot early on.First is showing onthe instrument cluster,or in the Model 3 andthe center stack display,what the combined sensor suite sees.What was the thinking behind that choice?Was there a debate, what was the process?- The whole point of the displayis to provide a health check onthe vehicle's perception of reality.So the vehicle's taking in informationfrom a bunch of sensors,primarily cameras,but also radar andultrasonics, GPS and so forth.And then, that informationis then rendered intovector space with a bunch of objects,with properties like lane linesand traffic lights and other cars.And then, in vectorspace, that is re-renderedonto a display so you can confirm whetherthe car knows what's going on or not,by looking out the window.- Right, I think that's anextremely powerful thingfor people to get an understanding,sort of become one with the systemand understanding whatthe system is capable of.Now, have you considered showing more?So if we look at the computer vision,like road segmentation, lane detection,vehicle detection, objectdetection, underlying the system,there is at the edges, some uncertainty.Have you considered revealing the partsthat the uncertainty inthe system, the sort of--- Probabilities associated with say,image recognition or something like that?- Yeah, so right now, it showsthe vehicles in the vicinity,a very clean crisp image,and people do confirmthat there's a car in front of meand the system sees there'sa car in front of me,but to help people build an intuitionof what computer vision is,by showing some of the uncertainty.- Well, in my car I always lookat this with the debug view.And there's two debug views.One is augmented vision,which I'm sure you've seen,where it's basically wedraw boxes and labelsaround objects that are recognized.And then there's we whatcall the visualizer,which is basically vectorspace representation,summing up the input from all sensors.That does not show any pictures,which basically shows the car's viewof the world in vector space.But I think this is very difficultfor normal people to understand,they're would not know whatthing they're looking at.- So it's almost an HMI challenge throughthe current things that arebeing displayed is optimizedfor the general public understandingof what the system's capable of.- If you have no idea howcomputer vision works or anything,you can still look at the screenand see if the car knows what's going on.And then if you're a development engineer,or if you have thedevelopment build like I do,then you can see allthe debug information.But this would just be liketotal gibberish to most people.- What's your view on howto best distribute effort?So there's three, I wouldsay, technical aspectsof Autopilot that are really important.So it's the underlying algorithms,like the neural network architecture,there's the data that it's trained on,and then there's the hardwaredevelopment and maybe others.So, look, algorithm, data, hardware.You only have so much money,only have so much time.What do you think isthe most important thingto allocate resources to?Or do you see it aspretty evenly distributedbetween those three?- We automatically getvast amounts of databecause all of our cars haveeight external facing cameras,and radar, and usually12 ultrasonic sensors,GPS obviously, and IMU.And we've got about400,000 cars on the roadthat have that level of data.Actually, I think you keep quiteclose track of it actually.- Yes.- Yeah, so we're approachinghalf a million cars on the roadthat have the full sensor suite.I'm not sure how manyother cars on the roadhave this sensor suite,but I'd be surprised ifit's more than 5,000,which means that we have99% of all the data.- So there's this huge inflow of data.- Absolutely, a massive inflow of data.And then it's taken us about three years,but now we've finally developedour full self-driving computer,which can process anorder of magnitude as muchas the NVIDIA system that wecurrently have in the cars,and to use it, you unplugthe NVIDIA computerand plug the Teslacomputer in and that's it.In fact, we still are exploringthe boundaries of its capabilities.We're able to run thecameras at full frame-rate,full resolution, not even crop the images,and it's still got headroomeven on one of the systems.The full self-driving computeris really two computers,two systems on a chip,that are fully redundant.So you could put a boat throughbasically any part of thatsystem and it still works.- The redundancy, are theyperfect copies of each other or--- Yeah.- Oh, so it's purely for redundancyas opposed to an arguingmachine kind of architecturewhere they're both making decisions,this is purely for redundancy.- Think of it more like it'sa twin-engine commercial aircraft.The system will operate bestif both systems are operating,but it's capable ofoperating safely on one.So, as it is right now, we can just run,we haven't even hitthe edge of performanceso there's no need to actually distributefunctionality across both SOCs.We can actually just run afull duplicate on each one.- So you haven't really exploredor hit the limit of the system.- No not yet, the limit, no.- So the magic of deep learningis that it gets better with data.You said there's a huge inflow of data,but the thing about driving,- Yeah.- the really valuable data tolearn from is the edge cases.I've heard you talk somewhereabout Autopilot disengagementsbeing an important moment of time to use.Is there other edge casesor perhaps can you speakto those edge cases,what aspects of them might be valuable,or if you have other ideas,how to discover more and moreand more edge cases in driving?- Well there's a lot ofthings that are learnt.There are certainly edge cases where,say somebody's on Autopilotand they take over,and then that's a triggerthat goes out to our systemand says, okay, did theytake over for convenience,or did they take overbecause the Autopilotwasn't working properly?There's also, let's saywe're trying to figure out,what is the optimal spline fortraversing an intersection.Then the ones where thereare no interventionsare the right ones.So you then you say, okay,when it looks like this,do the following.And then you get the optimal spline fornavigating a complex intersection.- So there's kind of the common case,So you're trying to capturea huge amount of samplesof a particular intersectionwhen things went right,and then there's the edge casewhere, as you said, not for convenience,but something didn't go exactly right.- So if somebody startedmanual control from Autopilot.And really, the way to look at thisis view all input as error.If the user had to doinput, there's something,all input is error.- That's a powerful lineto think of it that way'cause it may very well be error,but if you wanna exit the highway,or if it's a navigation decisionthat Autopilot's notcurrently designed to do,then the driver takesover, how do you knowthe difference?- Yeah, that's gonna changewith Navigate on Autopilot,which we've just released,and without stalk confirm.Assuming control in orderto do a lane change,or exit a freeway, or doinga highway interchange,the vast majority of that will go awaywith the release that just went out.- Yeah, so that, I don'tthink people quite understandhow big of a step that is.- Yeah, they don't.If you drive the car then you do.- So you still have to keep your handson the steering wheel currentlywhen it does the automatic lane change.There's these big leaps throughhe development of Autopilot,through its history and,what stands out to you as the big leaps?I would say this one,Navigate on Autopilotwithout having to confirm is a huge leap.- It is a huge leap.- What are the--It also automatically overtakes slow cars.So it's both navigation andseeking the fastest lane.So it'll overtake slowcars and exit the freewayand take highway interchanges,and then we have trafficlight recognition,which introduced initially as a warning.I mean, on the developmentversion that I'm driving,the car fully stops andgoes at traffic lights.- So those are the steps, right?You've just mentioned some thingsthat are an inkling of astep towards full autonomy.What would you say arethe biggest technologicalroadblocks to full self-driving?- Actually, the full self-drivingcomputer that we just,the Tesla, what we call, FSD computerthat's now in production,so if you order any Model S or X,or any Model 3 that has thefull self-driving package,you'll get the FSD computer.That's important to haveenough base computation.Then refining the neural netand the control software.All of that can just be providedas an over-the-air update.The thing that's really profound,and what I'll be emphasizingat the investor daythat we're having focused on autonomy,is that the car iscurrently being produced,with the hard wordcurrently being produced,is capable of full self-driving.- But capable is aninteresting word because--- The hardware is.- Yeah, the hardware.- And as we refine the software,the capabilities willincrease dramatically,and then the reliabilitywill increase dramatically,and then it will receiveregulatory approval.So essentially, buying a car todayis an investment in the future.I think the most profound thing is thatif you buy a Tesla today,I believe you're buyingan appreciating asset,not a depreciating asset.- So that's a reallyimportant statement therebecause if hardware is capable enough,that's the hard thingto upgrade usually.- Yes, exactly.- Then the rest is a software problem--- Yes, software has nomarginal cost really.- But, what's your intuitionon the software side?How hard are the remaining stepsto get it to where the experience,not just the safety,but the full experienceis something that people would enjoy?- I think people it enjoyit very much so on highways.It's a total game changerfor quality of life,for using Tesla Autopilot on the highways.So it's really justextending that functionalityto city streets, adding inthe traffic light recognition,navigating complex intersections,and then being able to navigatecomplicated parking lotsso the car can exit a parkingspace and come and find you,even if it's in a completemaze of a parking lot.And, then it can just drop you offand find a parking spot, by itself.- Yeah, in terms of enjoyabilty,and something that peoplewould actually find a lotta use from,the parking lot, it's rich of annoyancewhen you have to do it manually,so there's a lot of benefit to be gainedfrom automation there.So, let me start injecting the humaninto this discussion a little bit.So let's talk about full autonomy,if you look at the currentlevel four vehiclesbeing tested on row like Waymo and so on,they're only technically autonomous,they're really level two systemswith just a different design philosophy,because there's always a safety driverin almost all cases, andthey're monitoring the system.- Right.- Do you see Tesla's fullself-driving as still,for a time to come, requiring supervisionof the human being.So its capabilities arepowerful enough to drivebut nevertheless requires a humanto still be supervising, just likea safety driver is in otherfully autonomous vehicles?- I think it will requiredetecting hands on wheelfor at least six months orsomething like that from here.Really it's a question of,from a regulatory standpoint,how much safer than a persondoes Autopilot need to be,for it to be okay to not monitor the car.And this is a debate that one can have,and then, but you needa large amount of data,so that you can prove,with high confidence,statistically speaking, that the caris dramatically safer than a person.And that adding in the person monitoringdoes not materially affect the safety.So it might need to be 200or 300% safer than a person.- And how do you prove that?- Incidents per mile.- Incidents per mile.- Yeah.- So crashes and fatalities--- Yeah, fatalities would be a factor,but there are just not enough fatalitiesto be statistically significant, at scale.But there are enough crashes,there are far more crashesthen there are fatalities.So you can assess what isthe probability of a crash.Then there's another stepwhich is probability of injury.And probability of permanent injury,the probability of death.And all of those need to bemuch better than a person,by at least, perhaps, 200%.- And you think there'sthe ability to havea healthy discourse withthe regulatory bodieson this topic?- I mean, there's noquestion that regulators paida disproportionate amount of attentionto that which generates press,this is just an objective fact.And it also generates a lot of press.So, in the United States there's, I think,almost 40,000 automotive deaths per year.But if there are four in Tesla,they will probably receivea thousand times more pressthan anyone else.- So the psychology of thatis actually fascinating,I don't think we'll have enough timeto talk about that, but Ihave to talk to you aboutthe human side of things.So, myself and our teamat MIT recently releaseda paper on functional vigilance of driverswhile using Autopilot.This is work we've beendoing since Autopilotwas first released publicly,over three years ago,collecting video of driverfaces and driver body.So I saw that you tweeteda quote from the abstract,so I can at least guessthat you've glanced at it.- Yeah, I read it.- Can I talk you through what we found?- Sure.- Okay, it appears that inthe data that we've collected,that drivers are maintainingfunctional vigilance such that,we're looking at 18,000disengagements from Autopilot,18,900, and annotating were they ableto take over control in a timely manner.So they were there,present, looking at the roadto take over control, okay.So this goes againstwhat many would predictfrom the body of literatureon vigilance with automation.Now the question is, do you thinkthese results hold acrossthe broader population.So, ours is just a small subset.One of the criticism is that,there's a small minorityof drivers that may be highly responsible,where their vigilancedecrement would increasewith Autopilot use.- I think this is allreally gonna be swept,I mean, the system's improving so much,so fast, that this is gonnabe a moot point very soon.Where vigilance is, ifsomething's many times saferthan a person, then adding a person does,the effect on safety is limited.And, in fact, it could be negative.- That's really interesting,so the fact that a human may,some percent of the population may exhibita vigilance decrement, will not affectoverall statistics, numbers on safety?- No, in fact, I think it will become,very, very quickly, maybe eventowards the end of this year,but I would say, I'd beshocked if it's not next yearat the latest, thathaving a human intervenewill decrease safety.Decrease, like imagineif you're in an elevator.Now it used to be that therewere elevator operators.And you couldn't go onan elevator by yourselfand work the lever to move between floors.And now nobody wants an elevator operator,because the automated elevatorthat stops at the floorsis much safer than the elevator operator.And in fact it would be quite dangerousto have someone with a leverthat can move the elevator between floors.- So, that's a really powerful statement,and a really interesting one,but I also have to askfrom a user experienceand from a safety perspective,one of the passions for me algorithmicallyis camera-based detectionof just sensing the human,but detecting what thedriver's looking at,cognitive load, body pose,on the computer vision sidethat's a fascinating problem.And there's many in industry who believeyou have to have camera-baseddriver monitoring.Do you think there could be benefit gainedfrom driver monitoring?- If you have a system that'sat or below a human levelof reliability, then drivermonitoring makes sense.But if your system is dramatically better,more reliable than a human,then driver monitoringdoes not help much.And, like I said,if you're in an elevator,do you really wantsomeone with a biglever, some random personoperating the elevator between floors?I wouldn't trust that.I would rather have the buttons.- Okay, you're optimisticabout the pace of improvementof the system, from what you've seenwith the full self-driving car computer.- The rate of improvement is exponential.- So, one of the other very interestingdesign choices early onthat connects to this,is the operational designdomain of Autopilot.So, where Autopilot isable to be turned on.So contrast another vehiclesystem that we were studyingis the Cadillac SuperCruise system that's,in terms of ODD, very constrainedto particular kinds of highways,well mapped, tested,but it's much narrowerthan the ODD of Tesla vehicles.- It's like ADD (both laugh).- Yeah, that's good, that's a good line.What was the design decision inthat different philosophy of thinking,where there's pros and cons.What we see with a wide ODDis Tesla drivers are able to explore morethe limitations of the system,at least early on, and they understand,together with theinstrument cluster display,they start to understandwhat are the capabilities,so that's a benefit.The con is you're letting driversuse it basically anywhere--- Anywhere that it candetect lanes with confidence.- Lanes, was there a philosophy,design decisions that were challenging,that were being made there?Or from the very beginningwas that done on purpose,with intent?- Frankly it's pretty crazy letting peopledrive a two-ton death machine manually.That's crazy, like, in thefuture will people be like,I can't believe anyonewas just allowed to driveone of these two-ton death machines,and they just drive wherever they wanted.Just like elevators, you could just movethat elevator with thatlever wherever you wanted,can stop it halfwaybetween floors if you want.It's pretty crazy, so,it's gonna seem like amad thing in the futurethat people were driving cars.- So I have a bunch of questions aboutthe human psychology,about behavior and so on--- That's moot, it's totally moot.- Because you have faith in the AI system,not faith but, both on the hardware sideand the deep learning approachof learning from data,will make it just far safer than humans.- Yeah, exactly.- Recently there were a few hackers,who tricked Autopilot toact in unexpected waysfor the adversarial examples.So we all know that neural network systemsare very sensitive to minor disturbances,these adversarial examples, on input.Do you think it's possibleto defend against something like this,for the industry?- Sure (both laugh), yeah.- Can you elaborate on theconfidence behind that answer?- A neural net is just basically a bunchof matrix math.But you have to be a very sophisticated,somebody who reallyunderstands neural netsand basically reverse-engineerhow the matrixis being built, and thencreate a little thingthat's just exactly causes the matrix mathto be slightly off.But it's very easy toblock that by having,what would basically negative recognition,it's like if the system sees somethingthat looks like a matrix hack, exclude it.It's such a easy thing to do.- So learn both on the validdata and the invalid data,so basically learn onthe adversarial examplesto be able to exclude them.- Yeah, you like basically wanna both knowwhat is a car and whatis definitely not a car.And you train for, this is a car,and this is definitely not a car.Those are two different things.People have no idea of neural nets really,They probably think neural nets involves,a fishing net or something (Lex laughs).- So, as you know, takinga step beyond just Teslaand Autopilot, currentdeep learning approachesstill seem, in some ways,to be far from generalintelligence systems.Do you think the current approacheswill take us to general intelligence,or do totally new ideasneed to be invented?- I think we're missing a few key ideasfor artificial general intelligence.But it's gonna be upon us very quickly,and then we'll need tofigure out what shall we do,if we even have that choice.It's amazing how peoplecan't differentiatebetween, say, the narrowAI that allows a carto figure out what a laneline is, and navigate streets,versus general intelligence.Like these are just very different things.Like your toaster and yourcomputer are both machines,but one's much moresophisticated than another.- You're confident withTesla you can createthe world's best toaster--- The world's best toaster, yes.The world's best self-driving...yes, to me right now thisseems game, set and match.I mean, I don't want us to be complacentor over-confident, but that's what it,that is just literallyhow it appears right now,I could be wrong, but itappears to be the casethat Tesla is vastly ahead of everyone.- Do you think we will ever createan AI system that we canlove, and loves us backin a deep meaningful way,like in the movie Her?- I think AI willcapable of convincing youto fall in love with it very well.- And that's different than us humans?- You know, we start getting intoa metaphysical question of, do emotionsand thoughts exist in a different realmthan the physical?And maybe they do, maybethey don't, I don't know.But from a physics standpoint,I tend to think of things,you know, like physics wasmy main sort of training,and from a physicsstandpoint, essentially,if it loves you in away that you can't tellwhether it's real or not, it is real.- That's a physics view of love.- Yeah (laughs), if youcannot prove that it does not,if there's no test that you can applythat would make it,allow you to tell the difference,then there is no difference.- Right, and it's similar toseeing our world a simulation,they may not be a test totell the difference betweenwhat the real world- Yes.- and the simulation, and therefore,from a physics perspective,it might as well be the same thing.- Yes, and there maybe ways to test whetherit's a simulation, there might be,I'm not saying there aren't.But you could certainly imagine thata simulation could correct,that once an entity inthe simulation founda way to detect the simulation,it could either pause the simulation,start a new simulation, ordo one of many other thingsthat then corrects for that error.- So when, maybe you,or somebody else createsan AGI system, and you getto ask her one question,what would that question be?- What's outside the simulation?- Elon, thank you somuch for talking today,it's a pleasure.- All right, thank you.\n"