Why This Tesla is STILL Gonna Hit This Truck

Tesla's Self-Driving Technology: A Game Changer or a Waste of Time?

Tesla has been working on its self-driving technology, and what they've achieved is nothing short of remarkable. They've created an overturned truck identification program that is unparalleled in sophistication. However, some other folks working on self-driving technology see this as a huge waste of time because a ton of that information can be filled in by lidar instead. Lidar, or light detection and ranging, uses lasers to scan the environment around a vehicle and construct a 3D model of it with incredible accuracy.

Car makers like Google believe that lidar is the way forward for self-driving cars. They see visual systems as having problems with the kind of data they deliver. Too much of it is simply irrelevant. An autonomous car doesn't need to know what color an overturned truck is, because hitting a red truck is just as bad as hitting a blue one. However, things like color data are part of the information being sent from the camera and have to be handled by the image processor, which can sometimes make its job harder than it needs to be.

But lidar proponents argue that it's just one part of a system that cross-references with others like cameras and radar to give the most complete and efficient picture of the world. They see visual systems as being limited in their ability to distinguish between similar objects, such as curbs and lane lines. A car with lidar uses lasers to scan the environment around it and constructs a 3D model of the environment complete with distance and direction. With lidar, you don't need a complicated piece of curb identification software that you have to teach to interpret visual data to distinguish between curbs and lane lines.

Lidar technology is also susceptible to weather and interference, which makes it unsuitable for a level five car that needs to be able to go anywhere regardless of conditions. However, researchers at the University of Texas have developed lidar technology using something called an avalanche diode, which massively increases lidar sensitivity while simultaneously decreasing the size needed to make a sensitive lidar detector and removing the need for bulky, cooling devices.

But what might actually bring down the cost of lidar technology is a big company integrating it into their cars. Tesla has partnered with Luminar for testing and developing purposes, and that could be the key to making lidar more accessible to other car makers. Elon Musk has even softened his stance on lidar, acknowledging that much of what he said before was just talking smack. It seems that Tesla is coming around to the idea of using lidar technology in their cars.

In fact, a Tesla with factory plates was spotted testing a lidar system in Florida recently, and the system was from a company called Luminar. This development has left many wondering if Tesla's self-driving technology is finally on its way to becoming a reality. With lidar technology on board, Tesla may finally be able to offer its cars as level five vehicles, capable of operating safely and efficiently in any condition.

As we look to the future of self-driving cars, it's clear that lidar technology is going to play a major role. While visual systems have their limitations, lidar provides a level of accuracy and reliability that can't be matched by cameras alone. Whether or not Tesla decides to stick with its original plan of relying solely on visual data remains to be seen, but one thing is certain - the game of self-driving technology has just gotten a whole lot more interesting.

Follow us here on Instagram @donutmedia. Follow me on Instagram @jeremiahburton. Follow me on Tik Tok @silenceofthelambda. Until next week, bye for now.

WEBVTTKind: captionsLanguage: en- Tesla has made significant updatesto their self-driving systemsand Elon Musk, says thattheir latest versionwill be capable of fullautonomy by the end of 2021.They better hurry up though,cause that's like in a month.And that's a pretty bold claim,especially because 18 months ago,a model three on autopilot collidedwith an overturned truck.That's not something anyfully autonomous vehicleshould ever be doing.Less you want robotsto take over the world.Now we made an entire episodeabout why that happenedand why lidar, a laserbased detection systemthat Musk called, unnecessary, expensiveand doomed, might have preventedthat collision all along.So in a surprise move,Tesla has been recentlyexperimenting with lidar.So today, we're going to find out,is Tesla going back on their word?Are they going to stickto using all cameras?Or are they going tobring lidar into the mix?Or maybe, just maybe, Musk isexaggerating the whole thing.Wouldn't be the first time,I bought a bunch of Dogecoinand it ruined me. (laughing)(instrumental music)When Musk says the latest versionof Tesla's self-driving systemsare capable of full autonomy,what he means is level five autonomy.Unlike your buddy's stage threeNos energy drink dispenser.This number actually means something.Now, technically they'resix levels cause zero countsand zero is a number andwhen you do 1, 2, 3, 4, 5,and then add on zero, it's another number.So there's six, there's six total levelswith zero being the lowest level.Zero through five is six levels.I know math is hard. Ifthis is tough for you,go watch Money Pit.Now level zero provides simple warningsor small momentaryassistance to the driver.Things like blind spotwarning or emergency braking,they're level zero. Levelone and two are pretty muchthe same just with increasing complexity.So they have assist like lane centering,adaptive cruise controland crash avoidance.Up to this point, thehuman behind the wheelis still the one incontrol of the vehicle.And they're responsible formonitoring their surrounding.Once you hit level threeor higher up, that flips,and it's the car doing the driving.This is like sci-fi stuff,like driverless taxisthat don't even have steering wheels.And at the top of the scaleis a full level five autonomy.A car that can completelydrive itself from any locationto any other locationwithout human interventionor supervision.The last stage, level fiveis what Musk claims Tesla'swill be capable of very soon.And some people are skeptical,including Tesla's own engineers.Their director of autopilotsoftware even toldthe California departmentof motor vehiclesthat Musk's claim does notmatch engineering reality.Now, given the current stateof driverless technology,it seems like we might bea long way from level five.Now for a car to be fullylevel five autonomous,able to drive itselfwith no human interactionat a minimum, it to be able to recognizeand avoid hazardous objectslike overturned trucks.Now, in that previousepisode I talked about,the model three, it failed to do that.And we explained four systemsthat can detect potential hazards.Sonar, radar, cameras and lidar.Each of these systems hasstrengths and weaknesses,but when it comes to fullautonomy, sonar and radar alonedon't provide enough information.See sonar uses sound wavesand works great for stuffthat's nearby, but it hasa relatively short range,about 10 meters. Actuallygets less accurateas distance increasesand then the sweet spotis just below two meters.That means it is greatfor automatic parkingand blind spot warnings,but it can't detect stoppedtraffic a hundred meters ahead.Radar operates very similar to sonarbut with radio waves,that gives it more rangebut like sonar, it hasrelatively low resolution.Radar can tell you howfar away an object isand how its speed is relative to yours,which is great for thingslike guided cruise control,but over long distances, itcan't reveal important detailslike whether some objectis a truck stopped directlyin your path or a low-hangingsign above the highwayor a cardboard box sittingclose to the road's edge.And like sonar, it gets lessaccurate as distance increases.Sonar and radar only providetwo dimensional information,direction and distance.It tells you there's something over thereand how far away over there is.That's fine for driveraids like parking assistbecause the human drivercan use their eyesto identify what's around andif they need to be worriedabout it. The humansvision is another sourceof information, a way tocross-reference distanceand direction to eliminate errorsand to fill in the 3Dinformation necessaryto move around the environment safely.But to provide the information necessaryfor a self-driving car,without human involvement,Tesla and other companiesincorporate video cameras.Cameras provide a hugeamount of information.And Tesla is so confident aboutcameras that in May, 2021,they stopped puttingradar in the model three,getting rid of radar andrelying solely on camerasis part of Tesla's long-termstrategy for self-driving cars.Elon Musk said that a vision only systemis ultimately all that isneeded for full autonomy.The rationale is thatif you want to make a self-driving car,why not model it after us?We're like perfectcomputers, perfect computers.Computers interpreting datafrom cameras is similarto how a human driversbrain detects road hazardsusing their eyes.Tesla is even callingtheir new radarless system,Tesla Vision, to emphasizethey're all in stance on cameras.But there are a couple of problemswith the vision only strategy.One is that it's not how humans drive.Even though other sensesmight play a lesser role,we still rely on morethan just visual data.We use our hearing to listen for hazards.Our sense of touch to feelchanges in road surfaceor the wind pushing the carand our inner ear tosense changes in velocity.We also use our memoryto keep track of objectsthat briefly leave our visual field,say a cyclist that wejust pass before arrivingat a stop sign.Human drivers cross-reference between lotsof information sources.We don't drive on vision alone.Another problem with camerasis that like sonar and radar,cameras only provide twodimensional information.There's no information aboutdistance in a single image.In our previous episode,we explained how acomputer can compare imagesfrom multiple cameras and identify objectsand determine their distancefilling in that missing 3D information.But even with multiple cameras,accidents happen, to acamera or even a human eye,a light colored object like the roofof an overturned truckmight be undetectableif the background is asimilarly light colored sky.According to Tesla, that'sexactly what happenedin a Florida accident in 2016.The light being reflected by a truckwas similar in color andbrightness to the sky behind it.And neither the Tesla's autopilot computeror the driver saw thetruck as anything otherthan part of the background.Or at least that's how Tesladescribed the incident.It's also possible thatthe cameras couldn't detectthe difference and the human driverwasn't paying enough attention.Ordinarily a human will bemuch better than a cameraat detecting small contrastbetween light and darkand therefore much better atdetecting an object againsta similar background.That limitation of cameras is one reasonwhy we don't have fullyautonomous cars just yet.The presence of a human driverwhose eyes can perceive moreor different information comparedto a camera provides aform of cross-referencingor redundancy in a visual only system.Normal human vision can detectthings that autopilot misses.So is a potential solution,just have better cameras?Well, that presents another problem.Tesla's current hardware version is HW3,homework assignmentthree, get it out of here.I don't do schoolwork no more.(grunts)Tesla's current hardware version is HW3.That's been in use since 2019and the Tesla in thatFlorida crash, that was 2016.So that was like,I don't know, you guys do the math.HW1 was the standard back then,and it only had a singlecamera facing forward.The HW1 autopilot computerwould cross-reference visual informationwith radar data to make decisionsabout potential hazards,crash avoidance features wouldn'tactivate unless the visualand radar data agreed.Agreements help minimizethe car's reactionto false positives.You don't want the car slammingon the brakes every timeit detects a shadow.So it's assumed that a humandriver would be presentand paying attentionand if the camera and radarsystems aren't in agreementabout a hazard, there'sstill someone thereto make the right decision.It's possible that older versionsof the autopilot hardwarelike HW1 were moresusceptible to visual errorsbecause of their singlecamera setup. But the Tesla,which hit the truck overturnedin 2020, it had HW3,the current system.So what's going on withthat Mr. Elon musket?HW3 includes a 360 degreearray of eight cameras,three of which are lookingdown the road for obstacles.One has a wide angle lensfocused 60 meters ahead.One has a narrow focus lenszoomed out to 250 metersand one with an ordinarylens focused at 150 meters.By comparison, it takes a human driverabout a hundred meters to notice a hazard,hit the brakes and cometo a complete stop,if you're going 70 miles per.Having cameras focused atdifferent distances providesyet another form of cross-referencingand to detect a hazard,the Tesla's autopilotcomputer will compare imagesfrom all three cameras.But the autopilot computerisn't just comparing single imageslike we talked about in our last episode,it's comparing video, ever heard of it?Now that video actually isn't recordedat very high resolution, just 720P.If you want to see what that looks like,you could choose 720P asan option for this video.It's really not that bad, how I look?Okay for the four people who know thatthat's not how it works,don't yell at me, Okay.I know it. I'm just, it's a joke.Now go back and tell your large mom I say,I'll be home for dinner.You may not be able tosee each individual hair,but you can still clearlysee that I have a mustacheand you wouldn't have anytrouble seeing a truckif it was about to run me over.A 720 P video camera meansthe total amount of datafor each frame is 921,600 pixels,or about 0.9 megapixel.The image processor withinthe HW3 autopilot computercan process a maximum of 1billion pixels per second.That is a lot, but each camerais producing 36 frames per second,for about 33 million totalpixels per camera every second,multiply that times eight cameras.And that's about, I don't know,250 million pixels per second being sentto the imaging processor.Having a processor with a maximum capacitythat is four times the average amountof data it receives ensuresit doesn't get overwhelmed.Also all the images comingfrom one second to the next,have a lot of similar data.So it doesn't have to reprocessall of that extra data.So it's using lessprocessing power to do that.That prevents delaysin identifying objects,which obviously is important for safety.But it couldn't safely processan image quality much higher than that.If you increase the videoresolution to something, say,I don't know, 4K, the gold standard,at 60 frames per second,that same image processorwould be overwhelmed processing the datafrom just two cameras.So Tesla's use videocameras operating at 720Pbecause that's probably enough detailto recognize most hazards.And it works well with processors capableof crunching through 1billion pixels a second.The current problem for avisual only autonomous vehicle,is not getting cameras orcomputers that are good enough.It's actually havingsoftware sophisticated enoughto sift through the huge amounts of datathat has to be interpreted.That software needs to comeup with the right answerwhen it's not a hundred percent clearif something is a truck, ashadow, or just part of the sky.The Tesla that collided withthat overturned truck almostcertainly detected somethingwith its cameras, butit couldn't figure outwhat it was seeingand decided it wasn't ahazard in the car's path.Decisions like that are the problemsthat keep Tesla's engineers up at night.The short version of howTesla solves those problemsis using machine learning.That's when software is ableto generalize from experience.Here's an overly simple exampleto at least get a sense of how it works.You feed a bunch of visualimages into a computer program,and some of them have overturnedtrucks, some are shadows,and some were just bright sky.The images are fromlots of different angleswith lots of differentlighting conditions.For each image, theprogram chooses whetherit thinks there's a truckin the image or not.It's then told whether it's correctand it remembers that.With lots of othercharacteristics of that image.If you repeat this process enough,eventually the computerprogram will get very goodat distinguishing betweenimages with, and without trucks,even ones that have anglesand lighting conditionsthat it's never seen before.Obviously creating a programlike that and training ittakes a lot of work and along time. But in essence,that is what Tesla has been working on.They've developed theworld's most sophisticatedoverturned truck identification program.And they've also taught itto identify pedestrians,stop signs, curbs, othercars, cyclists, buildings,parking spaces, and prettymuch everything elsethat's relevant to a vehicle on the road.That is a huge achievement.But to some other folks workingon self-driving technology,it's also a huge waste of timebecause a ton of thatinformation can be filled inby lidar instead.You know what I wantto get filled in with,your mom's cream pudding.A car with lidar uses lasers to scanthe environment aroundit and to its proponents,it bridges the gapbetween radar and cameras.It constructs a 3D modelof the environment completewith distance and directionlike you get with radarand shape and structure,like you get with cameras. With lidar,you don't need a complicated pieceof curb identification softwarethat you have to teachto interpret visual datato distinguish betweencurbs and lane lines.That's because a car doesn'tneed to know what a curve is.It just needs to know thatthere are small hard ledges overthere that you shouldn't run over.Proponents of lidar alsosay that visual systemshave problems with thekind of data they deliver.Too much of it is simply irrelevant.An autonomous car doesn'tneed to know what coloran overturned truck is.Hitting a red truck is justas bad as hitting a blue one.Can't we all just get along?It doesn't even need to knowthat it's a truck just thatthere is some solid objectdirectly in the cars path.But things like color data,which isn't important for manyinstances of crash avoidanceis part of the informationbeing sent from the cameraand all of that extra informationhas to be handled by the image processorwhich sometimes makes it's jobharder than it needs to be.Of course, for a fully self-driving car,you do need some visual informationlike color to identifystoplights, for example.Well, actually not reallycause it goes red, yellow...So they could just look at position.All right. (chuckles)And that's why lidar proponents all agreethat it's just one part of a systemthat cross-referenceswith others like camerasand radar to give the most complete,most efficient picture of the world.The disagreement between ElonMusk and proponents of lidar,which includes Googleand every other car makerother than Tesla, isn't aboutwhether to use cameras or not.It's whether the best solutionto driverless cars reallycould be visual only. Ofcourse, lidar isn't perfect.Musk is right about onething, it's expensive.It uses bulky ugly equipmentand it's susceptible to weatherand interference meaningit might not be suitablefor a level five carwhich needs to be ableto go anywhere regardless of conditions.Or at least it was all ofthose things until recently.Researchers at the University of Texas,hell yeah longhorns,and the University of Virginia,yeah go buck weevils,have developed lidar technologyusing something calledan avalanche diode,that multiplies the electronsemitted by a lidar device.This massively increase lidar sensitivitywhile simultaneouslydecreasing the size neededto make a sensitive lidar detectorand removing the need forbulky, cooling devices.Lots of current highsensitivity lidar detectorswon't work unless they'rehundreds of degrees below zero.That's pretty fricking insane.That's rolling around on your car.Unfortunately, that doesn'tmake the technology any cheaper.But what might do thatis a big companyintegrating lidar technologyinto their cars andthat could be happening.A Tesla with factoryplates was spotted testinga lidar system in Florida recently.And that system was froma company called Luminar.Tesla has partnered with Luminarfor testing and developing purposes.So it's possible thatTesla is coming around.Elon Musk has evenopenly softened on lidar.The tech has always beenused by his SpaceX company.They even build theirown custom lidar system.When it comes to lidar he now saysthat much of what he said before had beenjust talking smack.Thank you guys so much forwatching this episode of B2B.What do you guys think aboutTesla and their technology?Do you like it? Do you hate it?Leave a comment down below.I read through the comments still.I still go through them.I'm always looking.Follow us here on Instagram @donutmedia.Follow me on Instagram @jeremiahburton.Follow me on Tik Tok @silenceofthelambda.Until next week.Bye for now.