HDR+ Technology: A Game-Changer in Photography
HDR+ was developed by Marc Levoy, a former Stanford professor who left to join Google and build the technology for the Nexus 6. Rather than taking one long exposure, the camera captures a burst of frames with very short exposure times, so the highlights are never blown out, even though each individual frame is underexposed. HDR+ then selects the cleanest frame, aligns and stacks the others on top of it, averages the pixels to suppress noise, and raises the shadows, producing a single image that is both bright and detailed, with excellent dynamic range.
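The merge-and-average step at the heart of this pipeline can be sketched in a few lines of NumPy. This is purely illustrative (perfect alignment is assumed, and real HDR+ uses far more sophisticated alignment, merging, and tone mapping); all variable names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a burst of short, underexposed frames of the same dark scene.
# Each frame is the true signal plus independent sensor noise.
true_scene = np.full((4, 4), 0.1)          # dark scene, values in [0, 1]
num_frames = 8
burst = [true_scene + rng.normal(0, 0.02, true_scene.shape)
         for _ in range(num_frames)]

# Alignment is assumed perfect here; HDR+ must actually align frames first.
merged = np.mean(burst, axis=0)            # averaging cuts noise ~ 1/sqrt(N)

# Raise the shadows with a simple gain (a crude stand-in for tone mapping).
result = np.clip(merged * 5.0, 0.0, 1.0)

noise_single = np.std(burst[0] - true_scene)
noise_merged = np.std(merged - true_scene)
print(f"single-frame noise ~ {noise_single:.4f}, merged ~ {noise_merged:.4f}")
```

Averaging N independent frames reduces the noise standard deviation by roughly the square root of N, which is why the shadows can then be raised without the image falling apart.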
On the Nexus 6 (and later the Nexus 6P), processing took long enough that users had to wait for an on-screen circle to complete before they could view the image. Google's subsequent Pixel and Pixel XL got around this by stacking the image sensor directly on top of the DRAM chip, allowing much faster readout; the data could be pushed through the Snapdragon processor and an HDR+ image produced before users even realized one was being captured.
HDR+ has been a key factor in the success of Google's Pixel series of phones, which have become renowned for their exceptional camera capabilities. Because the approach is largely software-based, even lower-end phones can compete with high-end devices, as seen with Xiaomi's recent release, the Mi A2.
Future Developments and Advancements
Nvidia is working on several technologies that could further enhance image processing in smartphones. One is a machine-learning denoising algorithm that can remove virtually all visible noise from an image. This could make HDR+ even more effective: raising the shadows of an underexposed photo amplifies noise, so a strong denoiser would let the camera use even shorter exposures, raise the shadows further, and still deliver a cleaner image with more dynamic range.
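The interaction between shadow-raising and noise can be seen in a toy example. Nvidia's denoiser is a trained neural network; here a simple moving-average filter stands in for it, just to show that the gain amplifies noise and that denoising after the gain recovers a cleaner result. Everything below is an illustrative sketch, not Nvidia's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# A dark, underexposed 1-D "signal" with sensor noise on top.
signal = np.full(1000, 0.05)
noisy = signal + rng.normal(0, 0.01, signal.shape)

# Raising the shadows multiplies signal AND noise by the same gain.
gain = 8.0
boosted = noisy * gain
residual_before = np.std(boosted - signal * gain)   # noise std ~ gain * 0.01

# Stand-in denoiser: a simple moving average (the real one is a trained CNN).
kernel = np.ones(9) / 9
denoised = np.convolve(boosted, kernel, mode="valid")
residual_after = np.std(denoised - gain * signal[:denoised.size])

print(f"noise after gain ~ {residual_before:.4f}, after denoising ~ {residual_after:.4f}")
```

The averaging filter cuts the residual noise by roughly a factor of three here (square root of the 9-sample window); a learned denoiser does far better because it can distinguish noise from real image structure.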
Another exciting development is Nvidia's enhanced AI-powered content-aware fill. In Photoshop and similar programs, content-aware fill lets users scrub over an unwanted portion of an image and have the program fill that region in with whatever it predicts should be there. Past implementations have been hit-or-miss, but the version Nvidia demonstrated, driven by a machine-learning model, can convincingly erase wrinkles from a face, a chair in the background, or an unwanted painting on a wall, a genuine step forward for smartphone photography.
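The masking workflow behind content-aware fill can be sketched in miniature. The fill rule below (replace masked pixels with the mean of the unmasked ones) is deliberately trivial and is nothing like the neural inpainting Nvidia showed; it only conveys the mask-then-fill structure of the feature:

```python
import numpy as np

# A flat gray "wall" with an unwanted dark "painting" in the middle.
image = np.full((8, 8), 0.6)
image[3:5, 3:5] = 0.0

# The user scrubs over the object, producing a boolean mask.
mask = np.zeros_like(image, dtype=bool)
mask[3:5, 3:5] = True

# Trivial fill rule: use the mean of all unmasked pixels.
# (Real inpainting synthesizes plausible texture and structure instead.)
fill_value = image[~mask].mean()
filled = image.copy()
filled[mask] = fill_value

print(np.allclose(filled, 0.6))   # True: the "painting" is gone
```

A constant fill only works because the background here is flat; the hard part, and what the learned models solve, is hallucinating plausible detail when the surroundings are textured.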
The Benefits of Software-Based Camera Technology
One of the significant advantages of HDR+ technology is that it doesn't require significant hardware upgrades to achieve its exceptional results. This means that even lower-end phones can benefit from Google's software-based approach, making high-quality camera capabilities more accessible to a wider range of users. As such, we can expect to see even more smartphones with impressive camera capabilities in the future.
The Future of Smartphone Cameras
With Google's continued innovation and advancements in HDR+ technology, as well as the development of new technologies like denoising algorithms and AI-powered content-aware fill, smartphone cameras are set to become even more sophisticated. The potential for these technologies to improve image quality, reduce noise, and enhance overall photography experience is vast.
While we can't wait to see what Google will do with its next-generation camera technology, it's clear that the company has made significant strides in recent years to establish itself as a leader in smartphone cameras. As the competition heats up, we can expect even more exciting developments in this rapidly evolving field.