Google Lens new features hands-on

Google I/O 2018: Revolutionizing Google Lens with AI and AR

At day one of Google I/O 2018, we got a hands-on look at the latest features coming to Google Lens, Google's AR and AI platform built into Google Assistant. When Google introduced Lens last year, it was essentially a way to look through the camera's viewfinder and identify objects in photos. This latest iteration is far more sophisticated: it combines Google's natural language processing, object recognition, and image recognition into a single platform, so the smartphone can see and understand the world around it and parse human language.

Prior to today, Google Lens was accessible only within Google Assistant. Now it works right from the smartphone's camera, and on devices beyond Google's own. At the demo area, an LG G7 and a whole wall of props let us use Lens to identify objects and pull up information from Google Search.

There are three ways to access Google Lens. The first is to open the camera app and tap the Google Lens button, at which point the phone starts trying to identify objects it sees through the viewfinder. The second is to touch and hold the home button to launch Assistant, then tap the Lens button. The third, a double tap on the camera button, works only on the LG G7.

One of the most impressive features of Google Lens is the way it marks the objects it identifies with small colored dots. Tapping a dot pulls up Google Search results for that object. In our demo, Lens recognized an album cover as Justice's Woman (conveniently, Justice is performing at Google I/O tomorrow), identified a painting by Pablo Picasso, and even pegged a photo of a dog as a Norwegian Lundehund. Pointed at an item of clothing, it pulled up shopping results from Macy's and QVC and prompted us to buy the item online.
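Google hasn't published how Lens's recognition pipeline works internally, but it ships comparable building blocks to developers in ML Kit. The sketch below is a rough analogy, not Lens's actual implementation: the `labelFrame` function, the `bitmap` input, and the 0.7 confidence cutoff are all assumptions made for illustration.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

// Label the contents of a camera frame, loosely analogous to the
// recognition step behind Lens's colored dots. `bitmap` is assumed
// to be a frame captured from the camera with rotation already applied.
fun labelFrame(bitmap: Bitmap) {
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)
    val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)

    labeler.process(image)
        .addOnSuccessListener { labels ->
            // Each label carries a description and a confidence score.
            // An app would surface only high-confidence results, much
            // like Lens only dots objects it is reasonably sure about.
            labels.filter { it.confidence > 0.7f }
                .forEach { println("${it.text}: ${it.confidence}") }
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```

From a label like "painting" or "dog," the remaining step is a plain Google search on the recognized term, which is what tapping a dot does in the Lens UI.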

But Google Lens isn't limited to identifying objects. It can also parse text from books, menus, and other sources. Point the camera at a restaurant menu and Lens can pull up images of the dishes, YouTube videos showing how to make them, or translations into English, Spanish, or any other language Google Translate supports. Point it at a passage in a book, such as Zadie Smith's Swing Time, and you can grab the text as though you had copied and pasted it from a document, then translate it or run a Google search on it. In effect, Lens takes text from anywhere out in the world, street signs, restaurant menus, even books, and makes it searchable.
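Again, Lens's own OCR stack isn't public, but Google's ML Kit exposes a similar recognize-then-translate flow to developers. The following is a minimal sketch under assumptions: the `bitmap` frame, the `recognizeAndTranslate` helper, and the French-to-English direction are all chosen purely for illustration.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Recognize text in a camera frame, then translate it -- a simplified
// stand-in for pointing Lens at a foreign-language menu.
fun recognizeAndTranslate(bitmap: Bitmap) {
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
    recognizer.process(InputImage.fromBitmap(bitmap, 0))
        .addOnSuccessListener { result ->
            // Full recognized text, copy/paste-ready like Lens's text grab.
            val extracted = result.text

            val translator = Translation.getClient(
                TranslatorOptions.Builder()
                    .setSourceLanguage(TranslateLanguage.FRENCH)
                    .setTargetLanguage(TranslateLanguage.ENGLISH)
                    .build()
            )
            // The translation model downloads on first use.
            translator.downloadModelIfNeeded()
                .addOnSuccessListener {
                    translator.translate(extracted)
                        .addOnSuccessListener { translated -> println(translated) }
                }
        }
}
```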

The underlying technology behind Google Lens isn't just for peering through a smartphone viewfinder at products or translating text. What powers Lens is much of the foundational AI work behind Google's AR experiences. Because Google's software, and the phones running it, can see and understand the world, developers can create whole virtual 3D scenes viewed in real time. A painting can come to life right in front of you, and if developers design for the environment you're standing in, you can even see reflections of objects behind you in those 3D images.

The same technology lets you point your camera at a podium or another object and have an entire 3D image come to life before you, growing up into the sky and encompassing the vertical space around you. Developers will be able to build entirely new experiences on top of these capabilities.
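The world-tracking that makes this possible is what Google exposes to third-party developers through ARCore. As a rough illustration only, and assuming an already-configured ARCore `session` and a tap position from the user, pinning virtual content to a real-world spot comes down to a hit test and an anchor:

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import com.google.ar.core.Plane
import com.google.ar.core.Session

// Place an anchor where the user tapped, assuming `session` is an
// ARCore Session that is already tracking. A renderer would then draw
// virtual 3D content (a painting, a stage) relative to the anchor.
fun placeAnchor(session: Session, tapX: Float, tapY: Float): Anchor? {
    val frame: Frame = session.update()  // latest camera frame and tracking state

    // Hit-test the tap against real-world planes ARCore has detected.
    val hit = frame.hitTest(tapX, tapY)
        .firstOrNull { it.trackable is Plane }

    // The anchor keeps virtual content locked to that physical spot
    // as the phone moves around it.
    return hit?.createAnchor()
}
```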

These Google Lens features are set to launch later this month, and as Google said onstage at the I/O keynote, they're coming to more than just Pixel devices, both in the camera and within Assistant. iOS users will also be able to access Google Lens from within the Assistant app, though not directly from the iPhone's camera.

Google Lens combines AI and AR into a powerful tool for scanning and understanding the world around us. With its ability to identify objects, parse and translate text, and tie everything back to Google Search, Lens is set to change the way we interact with our phones and the world around us.

"WEBVTTKind: captionsLanguage: en- So we're here at day one of Google IOchecking out new features for Google Lens.It's an AR and AIplatform for the company,and it's basically builtinto Google Assistant,and now it's built right intothe smart phone's camera.So Google first introducedGoogle Lens last year,and basically at thetime, it was a way to lookthrough the camera's viewfinderand identify objects in photos.Now Lens is much more sophisticated.It uses all Google's understandingof natural language processing,and object recognition, image recognition.It combines it into one big platform.So that the smart phonecan see and understandthe world around it and itcan parse human language.Prior to today, Google Lenswas only available withinGoogle Assistant.Now it works right fromthe smart phone's cameraand it works in other devices.Right here we have an LGG7and we have a wholewall of props behind usthat we can use Google Lens to identifyand get information from Google Search.There are three waysto access Google Lens.The first is to just open the cameraand click the Google Lens button.From there the phone starts lookingand trying to identifyobjects it sees throughthe viewfinder.The second way to access Google Lensis basically just bytouching and holding the homebutton down here launchingAssistant and just clicking thelens button.And as you can see right now,Lens already sees and identifies objectswith these little colored dots,that's how it knows what it is.Tapping on one of the dots,will pull up Google search results.So you see it understandsthat this is an albumby Justice Woman andconveniently Justice happensto be the artist performingat Google IO tomorrow.And the third way toaccess Google Lens willbe a double tap on the camera button,but that only works on the RGG7.If you look at some of the clothing here,whoop, doesn't quiteidentify the clothing,but it asks if I like the clothing.I guess it's trying to builda preference profile for me.Let's try this one.Whoop, there it goes, itpulled up shopping resultsfrom Macy's, from QVC.So it understands whatthis item of clothing isand it then prompts you to buy it online.Now as you scan GoogleLens over other objects,it'll slowly start torecognize everything elsethat you pan it over.So we have a piece of art right here,that is not correct,hold on.Looking for results.There we go.So it went from the album,but now it knows this is apainting by Pablo Picasso.Right here it sees a photo.And it knows that wasa Norwegian Lundehund.I don't think I pronounced that right,but it is a dog breedand Google identified it.So Google Lens isn't justfor photos and objects.You can do a lot with text now,that includes text insidethe book jacket of a book,it includes text on menus at restaurants.You can point the camera ata whole list of food itemsand you can pull up imagesof those food items.You can pull up YouTubevideos of how to make them.You can even translatethose food items if they'rein another language intoEnglish or into Spanishor into any other language that you wantthat Google Translate supports.Now if you're looking at a book,for instance, like the bookSwing Time by Zadie Smith,you can look at huge passages of text,you can even grab thattext using Google Lensand you can pull it outas if you had just copiedand pasted it from a document.From there you can translatethat text into another languageyou can even then doGoogle searches on it.Google Lens essentially takes textfrom anywhere out in the world,street signs, restaurant menus, even booksand it makes that text 
searchable.Now the underlying technologybehind Google Lens,it isn't just for basicallylooking through a smart phoneviewfinder and looking at products ortrying to translate text.What powers Google Lens is,a lot of the foundationalAI work that lets Googledo AR experiences.So for instance, because Google's softwareand the phones that power that softwarecan understand and see the world,you can create whole virtual 3D images.For instance, you can havepaintings come to liferight out in front of youand you can walk around,you can even see the reflectionsof objects behind youin those 3D images,if developers design them in the right wayand know what environmentyou're standing in.That's pretty wild.You can also point yourcamera lens at a podiumand have an entire 3D imagecome to life in front of you,grow up into the skyand encompass the entirevertical area around you.Now these Google Lens featuresare all coming later this monthand as Google said onstage at the IO keynote,they're coming to morethan just pixel devicesand within the Assistant.You'll also be able to access them in IOSfrom within the Assistant itself.But you have to use theAssistant, you won't be ableto access it from theIphone's camera, of course.For all the news andannouncements from Google IO 2018,check out TheVerge.com andsubscribe to us on YouTubeat youtube.com/theverge.\n"