**The Future of Robotics: A Combination of Brains and Brawn**
In a recent demonstration, Google showcased a robot that pairs AI language processing with the physical ability to act in the real world. The technology, called PaLM-SayCan, means the robot isn't limited to performing specific, pre-scripted tasks: it can take vague requests like "I'm hungry" or "I've spilled my drink," work out the steps needed to help, and then carry them out.
A traditional industrial robot, like one installing windshield wipers or soldering capacitors onto a circuit board, performs a very specific, scripted activity. The Google robot is different: it's open-ended. Because its language model has learned from an incredible wealth of text on the internet, it knows things like what the components of a hamburger might be, and even that a yellow bowl can stand for the desert and a blue one for the ocean, a kind of metaphorical reasoning most robots lack. The hamburger demo wasn't something Google had planned out in advance, either; it was a random, in-the-moment question at the demo, and the robot improvised an answer on the spot (complete with an entire bottle of ketchup stacked in the middle of the burger).
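To make that concrete, here's a minimal sketch, in Python, of the kind of scoring loop the SayCan approach describes: a language model rates how useful each skill sounds for the request, an affordance model rates how feasible each skill is right now, and the robot runs the best-scoring skill, step by step. Everything below is illustrative; the skill list, the toy scoring functions, and the helper names are stand-ins, not Google's actual code or API.

```python
# Illustrative sketch of a SayCan-style planning loop (not Google's code).
# Idea: an LLM scores how relevant each skill is to the request, an
# affordance model scores how feasible it is right now, and the robot
# executes the skill with the best combined score, one step at a time.

SKILLS = ["find a sponge", "pick up the sponge", "go to the spill",
          "wipe the spill", "done"]

def language_model_score(instruction, history, skill):
    """Stand-in for an LLM scoring p(skill | instruction, steps so far)."""
    # A real system would query a large language model here.
    toy_plan = ["find a sponge", "pick up the sponge",
                "go to the spill", "wipe the spill", "done"]
    want = toy_plan[min(len(history), len(toy_plan) - 1)]
    return 1.0 if skill == want else 0.1

def affordance_score(skill, state):
    """Stand-in for a learned value function: can the robot do this now?"""
    if skill == "pick up the sponge" and not state["sponge_visible"]:
        return 0.05  # can't grasp what it can't see
    return 0.9

def saycan(instruction, state):
    history = []
    while True:
        scores = {s: language_model_score(instruction, history, s)
                     * affordance_score(s, state) for s in SKILLS}
        best = max(scores, key=scores.get)
        if best == "done":
            return history
        history.append(best)
        if best == "find a sponge":
            state["sponge_visible"] = True  # pretend execution updates state

print(saycan("I've spilled my drink, can you help?",
             {"sponge_visible": False}))
```

The key design point is the multiplication: a skill only gets picked when it both sounds useful to the language model and is actually doable in the robot's current situation, which is what grounds internet-scale knowledge in real-world ability.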
The robot itself was designed by an Alphabet subsidiary called Everyday Robots, whose goal is exactly what the name suggests: robots that show up in your home or workplace. They're built to move around, grasp objects, and see the world through digital vision. Combined with Google's AI framework, that makes for a potentially far more useful device for everyday life, if the project can make it out of the research lab and into your home.
We've seen plenty of robots before from companies like Boston Dynamics that can run over obstacles and pull off impressive physical feats, and we've seen conversational AI in voice assistants and smart speakers. But those have mostly been one or the other. The Google robot combines the two: the brains and the brawn, language understanding plus the physical ability to navigate the real world.
**How It Compares to Today's Robots**
When it comes to home helpers like Amazon's Astro, which can bring you a can of Coke from the fridge (and even wheel it to your bathtub), we're still in the early days. Devices like Astro are impressive first steps, but they're essentially a smartphone with a bit of navigation glued on top. The Google robot is another level entirely: it understands both what humans want and what the robot itself can do, which makes it potentially far more open-ended and versatile.
One of the interesting things about this robot is that it's designed for the chaos and unpredictability of the real world. Boston Dynamics has very impressive physical navigation: its Atlas robot can do parkour and flips, and its Spot dogs can climb stairs and handle complicated terrain. But those robots have little ability to execute open-ended commands; they can go places, but they can't do much once they get there. The Google robot combines going places with doing things, a bit like merging the football team and the chess club into one machine (see the sketch below).
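As a loose illustration of that "going places and doing things" combination, you can picture the robot's abilities as one shared skill library that mixes navigation and manipulation, with a single plan free to interleave both. This is purely a hypothetical sketch; the skill names and structure are invented for illustration.

```python
# Hypothetical skill library mixing navigation ("going places") with
# manipulation ("doing things"); names invented, for illustration only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    name: str
    kind: str          # "navigate" or "manipulate"
    run: Callable[[], None]

plan = [
    Skill("go to the kitchen",  "navigate",   lambda: print("driving...")),
    Skill("pick up the can",    "manipulate", lambda: print("grasping...")),
    Skill("go to the user",     "navigate",   lambda: print("driving...")),
    Skill("hand over the can",  "manipulate", lambda: print("releasing...")),
]

# A parkour robot excels at the "navigate" rows; a fixed industrial arm
# only ever runs "manipulate" rows. The point here is that one plan
# freely interleaves both kinds of skill:
for step in plan:
    print(f"[{step.kind}] {step.name}")
    step.run()
```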
**The Future of Robotics: From Specific Tasks to the Real World**
We're on the cusp of a transformation like the one AI brought to the computer industry: from machines that do one very specific task to machines that can handle messy real-world situations. Driving a car on a public street, for example, involves an incredible number of unpredictable events, yet AI has become good enough to start coping with that complexity rather than just running a shuttle bus up and down a fixed track. We're now seeing the same shift play out in robots that can navigate complex real-world landscapes.
When you combine that AI with the physical ability to navigate the real world and take actions, it's potentially very transformative, though a 10-to-20-year horizon is more realistic than five. The future of robotics looks exciting, and also a little terrifying. Sometimes technology like this can be overwhelming, but that's also part of what makes it so exciting.
**What Do You Think?**
We want to hear from you! What do you think about the future of robotics? Is it exciting and transformative, or is it something that scares you? Share your thoughts in the comments below. Don't forget to like and subscribe for plenty more videos on robotics, flying machines, and everything else that's shaping the world of tomorrow.
"WEBVTTKind: captionsLanguage: engoogle wants to make robots smarter by teaching them to understand human language and then acting on it in the real world melding the physical capabilities of walking roaming robots and giving them the kind of intuitive ai powers that you'd expect from a voice assistant or a smart speaker it's a new technology called palm seikan and it takes google smarts in natural language processing and machine learning and bakes them into robots built by a company called everyday robots and it's something we haven't seen before this robot doesn't need to be programmed with really specific instructions like if this then that it can take vague instructions like i'm hungry or i'm thirsty and then work out the steps it needs to take to solve that problem up until now we've seen robots out in the real world doing parkour and really physical activities and we've seen conversational ai driven voice assistants but now google has combined the two this is a huge deal for the future of robotics and human assistance so we thought for this week's episode of what the future we will try something a little bit different i have my colleague stephen shankland here to tell me why it's such a game changer now shanks you and i were both at this google demo it was kind of impressive to see can you give me the basic rundown of what google was doing sure this is a technology called palm seikan and it combines two very different technologies the first one is called palm which is google's very sophisticated very complicated natural language processing engine so this is an ai system that's trained on millions of documents mostly from the internet and that is combined with the physical abilities of a robot they have trained a robot to take a number of actions like moving around a kitchen grasping objects recognizing objects they start with this language model you can give some you can give the robot a natural language command like i've spilled my drink i need some help the robot comes up with a number of possible actions but then it grounds those possible actions and what the robot actually knows how to do so the marriage of the language model and the real world abilities is what's interesting here we saw these great demos of a robot picking up a number of different like bowls and blocks in different colors and it knew that the yellow bowl stood for the desert and the blue bowl stood for the ocean how is it recognizing those things this is what it learns from the real world language information that it's been trained on it knows sort of a metaphorical level that green means jungle blue means ocean and yellow means desert so for example by reading the novel dune it can learn that the yellow desert it might be a phrase that shows up somewhere so it can learn to associate these things so it actually attains sort of a metaphorical reasoning level that's much more human-like than what we've seen in most robots which are extremely literal extremely precisely scripted and strictly programmed to do a very narrow set of operations so this is much more open-ended yeah i remember with that hamburger demo they showed us a couple of demonstrations of stacking blocks and bowls but then i asked whether they could ask the robot to make a hamburger and it just picked up the pieces and put them in the order it did put an entire bottle of ketchup in the middle of the hamburger which was peak robot behavior but i loved that you don't actually have to say put hamburger patty put lettuce on top of hamburger patty if 
lettuce then tomato it kind of just knows how to do that all at once yeah so a traditional industrial robot that's maybe installing windshield wipers or soldering capacitors onto a circuit board that's a very specific very scripted activity this is very open-ended and because it's learned from this incredible wealth of knowledge that's on the internet it knows what the components of a hamburger might be it was a pretty interesting demonstration and it was it was not something that google had planned out in advance that was your random in the moment question so this was you know a good example a good illustration of how this robot can you know be more improvisational we've seen plenty of robots before from the likes of boston dynamics you know running over obstacles or i saw the amica robot at ces which has this very humanoid face and was able to respond with natural language but those are kind of examples of like physical real world robot and then natural language in a kind of a human-like suit right this is something that's quite different to those one of the reasons this is such an interesting demonstration is it combines the brains and the brawn it's got the it's got the ai language processing and it's got some physical ability to actually go out in the real world the robots themselves were designed by an alphabet subsidiary called everyday robots and they want to just build everyday robots that will show up in your house or your workplace and so they they're designed to actually you know move around and grasp things and they have you know digital vision and so with that combined with the google framework is you know something that's potentially more useful in the house if they can actually you know develop this for another few years to get it out of the research uh lab and into your home yeah so i mean we've seen robots like say astro from amazon which is a little home helper you know can bring you a can of coke from the fridge and wheel it into your into your bathtub i saw that demo from our smart home team what would be the future of this kind of robot in the home context compared to some of the other home helpers we've seen before if you look at a lot of these other alternatives it's you know kind of a smartphone with a bit of navigation glued on top so you know amazon astro it's you know it's impressive it's a first step but this is you know another level entirely when it comes to understanding what humans want and understanding what the robot itself can do it's much more potentially open-ended and therefore much more versatile i guess one of the interesting things here that i saw from the robot demonstration at google is this is designed for the chaos and unpredictability of the real world if you compare it to boston dynamics they have very impressive physical real world navigation abilities you know the atlas robot can do parkour it can do flips the spot dogs that can go up and down stairs deal with very complicated terrain but those don't really have a lot of abilities in terms of actually executing commands they can go places but they can't do things the google robot is a combination of going places and doing things yeah i feel like you're kind of combining like the football team with the chess club into one robot so if you think about where this goes in the future maybe five ten twenty years from now what could the future of this kind of technology bring us obviously it's very early days but it's pretty exciting right yeah so what we've seen with the ai revolution is a 
complete transformation of the computer industry from uh machines that could do a very specific task to machines that could handle really complicated uh real-world situations some of those things are very difficult like driving a car in a street incredible number of unpredictable events that could happen in that situation but ai technology is good enough that it can start to deal with this really really complicated landscape instead of something you know very limited like driving a shuttle bus down a track and back and down the track and back right so this is this is what ai opens up when you build that into a robot it's very complicated and you i think your you know 10 or 20 year time horizon is more likely what we're looking at here but when you combine that ai with this physical ability to navigate the real world and take actions then that's potentially very transformative so there you have it but i'm interested to know what you think is this the future of robotics or is it kind of terrifying or is it both because sometimes robotics and technology is like that let me know in the comments down below and while you're here throw us a like and subscribe for plenty more what the future videos we've got amazing stuff on robotics flying machines everything you could possibly want all right until next time i'm claire riley for cnet bringing you the world of tomorrow todaygoogle wants to make robots smarter by teaching them to understand human language and then acting on it in the real world melding the physical capabilities of walking roaming robots and giving them the kind of intuitive ai powers that you'd expect from a voice assistant or a smart speaker it's a new technology called palm seikan and it takes google smarts in natural language processing and machine learning and bakes them into robots built by a company called everyday robots and it's something we haven't seen before this robot doesn't need to be programmed with really specific instructions like if this then that it can take vague instructions like i'm hungry or i'm thirsty and then work out the steps it needs to take to solve that problem up until now we've seen robots out in the real world doing parkour and really physical activities and we've seen conversational ai driven voice assistants but now google has combined the two this is a huge deal for the future of robotics and human assistance so we thought for this week's episode of what the future we will try something a little bit different i have my colleague stephen shankland here to tell me why it's such a game changer now shanks you and i were both at this google demo it was kind of impressive to see can you give me the basic rundown of what google was doing sure this is a technology called palm seikan and it combines two very different technologies the first one is called palm which is google's very sophisticated very complicated natural language processing engine so this is an ai system that's trained on millions of documents mostly from the internet and that is combined with the physical abilities of a robot they have trained a robot to take a number of actions like moving around a kitchen grasping objects recognizing objects they start with this language model you can give some you can give the robot a natural language command like i've spilled my drink i need some help the robot comes up with a number of possible actions but then it grounds those possible actions and what the robot actually knows how to do so the marriage of the language model and the real world 
abilities is what's interesting here we saw these great demos of a robot picking up a number of different like bowls and blocks in different colors and it knew that the yellow bowl stood for the desert and the blue bowl stood for the ocean how is it recognizing those things this is what it learns from the real world language information that it's been trained on it knows sort of a metaphorical level that green means jungle blue means ocean and yellow means desert so for example by reading the novel dune it can learn that the yellow desert it might be a phrase that shows up somewhere so it can learn to associate these things so it actually attains sort of a metaphorical reasoning level that's much more human-like than what we've seen in most robots which are extremely literal extremely precisely scripted and strictly programmed to do a very narrow set of operations so this is much more open-ended yeah i remember with that hamburger demo they showed us a couple of demonstrations of stacking blocks and bowls but then i asked whether they could ask the robot to make a hamburger and it just picked up the pieces and put them in the order it did put an entire bottle of ketchup in the middle of the hamburger which was peak robot behavior but i loved that you don't actually have to say put hamburger patty put lettuce on top of hamburger patty if lettuce then tomato it kind of just knows how to do that all at once yeah so a traditional industrial robot that's maybe installing windshield wipers or soldering capacitors onto a circuit board that's a very specific very scripted activity this is very open-ended and because it's learned from this incredible wealth of knowledge that's on the internet it knows what the components of a hamburger might be it was a pretty interesting demonstration and it was it was not something that google had planned out in advance that was your random in the moment question so this was you know a good example a good illustration of how this robot can you know be more improvisational we've seen plenty of robots before from the likes of boston dynamics you know running over obstacles or i saw the amica robot at ces which has this very humanoid face and was able to respond with natural language but those are kind of examples of like physical real world robot and then natural language in a kind of a human-like suit right this is something that's quite different to those one of the reasons this is such an interesting demonstration is it combines the brains and the brawn it's got the it's got the ai language processing and it's got some physical ability to actually go out in the real world the robots themselves were designed by an alphabet subsidiary called everyday robots and they want to just build everyday robots that will show up in your house or your workplace and so they they're designed to actually you know move around and grasp things and they have you know digital vision and so with that combined with the google framework is you know something that's potentially more useful in the house if they can actually you know develop this for another few years to get it out of the research uh lab and into your home yeah so i mean we've seen robots like say astro from amazon which is a little home helper you know can bring you a can of coke from the fridge and wheel it into your into your bathtub i saw that demo from our smart home team what would be the future of this kind of robot in the home context compared to some of the other home helpers we've seen before if you look at a 
lot of these other alternatives it's you know kind of a smartphone with a bit of navigation glued on top so you know amazon astro it's you know it's impressive it's a first step but this is you know another level entirely when it comes to understanding what humans want and understanding what the robot itself can do it's much more potentially open-ended and therefore much more versatile i guess one of the interesting things here that i saw from the robot demonstration at google is this is designed for the chaos and unpredictability of the real world if you compare it to boston dynamics they have very impressive physical real world navigation abilities you know the atlas robot can do parkour it can do flips the spot dogs that can go up and down stairs deal with very complicated terrain but those don't really have a lot of abilities in terms of actually executing commands they can go places but they can't do things the google robot is a combination of going places and doing things yeah i feel like you're kind of combining like the football team with the chess club into one robot so if you think about where this goes in the future maybe five ten twenty years from now what could the future of this kind of technology bring us obviously it's very early days but it's pretty exciting right yeah so what we've seen with the ai revolution is a complete transformation of the computer industry from uh machines that could do a very specific task to machines that could handle really complicated uh real-world situations some of those things are very difficult like driving a car in a street incredible number of unpredictable events that could happen in that situation but ai technology is good enough that it can start to deal with this really really complicated landscape instead of something you know very limited like driving a shuttle bus down a track and back and down the track and back right so this is this is what ai opens up when you build that into a robot it's very complicated and you i think your you know 10 or 20 year time horizon is more likely what we're looking at here but when you combine that ai with this physical ability to navigate the real world and take actions then that's potentially very transformative so there you have it but i'm interested to know what you think is this the future of robotics or is it kind of terrifying or is it both because sometimes robotics and technology is like that let me know in the comments down below and while you're here throw us a like and subscribe for plenty more what the future videos we've got amazing stuff on robotics flying machines everything you could possibly want all right until next time i'm claire riley for cnet bringing you the world of tomorrow today\n"