AI Beats Human Champion Drone Pilot | The Weekly Roundup

**Drone Racing: The Rise of Autonomous Technology**

In a groundbreaking achievement, a drone using advanced computer vision AI has beaten the world's best drone pilots in a race. Autonomous racing drones from the University of Zurich made headlines last year when they proved they could be competitive with human pilots. However, many considered it an unfair advantage, since those drones relied on an external motion-capture system to feed real-time position information into their navigation systems.

This time, however, the drone was restricted to onboard computer vision systems and had access to the same information as the human pilots: practice runs on the course and a live video feed. Three world-class human pilots were invited to Zurich for the race, including Thomas Bitmatta, two-time winner of the MultiGP International World Cup of drone racing, along with Alex Vanover and Marvin Schaepper. Reaching top speeds of 80 kilometers per hour, the vision-based autonomous drone outraced the fastest human by 0.5 seconds over the three-lap race.

For context, the usual difference between first and second in these races is one or two tenths of a second. This achievement marks a significant milestone in the development of autonomous technology in drone racing. The AI team behind this feat is excited about what this result means for both drone racing as a sport and the possibilities of computer vision when applied to a less controllable environment than a race track.

**AI in Medical Diagnosis: Improving Breast Cancer Screening**

New data is showing that two is better than one when it comes to AI in breast cancer screening. A large-scale study published this month in The Lancet Digital Health is the first of its kind to directly compare an AI's performance in breast cancer screening when it is used alone versus alongside a human expert. The study found that AI alone performed worse than a radiologist, but when the AI referred cases it was unsure about to the radiologist, the combined team of AI and doctor was 2.6 times better at detecting breast cancer than a doctor working alone, while also raising fewer false alarms.

This finding is significant, as it suggests that AI can be a valuable tool in improving breast cancer screening, particularly for patients who may not have ready access to radiologists or other medical professionals. According to Dr. Curtis Langlotz, director of the Stanford Center for Artificial Intelligence in Medicine and Imaging, "We often say that AI will not replace radiologists. This study doesn't change that, but in the proposed AI-driven process, nearly three quarters of the screening studies didn't need to be reviewed by a radiologist, while improving accuracy overall."
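The referral workflow described above can be sketched as a simple confidence-based triage rule. This is a hypothetical illustration, not the study's actual model or thresholds: the AI decides confident cases itself and routes uncertain ones to a radiologist.

```python
# Hypothetical sketch of confidence-based triage in screening.
# The thresholds and score scale are illustrative assumptions,
# not the values used in the Lancet Digital Health study.

def triage(ai_score: float, low: float = 0.1, high: float = 0.9) -> str:
    """Route a screening case based on the AI's suspicion score in [0, 1]."""
    if ai_score <= low:
        return "ai_normal"             # AI confident: no finding, no human review
    if ai_score >= high:
        return "ai_recall"             # AI confident: recall for follow-up
    return "refer_to_radiologist"      # AI unsure: human expert decides

# Five synthetic cases; only the uncertain ones reach the radiologist.
cases = [0.02, 0.5, 0.97, 0.08, 0.85]
decisions = [triage(s) for s in cases]
```

Under a scheme like this, the fraction of cases the AI handles alone depends entirely on how the confidence thresholds are set, which is the trade-off the quoted "nearly three quarters" figure reflects.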

This result is notable because it shows that AI can enhance the accuracy and efficiency of breast cancer screening without replacing human doctors. The study's findings have far-reaching implications for the medical field, where AI is increasingly being used to improve diagnosis and treatment.

**Understanding How Babies Learn: A Breakthrough in Neural Networks**

DeepMind has developed a neural network that is trained to understand basic properties of objects and then reacts with surprise when a scenario differs from its expectations. This is designed to mirror the way developmental psychologists test how babies understand the motion of objects: when shown a video of, for example, a ball that suddenly disappears, children express surprise, which researchers quantify by measuring how long the children stare in a particular direction.

The software model, named PLATO (Physics Learning through Auto-encoding and Tracking Objects), was trained on videos showing simple mechanics, such as a ball rolling down a slope or two balls bouncing off each other. The model learned to predict how those objects would behave in different situations and derives a measure of surprise by comparing its expectations to video footage showing impossible events, such as an object disappearing.
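The surprise measure described above can be illustrated with a toy prediction-error signal: compare the model's predicted object positions against what is actually observed, and treat a large error as surprise. This is a stand-in for the idea only; PLATO's actual architecture and metrics are not reproduced here.

```python
import numpy as np

# Toy illustration of a "surprise" signal as prediction error.
# The trajectories below are invented; PLATO learns its predictions
# from video rather than receiving positions directly.

def surprise(predicted: np.ndarray, observed: np.ndarray) -> float:
    """Mean squared error between predicted and observed object positions."""
    return float(np.mean((predicted - observed) ** 2))

predicted = np.array([1.0, 2.0, 3.0])    # model expects the ball to keep rolling
plausible = np.array([1.0, 2.0, 3.1])    # physically possible continuation
impossible = np.array([1.0, 2.0, 0.0])   # the ball suddenly "disappears"

low = surprise(predicted, plausible)     # small error: unsurprising
high = surprise(predicted, impossible)   # large error: surprising
```

The impossible event produces a much larger prediction error than the plausible one, which is exactly the asymmetry the researchers use as a machine analogue of an infant's prolonged stare.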

While PLATO is not itself designed as a model of infant behavior, bringing AI into the study of how human babies learn is an important step for the research field. The team behind the model has said they hope improved versions of PLATO could eventually be used by cognitive scientists to seriously model the behavior of infants. This work has significant implications for our understanding of child development and learning, and may lead to new insights into how we can support young children's cognitive growth.

**The Power of Human Language: Universality Across Cultures**

Researchers led by Harvard University postdoctoral fellow Courtney Hilton, working with 40 international collaborators, have made a groundbreaking discovery about the universal aspects of human language. The team collected over 1,600 recordings of human speech and song from 21 societies across six continents and applied lasso and mixed-effects regression models to classify whether the recordings were infant- or adult-directed on the basis of their acoustic features.

The results showed that acoustic features consistently differed between infant- and adult-directed recordings in both speech and song. Infant-directed recordings had purer and less harsh vocal timbres, more vowel sounds, and higher-pitched speech. The researchers argued that these findings show that despite variation in language, music, and infant-care practices worldwide, when people speak to an infant or sing to an upset baby, they change the way they speak and sing in similar and mutually intelligible ways across cultures.
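The classification approach can be illustrated with a minimal sketch: an L1-penalized (lasso-style) logistic regression fit to synthetic "acoustic features". Everything here is invented for illustration; the study fit lasso and mixed-effects models to real recordings with far richer features.

```python
import numpy as np

# Toy lasso-style classifier on synthetic acoustic features.
# Column 0 mimics mean pitch (higher for infant-directed speech);
# column 1 is pure noise. The L1 penalty should keep the noise
# feature's weight small relative to the informative one.

rng = np.random.default_rng(0)
n = 200
pitch = np.concatenate([rng.normal(300, 30, n),   # infant-directed: higher pitch
                        rng.normal(200, 30, n)])  # adult-directed: lower pitch
noise = rng.normal(0, 1, 2 * n)
X = np.column_stack([pitch, noise])
y = np.concatenate([np.ones(n), np.zeros(n)])     # 1 = infant-directed

# Standardize, then fit L1-penalized logistic regression by subgradient descent
X = (X - X.mean(0)) / X.std(0)
w = np.zeros(2)
lam, lr = 0.01, 0.1
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    grad = X.T @ (p - y) / len(y) + lam * np.sign(w)
    w -= lr * grad

acc = float(np.mean((1 / (1 + np.exp(-X @ w)) > 0.5) == y))
```

Because pitch genuinely separates the two groups here, the fitted model classifies well while the penalty keeps the uninformative feature's weight near zero, which is the kind of feature-selection behavior that makes lasso attractive for identifying which acoustic cues matter.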

This discovery has significant implications for our understanding of human communication and its universal aspects. It suggests that, regardless of cultural differences, there are fundamental patterns in how humans communicate with infants and young children. This finding highlights the importance of studying human language and behavior to better understand our shared humanity and develop more effective methods for supporting child development and learning.

**Conclusion**

This week's Roundup has brought us stories of innovation and breakthroughs in various fields, from drone racing and AI in medical diagnosis to understanding how babies learn and the power of human language. Each of these discoveries has the potential to transform our lives and improve our understanding of the world around us. As we continue to push the boundaries of technology and scientific research, it is exciting to consider what the future holds for humanity.