AI Robots Master Human-Like Agility in Breakthrough Demo

A groundbreaking AI robot developed by researchers at ETH Zurich has demonstrated a remarkable ability to play badminton with humans, showcasing advanced anticipation and strategy adjustment. The quadruped robot, named ANYmal-D, uses sophisticated vision systems, sensor data, and machine learning to track, predict, and respond to shuttlecock trajectories in real time. This development represents a significant leap forward in human-robot collaboration, with implications extending beyond recreation into training, manufacturing, and service industries.

The first week of July 2025 has witnessed a significant breakthrough in artificial intelligence and robotics, as researchers demonstrated a machine that can anticipate movements and adjust its strategy in a dynamic environment.

At the center of this advancement is ANYmal-D, a four-legged robot developed at ETH Zurich that can autonomously play badminton with human opponents. The robot employs a control system trained with reinforcement learning that enables it to track the shuttlecock, anticipate its path, and move swiftly across the court to intercept and return it. This achievement, detailed in the journal Science Robotics, showcases the potential for deploying legged robots in dynamic tasks requiring precise perception and rapid, full-body responses.
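
To make the idea of trajectory anticipation concrete, the sketch below integrates a simple drag-affected ballistic model forward in time to estimate where a shuttlecock will cross a chosen intercept height. It is an illustrative assumption rather than the learned perception and control stack described in Science Robotics; the drag constant, time step, and function names are invented for the example.

```python
import numpy as np

# Illustrative only: a hand-written physics predictor, not the ETH Zurich
# system's learned pipeline. All constants below are assumed values.
GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2
DRAG_COEFF = 0.6                        # lumped quadratic-drag term (assumed)

def predict_intercept(position, velocity, intercept_height=1.0, dt=0.005):
    """Step a drag-affected ballistic model forward until the shuttle
    descends through intercept_height; return the (x, y) point there."""
    pos, vel = position.astype(float), velocity.astype(float)
    for _ in range(2000):                           # cap the lookahead horizon
        accel = GRAVITY - DRAG_COEFF * np.linalg.norm(vel) * vel
        vel += accel * dt
        pos += vel * dt
        if vel[2] < 0 and pos[2] <= intercept_height:
            return pos[:2]                          # estimated court-plane intercept
    return None                                     # shuttle never reached the height

# Example: shuttle observed 3 m away at 2 m height, moving toward the robot.
print(predict_intercept(np.array([3.0, 0.5, 2.0]), np.array([-6.0, -0.5, 2.0])))
```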

The robot is equipped with a stereo camera for vision-based perception and a dynamic arm that wields a badminton racket, a combination that demands precise synchronization of perception, locomotion, and arm movement. Researchers trained the system with reinforcement learning, allowing the robot to develop effective strategies through experimentation and interaction with its environment. In tests against human players, ANYmal-D navigated the court effectively, returned shots at varying speeds and angles, and sustained rallies of up to 10 consecutive shots.
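
The snippet below is a minimal sketch of the reinforcement-learning pattern described above: a policy is evaluated in an environment, and only changes that improve the return are kept. The environment interface (reset/step), the reward terms, and the toy random-search update are assumptions for illustration, not the actual training setup used for ANYmal-D.

```python
import numpy as np

class LinearPolicy:
    """Toy policy mapping observations to bounded actuator commands."""
    def __init__(self, obs_dim, act_dim, noise=0.05):
        self.weights = np.zeros((act_dim, obs_dim))
        self.noise = noise

    def act(self, obs, weights=None):
        w = self.weights if weights is None else weights
        return np.tanh(w @ obs)  # bounded joint / base-velocity commands

def rollout(env, policy, weights, steps=500):
    """Run one episode and accumulate reward (e.g. racket-to-shuttle distance terms)."""
    obs, total = env.reset(), 0.0
    for _ in range(steps):
        obs, reward, done = env.step(policy.act(obs, weights))
        total += reward
        if done:
            break
    return total

def train(env, policy, iterations=200):
    """Random-search improvement: keep weight perturbations that raise the return."""
    best = rollout(env, policy, policy.weights)
    for _ in range(iterations):
        candidate = policy.weights + policy.noise * np.random.randn(*policy.weights.shape)
        score = rollout(env, policy, candidate)
        if score > best:
            policy.weights, best = candidate, score
    return best
```

In practice, a system like this would use a physics simulator and a deep RL algorithm rather than the toy random search shown here, but the learn-by-trial structure is the same.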

This breakthrough represents more than a technological curiosity. By combining vision, sensor data, and machine learning to anticipate movements and adjust strategy, the project blends physical robotics with advanced AI reasoning, pointing toward human-robot collaboration in sports and training and opening new possibilities for machines that work alongside humans in complex, unpredictable environments.

Roboticists have also made major breakthroughs in how robots learn and adapt. One key advancement involves combining different types of data into a form robots can learn from. For example, researchers can collect data from humans performing tasks while wearing sensors, combine it with teleoperation data from humans operating robotic arms, and supplement both with internet images and videos of people performing similar actions. Merging these data sources into new AI models gives robots a substantial head start over those trained with traditional methods: seeing multiple ways to accomplish a single task makes it easier for a model to improvise and choose an appropriate next move in real-world situations. This represents a fundamental shift in how robots learn.
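
The following is a minimal sketch of that data-merging idea, assuming a shared per-step schema of observations and actions. The source names, fields, and weights are hypothetical and do not correspond to a specific published pipeline.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Demo:
    source: str         # "wearable", "teleop", or "video" (assumed labels)
    observations: list  # per-step sensory features
    actions: list       # per-step actions, or pseudo-labels inferred from video
    weight: float       # how strongly this demonstration counts during training

def merge_demos(wearable: List[Demo], teleop: List[Demo], video: List[Demo]) -> List[Demo]:
    """Pool the three sources into one training set, down-weighting video data."""
    merged = []
    for demo in wearable + teleop:
        merged.append(Demo(demo.source, demo.observations, demo.actions, 1.0))
    for demo in video:
        # Video lacks ground-truth actions, so its pseudo-labels get lower weight.
        merged.append(Demo(demo.source, demo.observations, demo.actions, 0.3))
    return merged
```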

These advances are already reshaping AI-driven manufacturing. Breakthroughs in reinforcement learning have enabled physical robots to make decisions and perform intricate tasks, from hanging t-shirts on coat hangers to making pizza dough. This fusion of generative AI and robotics has radically expanded potential applications in business, healthcare, education, and entertainment, suggesting a future where intelligent machines integrate seamlessly into our daily lives.
