Tag Archives: commercial
#439105 This Robot Taught Itself to Walk in a ...
Recently, in a Berkeley lab, a robot called Cassie taught itself to walk, a little like a toddler might. Through trial and error, it learned to move in a simulated world. Then its handlers sent it strolling through a minefield of real-world tests to see how it’d fare.
And, as it turns out, it fared pretty damn well. With no further fine-tuning, the robot—which is basically just a pair of legs—was able to walk in all directions, squat down while walking, right itself when pushed off balance, and adjust to different kinds of surfaces.
It’s the first time a machine learning approach known as reinforcement learning has been so successfully applied to a two-legged robot.
This likely isn’t the first robot video you’ve seen, nor the most polished.
For years, the internet has been enthralled by videos of robots doing far more than walking and regaining their balance. All that is table stakes these days. Boston Dynamics, the heavyweight champ of robot videos, regularly releases mind-blowing footage of robots doing parkour, backflips, and complex dance routines. At times, it can seem the world of I, Robot is just around the corner.
This sense of awe is well-earned. Boston Dynamics is one of the world’s top makers of advanced robots.
But they still have to meticulously hand-program and choreograph the movements of the robots in their videos. This is a powerful approach, and the Boston Dynamics team has done incredible things with it.
In real-world situations, however, robots need to be robust and resilient. They need to regularly deal with the unexpected, and no amount of choreography will do. Which is where, it’s hoped, machine learning can help.
Reinforcement learning has been most famously exploited by Alphabet’s DeepMind to train algorithms that thrash humans at some of the most difficult games. Simplistically, it’s modeled on the way we learn. Touch the stove, get burned, don’t touch the damn thing again; say please, get a jelly bean, politely ask for another.
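To make that loop concrete, here’s a toy sketch in Python. The actions, rewards, and numbers are invented for illustration; it’s the bare trial-and-error pattern, not anything like the actual DeepMind or Berkeley systems.

```python
import random

# Toy reinforcement learning: learn the value of two actions by trial
# and error. The actions and rewards are invented for illustration.
ACTIONS = ["touch_stove", "say_please"]
REWARDS = {"touch_stove": -1.0, "say_please": 1.0}

values = {a: 0.0 for a in ACTIONS}  # current estimate of each action's value
alpha, epsilon = 0.1, 0.2           # learning rate, exploration rate

for _ in range(1000):
    # Mostly pick the best-known action, but explore occasionally.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=values.get)
    # Try it, observe the reward, and nudge the estimate toward it.
    values[action] += alpha * (REWARDS[action] - values[action])

print(values)  # "say_please" converges toward 1.0, "touch_stove" toward -1.0
```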
In Cassie’s case, the Berkeley team used reinforcement learning to train an algorithm to walk in a simulation. It’s not the first AI to learn to walk in this manner. But skills learned in simulation don’t always translate to the real world.
Subtle differences between the two can (literally) trip up a fledgling robot as it tries out its sim skills for the first time.
To overcome this challenge, the researchers used two simulations instead of one. The first simulation, an open source training environment called MuJoCo, was where the algorithm drew upon a large library of possible movements and, through trial and error, learned to apply them. The second simulation, called Matlab SimMechanics, served as a low-stakes testing ground that more precisely matched real-world conditions.
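As a rough sketch of that two-stage workflow, the pattern looks something like the code below. The one-dimensional “walker,” the noise levels, and the pass/fail threshold are all made-up stand-ins for MuJoCo, SimMechanics, and Cassie, not the Berkeley team’s actual code.

```python
import random

# Toy two-simulator pipeline: train by trial and error in a cheap, fast
# sim, then gate on a higher-fidelity, noisier one before hardware.

def rollout(gain, noise):
    """Score one episode: how well a 1-D walker tracks a target speed of 1.0."""
    speed, score = 0.0, 0.0
    for _ in range(100):
        speed += gain * (1.0 - speed) + random.gauss(0.0, noise)
        score += 1.0 - abs(1.0 - speed)
    return score / 100

def train(noise, trials=200):
    """Trial and error: keep whichever controller gain scores best."""
    return max((random.uniform(0.0, 1.0) for _ in range(trials)),
               key=lambda g: rollout(g, noise))

policy = train(noise=0.01)           # stage 1: fast, idealized training sim
score = rollout(policy, noise=0.05)  # stage 2: higher-fidelity validation sim
print(f"gain={policy:.2f}, validation score={score:.2f}")
if score > 0.8:
    print("graduates to the real robot")  # only then does it touch hardware
```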
Once the algorithm was good enough, it graduated to Cassie.
And amazingly, it didn’t need further polishing. Said another way, when it was born into the physical world, it already knew how to walk just fine. It was also quite robust. The researchers write that two motors in Cassie’s knee malfunctioned during the experiment, but the robot was able to adjust and keep on trucking.
Other labs have been hard at work applying machine learning to robotics.
Last year Google used reinforcement learning to train a (simpler) four-legged robot. And OpenAI has used it with robotic arms. Boston Dynamics, too, will likely explore ways to augment their robots with machine learning. New approaches—like this one aimed at training multi-skilled robots or this one offering continuous learning beyond training—may also move the dial. It’s early yet, however, and there’s no telling when machine learning will exceed more traditional methods.
And in the meantime, Boston Dynamics bots are testing the commercial waters.
Still, robotics researchers who weren’t part of the Berkeley team think the approach is promising. Edward Johns, head of Imperial College London’s Robot Learning Lab, told MIT Technology Review, “This is one of the most successful examples I have seen.”
The Berkeley team hopes to build on that success by trying out “more dynamic and agile behaviors.” So, might a self-taught parkour-Cassie be headed our way? We’ll see.
Image Credit: University of California Berkeley Hybrid Robotics via YouTube
#439032 To Learn To Deal With Uncertainty, This ...
AI is endowing robots, autonomous vehicles, and countless other forms of tech with new abilities and levels of self-sufficiency. Yet these models faithfully “make decisions” based on whatever data is fed into them, which could have dangerous consequences. For instance, if an autonomous car is driving down a highway and its sensors pick up a confusing signal (e.g., a paint smudge that is incorrectly interpreted as a lane marking), this could cause the car to swerve into another lane unnecessarily.
But in the ever-evolving world of AI, researchers are developing new ways to address challenges like this. One group of researchers has devised a new algorithm that allows the AI model to account for uncertain data, which they describe in a study published February 15 in IEEE Transactions on Neural Networks and Learning Systems.
“While we would like robots to work seamlessly in the real world, the real world is full of uncertainty,” says Michael Everett, a post-doctoral associate at MIT who helped develop the new approach. “It's important for a system to be aware of what it knows and what it is unsure about, which has been a major challenge for modern AI.”
His team focused on a type of AI called reinforcement learning (RL), whereby the model tries to learn the “value” of taking each action in a given scenario through trial and error. They developed a secondary algorithm, called Certified Adversarial Robustness for deep RL (CARRL), that can be built on top of an existing RL model.
“Our key innovation is that rather than blindly trusting the measurements, as is done today [by AI models], our algorithm CARRL thinks through all possible measurements that could have been made, and makes a decision that considers the worst-case outcome,” explains Everett.
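In code terms, that decision rule looks roughly like the sketch below. CARRL itself computes certified bounds on a deep Q-network’s outputs rather than brute-force sampling, and q_function here is an invented stand-in for a trained network, but the pick-the-action-with-the-best-worst-case logic is the point:

```python
import numpy as np

# Sketch of worst-case action selection under observation uncertainty.
# CARRL derives *certified* lower bounds on a deep Q-network's output;
# here the worst case is approximated by sampling perturbed observations.

def q_function(obs, action):
    """Toy value function: the paddle (action) wants to be near the ball (obs)."""
    return -abs(obs[0] - action)

def robust_action(obs, actions, eps, n_samples=64):
    rng = np.random.default_rng(0)
    best_action, best_worst_q = None, -np.inf
    for a in actions:
        # All the measurements that could plausibly have produced this
        # observation: anything within eps of what the sensor reported.
        candidates = obs + rng.uniform(-eps, eps, size=(n_samples, obs.size))
        # Score the action by its value under the worst such measurement.
        worst_q = min(q_function(c, a) for c in candidates)
        if worst_q > best_worst_q:
            best_action, best_worst_q = a, worst_q
    return best_action

# A noisy reading of the ball's height: pick the paddle position that
# still does acceptably even if the reading is off by up to eps.
print(robust_action(np.array([0.4]), actions=[0.0, 0.25, 0.5, 0.75, 1.0], eps=0.1))
```

That eps parameter is the “skepticism” dial: the larger it is, the more pessimistic the worst case each action has to survive.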
In their study, the researchers tested CARRL across several different tasks, including collision avoidance simulations and Atari Pong. For younger readers who may not be familiar with it, Atari Pong is a classic computer game in which an electronic paddle is used to direct a ping pong ball across the screen. In the test scenario, CARRL helped move the paddle slightly higher or lower to compensate for the possibility that the ball could approach at a slightly different point than the input data indicated. All the while, CARRL would try to ensure that the ball would make contact with at least some part of the paddle.
Gif: MIT Aerospace Controls Laboratory
In a perfect world, the information an AI model is fed would be accurate all the time, and the model would perform well (left). But in some cases, the AI may be given inaccurate data, causing it to miss its targets (middle). The new algorithm CARRL helps AIs account for uncertainty in their data inputs, yielding better performance when relying on poor data (right).
Across all test scenarios, the RL model was better at compensating for potentially inaccurate or “noisy” data with CARRL than without it.
But the results also show that, as with humans, too much self-doubt and uncertainty can be unhelpful. In the collision avoidance scenario, for example, indulging in too much uncertainty caused the main moving object in the simulation to avoid both the obstacle and its goal. “There is definitely a limit to how ‘skeptical’ the algorithm can be without becoming overly conservative,” Everett says.
This research was funded by Ford Motor Company, but Everett notes that it could be applicable to many other commercial applications requiring safety-aware AI, in domains including aerospace, healthcare, and manufacturing.
“This work is a step toward my vision of creating ‘certifiable learning machines’—systems that can discover how to explore and perform in the real world on their own, while still having safety and robustness guarantees,” says Everett. “We'd like to bring CARRL into robotic hardware while continuing to explore the theoretical challenges at the interface of robotics and AI.”
#439012 Video Friday: Man-Machine Synergy ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
Let us know if you have suggestions for next week, and enjoy today's videos.
Man-Machine Synergy Effectors, Inc. is a Japanese company working on an absolutely massive “human machine synergistic effect device,” which is a huge robot controlled by a nearby human using a haptic rig.
From the look of things, the next generation will be able to move around. Whoa.
[ MMSE ]
This method of loading and unloading AMRs without having them ever stop moving is so obvious that there must be some equally obvious reason why I've never seen it done in practice.
The LoadRunner is able to transport and sort parcels weighing up to 30 kilograms. This makes it the perfect luggage carrier for airports. These AI-driven go-carts can also work in concert as larger collectives to carry large, heavy and bulky objects. Every LoadRunner can also haul up to four passive trailers. Powered by four electric motors, the LoadRunner brakes sharply at just the right moment in front of its destination and the payload slides from the robot onto the delivery platform.
[ Fraunhofer ] via [ Gizmodo ]
Ayato Kanada at Kyushu University wrote in to share this clever “dislocatable joint,” a way of combining continuum and rigid robots.
[ Paper ]
Thanks Ayato!
The DodgeDrone challenge revisits the popular dodgeball game in the context of autonomous drones. Specifically, participants will have to code navigation policies to fly drones between waypoints while avoiding dynamic obstacles. Drones are fast but fragile systems: as soon as something hits them, they will crash! Since objects will move towards the drone with different speeds and acceleration, smart algorithms are required to avoid them!
This could totally happen in real life, and we need to be prepared for it!
[ DodgeDrone Challenge ]
In addition to winning the Best Student Design Competition CREATIVITY Award at HRI 2021, this paper would also have won the Best Paper Title award, if that award existed.
[ Paper ]
Robots are traditionally bound by a fixed morphology during their operational lifetime, which is limited to adapting only their control strategies. Here we present the first quadrupedal robot that can morphologically adapt to different environmental conditions in outdoor, unstructured environments.
We show that the robot exploits its training to effectively transition between different morphological configurations, exhibiting substantial performance improvements over a non-adaptive approach. The benefits of real-world morphological adaptation demonstrated here show the potential for a new embodied way of incorporating adaptation into future robotic designs.
[ Nature ]
A drone video shot in a Minneapolis bowling alley was hailed as an instant classic. One Hollywood veteran said it “adds to the language and vocabulary of cinema.” One IEEE Spectrum editor said “hey that's pretty cool.”
[ Bryant Lake Bowl ]
It doesn't take a robot to convince me to buy candy, but I think if I buy candy from Relay it's a business expense, right?
[ RIS ]
DARPA is making progress on its AI dogfighting program, with physical flight tests expected this year.
[ DARPA ACE ]
Unitree Robotics has realized that the Empire needs to be overthrown!
[ Unitree ]
Windhover Labs, an emerging leader in open and reliable flight software and hardware, announces the upcoming availability of its first hardware product, a low-cost modular flight computer for commercial drones and small satellites.
[ Windhover ]
As robots and autonomous systems are poised to become part of our everyday lives, the University of Michigan and Ford are opening a one-of-a-kind facility where they’ll develop robots and roboticists that help make lives better, keep people safer and build a more equitable society.
[ U Michigan ]
The adaptive robot Rizon, combined with a new hybrid electrostatic and gecko-inspired gripping pad developed by Stanford BDML, can manipulate bulky, non-smooth items in the most effort-saving way, broadening its applications in retail and household environments.
[ Flexiv ]
Thanks Yunfan!
I don't know why anyone would want things to get MORE icy, but if you do for some reason, you can make it happen with a Husky.
Is winter over yet?
[ Clearpath ]
Skip ahead to about 1:20 to see a pair of Gita robots following a Spot following a human like a chain of lil’ robot ducklings.
[ PFF ]
Here are a couple of retro robotics videos, one showing teleoperated humanoids from 2000, and the other showing a robotic guide dog from 1976 (!).
[ Tachi Lab ]
Thanks Fan!
If you missed Chad Jenkins' talk “That Ain’t Right: AI Mistakes and Black Lives” last time, here's another opportunity to watch from Robotics Today, and it includes a top notch panel discussion at the end.
[ Robotics Today ]
Since its founding in 1979, the Robotics Institute (RI) at Carnegie Mellon University has been leading the world in robotics research and education. In the mid-1990s, RI created NREC as the applied R&D center within the Institute with a specific mission to apply robotics technology in an impactful way on real-world applications. In this talk, I will go over numerous R&D programs that I have led at NREC in the past 25 years.
[ CMU ]