#439237 Agility Robotics’ Cassie Is Now ...
Bipedal robots are a huge hassle. They’re expensive, complicated, fragile, and they spend most of their time almost but not quite falling over. That said, bipeds are worth it: if you want a robot to go everywhere humans go, the conventional wisdom is that it should walk on two legs, the way most humans do. And the most frequent, most annoying two-legged thing that humans do to get places? Going up and down stairs.
Stairs have been a challenge for robots of all kinds (bipeds, quadrupeds, tracked robots, you name it) since, well, forever. And usually, when we see bipeds going up or down stairs nowadays, it involves a lot of sensing, a lot of computation, and then a fairly brittle attempt that all too often ends in tears for whoever has to put that poor biped back together again.
You’d think that the solution to bipedal stair traversal would involve better sensing and more computation to model the stairs and carefully plan footsteps. But an approach featured in an upcoming Robotics: Science and Systems (RSS) conference paper from Oregon State University and Agility Robotics does away with all of that and instead just throws a Cassie biped at random outdoor stairs with absolutely no sensing at all. And it works spectacularly well.
A couple of things to bear in mind: Cassie is “blind” in the sense that it has no information about the stairs that it’s going up or down. The robot does get proprioceptive feedback, meaning that it knows what kind of contact its limbs are making with the stairs. Also, the researchers do an admirable job of keeping that safety tether slack, and Cassie isn’t being helped by it in the least—it’s just there to prevent a catastrophic fall.
What really bakes my noodle about this video is how amazing Cassie is at being kind of terrible at stair traversal. The robot is a total klutz: it runs into railings, stubs its toes, slips off of steps, misses steps completely, and occasionally goes backwards. Amazingly, Cassie still manages not only to not fall, but also to keep going until it gets where it needs to be.
And this is why this research is so exciting: rather than trying to develop some kind of perfect stair-traversal system that relies on high-quality sensing and a lot of computation to handle stairs optimally, this approach embraces real-world constraints and manages to achieve efficient, robust performance, if perhaps not the most elegant.
The secret to Cassie’s stair mastery isn’t much of a secret at all, since there’s a paper about it on arXiv. The researchers used reinforcement learning to train a simulated Cassie on permutations of stairs based on typical city building codes, with sets of stairs up to eight individual steps. To transfer the learned stair-climbing strategies (referred to as policies) effectively from simulation to the real world, the simulation included a variety of disturbances designed to represent the kinds of things that are hard to simulate accurately. For example, Cassie had its simulated joints messed with, its simulated processing speed tweaked, and even the simulated ground friction was jittered around. So, even though the simulation couldn’t perfectly mimic real ground friction, randomly mixing things up ensures that the controller (the software telling the robot how to move) gains robustness to a much wider range of situations.
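To make the idea concrete, here’s a minimal sketch of what this kind of dynamics randomization might look like, assuming a Gym-style simulation wrapper. The wrapper methods, parameter names, and ranges are all illustrative assumptions on my part, not the team’s actual code or values:

```python
import numpy as np

# Illustrative dynamics-randomization wrapper in the spirit of the paper's
# sim-to-real approach. The methods and numeric ranges are hypothetical,
# not the Oregon State / Agility Robotics team's actual values.
class RandomizedCassieSim:
    def __init__(self, base_sim):
        self.sim = base_sim

    def reset(self):
        # Jitter hard-to-model physical properties on every training episode
        # so the learned policy can't overfit to one specific simulated world.
        self.sim.set_ground_friction(np.random.uniform(0.6, 1.2))
        self.sim.scale_joint_damping(np.random.uniform(0.8, 1.2))
        self.sim.set_control_delay(np.random.uniform(0.0, 0.01))  # seconds
        return self.sim.reset()

    def step(self, action):
        return self.sim.step(action)
```

Because the randomization happens at every reset, the policy never gets to train against a single consistent set of physics, which is exactly what makes it tolerant of the mismatch between simulation and the real robot.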
One peculiarity of using reinforcement learning to train a robot is that even if you come up with something that works really well, it’s sometimes unclear exactly why. You may have noticed in the first video that the researchers are only able to hypothesize about the reasons for the controller’s success, and we asked one of the authors, Kevin Green, to try and explain what’s going on:
“Deep reinforcement learning has similar issues that we are seeing in a lot of machine learning applications. It is hard to understand the reasoning for why a learned controller performs certain actions. Is it exploiting a quirk of your simulation or your reward function? Is it perhaps stuck in a local minima? Sometimes the reward function is not specific enough and the policy can exhibit strange, vestigial behaviors simply because they are not rewarded or penalized. On the other hand, a reward function can be too constraining and can lead to a policy which doesn’t fully explore the space of possible actions, limiting performance. We do our best to ensure our simulation is accurate and that our rewards are objective and descriptive. From there, we really act more like biomechanists, observing a functioning system for hints as to the strategies that it is using to be highly successful.”
One of the strategies that they observed, first author Jonah Siekmann told us, is that Cassie does better on stairs when it’s moving faster, which is a bit of a counterintuitive thing for robots generally:
“Because the robot is blind, it can choose very bad foot placements. If it tries to place its foot on the very corner of a stair and shift its weight to that foot, the resulting force pushes the robot back down the stairs. At walking speed, this isn’t much of an issue because the robot’s momentum can overcome brief moments where it is being pushed backwards. At low speeds, the momentum is not sufficient to overcome a bad foot placement, and it will keep getting knocked backwards down the stairs until it falls. At high speeds, the robot tends to skip steps which pushes the robot closer to (and sometimes over) its limits.”
These bad foot placements are what lead to some of Cassie’s more impressive feats, Siekmann says. “Some of the gnarlier descents, where Cassie skips a step or three and recovers, were especially surprising. The robot also tripped on ascent and recovered in one step a few times. The physics are complicated, so to see those accurate reactions embedded in the learned controller was exciting. We haven’t really seen that kind of robustness before.” In case you’re worried that all of that robustness is in video editing, here’s an uninterrupted video of ten stair ascents and ten stair descents, featuring plenty of gnarliness.
We asked the researchers whether Cassie is better at stairs than a blindfolded human would be. “It’s difficult to say,” Siekmann told us. “We’ve joked lots of times that Cassie is superhuman at stair climbing because in the process of filming these videos we have tripped going up the stairs ourselves while we’re focusing on the robot or on holding a camera.”
A robot being better than a human at a dynamic task like this is obviously a very high bar, but my guess is that most of us humans are actually less prepared for blind stair navigation than Cassie is, because Cassie was explicitly trained on stairs that were uneven: “a small amount of noise (± 1cm) is added to the rise and run of each step such that the stairs are never entirely uniform, to prevent the policy from deducing the precise dimensions of the stairs via proprioception and subsequently overfitting to perfectly uniform stairs.” Speaking as someone who just tried jogging up my stairs with my eyes closed in the name of science, I absolutely relied on the assumption that my stairs were uniform. And when humans can’t rely on assumptions like that, it screws us up, even when our eyes are open.
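That training recipe is easy to picture in code. Here’s a minimal sketch of randomized stair generation, keeping the paper’s ±1 cm per-step noise; the nominal rise and run ranges are my own guesses at typical building-code values, not the paper’s exact numbers:

```python
import random

def generate_staircase(max_steps=8, noise=0.01):
    """Generate a randomized training staircase per the paper's recipe:
    nominal dimensions drawn once per staircase, then +/- 1 cm of per-step
    noise so no staircase is ever perfectly uniform. The rise/run ranges
    below are illustrative guesses at building-code values."""
    num_steps = random.randint(1, max_steps)
    rise = random.uniform(0.13, 0.19)  # nominal step height (m), assumed range
    run = random.uniform(0.25, 0.32)   # nominal step depth (m), assumed range
    return [
        (rise + random.uniform(-noise, noise),
         run + random.uniform(-noise, noise))
        for _ in range(num_steps)
    ]
```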
Like most robot-y things, Cassie is operating under some significant constraints here. If Cassie seems even stompier than it usually is, that’s because it’s using this specific stair controller which is optimized for stairs and stair-like things but not much else.
“When you train neural networks to act as controllers, over time the learning algorithm refines the network so that it maximizes the reward specific to the environment that it sees,” explains Green. “This means that by training on flights of stairs, we get a very different looking controller compared to training on flat ground.” Green says that the stair controller works fine on flat ground, it’s just less efficient (and noisier). They’re working on ways of integrating multiple gait controllers that the robot can call on depending on what it’s trying to do; conceivably this might involve some very simple perception system just to tell the robot “hey look, there are some stairs, better engage stair mode.”
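Since the team describes that switching scheme as future work, the sketch below is purely hypothetical: a trivial selector that picks between two learned policies based on an assumed binary stair classifier, with no relation to any actual Agility Robotics API.

```python
# Hypothetical sketch of the controller switching the researchers describe
# as future work: a simple perception check picks which learned policy
# drives the robot at any given moment.
def select_policy(camera_frame, flat_policy, stair_policy, stair_detector):
    # stair_detector is an assumed classifier over camera frames that
    # returns True when stairs are ahead.
    if stair_detector(camera_frame):
        return stair_policy   # engage "stair mode"
    return flat_policy        # default, more efficient flat-ground gait
```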
The paper ends with the statement that “this work has demonstrated surprising capabilities for blind locomotion and leaves open the question of where the limits lie.” I’m certainly surprised at Cassie’s stair capabilities, and it’ll be exciting to see what other environments this technique can be applied to. If there are limits, I’m sure that Cassie is going to try and find them.
Blind Bipedal Stair Traversal via Sim-to-Real Reinforcement Learning, by Jonah Siekmann, Kevin Green, John Warila, Alan Fern, and Jonathan Hurst from Oregon State University and Agility Robotics, will be presented at RSS 2021 in July.
#439187 Video Friday: Good Robots for Bad Knees
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
ICRA 2021 – May 30-June 5, 2021 – [Online Event]
RoboCup 2021 – June 22-28, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27-October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA
Let us know if you have suggestions for next week, and enjoy today's videos.
Ascend is a smart knee orthosis designed to improve mobility and relieve knee pain. The customized, lightweight, and comfortable design reduces burden on the knee and intuitively adjusts support as needed. Ascend provides a safe and non-surgical solution for patients with osteoarthritis, knee instability, and/or weak quadriceps.
Each one of these is custom-built, and you can pre-order one now.
[ Roam Robotics ]
Ingenuity’s third flight achieved a longer flight time and more sideways movement than previously attempted. During the 80-second flight, the helicopter climbed to 16 feet (5 meters) and flew 164 feet (50 meters) downrange and back, for a total distance of 328 feet (100 meters). The third flight test took place at “Wright Brothers Field” in Jezero Crater, Mars, on April 25, 2021.
[ NASA ]
This right here, the future of remote work.
The robot will run you about $3,000 USD.
[ VStone ] via [ Robotstart ]
Texas-based aerospace robotics company Wilder Systems enhanced their existing automation capabilities to aid in the fight against COVID-19. Their recent development of a robotic testing system is both increasing capacity for COVID-19 testing and delivering faster results to individuals. The system conducts saliva-based PCR testing, which is considered the gold standard for COVID testing. Based on a protocol developed by Yale and authorized by the FDA, the system does not need additional approvals. This flexible, modular system can run up to 2,000 test samples per day and can be deployed anywhere standard electric power is available.
[ ARM Institute ]
Tests show that people do not like being nearly hit by drones.
But seriously, this research has resulted in some useful potential lessons for deploying drones in areas where they have a chance of interacting with humans.
[ Paper ]
The Ingenuity helicopter made history on April 19, 2021, with the first powered, controlled flight of an aircraft on another planet. How do engineers talk to a helicopter all the way out on Mars? We’ll hear about it from Nacer Chahat of NASA’s Jet Propulsion Laboratory, who worked on the helicopter’s antenna and telecommunication system.
[ NASA ]
A team of scientists from the Max Planck Institute for Intelligent Systems has developed a system for fabricating miniature robots building block by building block, so that each robot functions exactly as required.
[ Max Planck Institute ]
Well this was inevitable, wasn't it?
The pilot regained control and the drone was fine, though.
[ PetaPixel ]
NASA’s Ingenuity Mars Helicopter takes off and lands in this video captured on April 25, 2021, by Mastcam-Z, an imager aboard NASA’s Perseverance Mars rover. As expected, the helicopter flew out of the camera’s field of view while completing a flight plan that took it 164 feet (50 meters) downrange of the landing spot. Keep watching: the helicopter returns to stick the landing. Top speed for the flight was about 2 meters per second, or about 4.5 miles per hour.
[ NASA ]
U.S. Naval Research Laboratory engineers recently demonstrated Hybrid Tiger, an electric unmanned aerial vehicle (UAV) with multi-day endurance flight capability, at Aberdeen Proving Ground, Maryland.
[ NRL ]
This week's CMU RI Seminar is by Avik De from Ghost Robotics, on “Design and control of insect-scale bees and dog-scale quadrupeds.”
Did you watch the Q&A? If not, you should watch the Q&A.
[ CMU ]
Autonomous quadrotors will soon play a major role in search-and-rescue, delivery, and inspection missions, where a fast response is crucial. However, their speed and maneuverability are still far from those of birds and human pilots. What does it take to make drones navigate as well as, or even better than, human pilots?
[ GRASP Lab ]
With the current pandemic accelerating the revolution of AI in healthcare, where is the industry heading in the next 5-10 years? What are the key challenges and most exciting opportunities? These questions will be answered by HAI Co-Director Fei-Fei Li and DeepLearning.AI founder Andrew Ng in this fireside chat virtual event.
[ Stanford HAI ]
Autonomous robots have the potential to serve as versatile caregivers that improve quality of life for millions of people with disabilities worldwide. Yet, physical robotic assistance presents several challenges, including risks associated with physical human-robot interaction, difficulty sensing the human body, and a lack of tools for benchmarking and training physically assistive robots. In this talk, I will present techniques towards addressing each of these core challenges in robotic caregiving.
[ GRASP Lab ]
What does it take to empower persons with disabilities, and why is educating ourselves on this topic the first step towards better inclusion? Why is developing assistive technologies for people with disabilities important in order to contribute to their integration in society? How do we implement the policies and actions required to enable everyone to live their lives fully? ETH Zurich and the Global Shapers Zurich Hub hosted an online dialogue on the topic “For a World without Barriers: Removing Obstacles in Daily Life for People with Disabilities.”
[ Cybathlon ]
#439105 This Robot Taught Itself to Walk in a ...
Recently, in a Berkeley lab, a robot called Cassie taught itself to walk, a little like a toddler might. Through trial and error, it learned to move in a simulated world. Then its handlers sent it strolling through a minefield of real-world tests to see how it’d fare.
And, as it turns out, it fared pretty damn well. With no further fine-tuning, the robot—which is basically just a pair of legs—was able to walk in all directions, squat down while walking, right itself when pushed off balance, and adjust to different kinds of surfaces.
It’s the first time a machine learning approach known as reinforcement learning has been so successfully applied in two-legged robots.
This likely isn’t the first robot video you’ve seen, nor the most polished.
For years, the internet has been enthralled by videos of robots doing far more than walking and regaining their balance. All that is table stakes these days. Boston Dynamics, the heavyweight champ of robot videos, regularly releases mind-blowing footage of robots doing parkour, backflips, and complex dance routines. At times, it can seem the world of I, Robot is just around the corner.
This sense of awe is well-earned. Boston Dynamics is one of the world’s top makers of advanced robots.
But they still have to meticulously hand program and choreograph the movements of the robots in their videos. This is a powerful approach, and the Boston Dynamics team has done incredible things with it.
In real-world situations, however, robots need to be robust and resilient. They need to regularly deal with the unexpected, and no amount of choreography will do. Which is how, it’s hoped, machine learning can help.
Reinforcement learning has been most famously exploited by Alphabet’s DeepMind to train algorithms that thrash humans at some of the most difficult games. Simplistically, it’s modeled on the way we learn. Touch the stove, get burned, don’t touch the damn thing again; say please, get a jelly bean, politely ask for another.
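To make that try-reward-adjust loop concrete, here’s a minimal tabular Q-learning sketch; the toy environment interface (`actions`, `reset()`, `step()`) is an assumption for illustration, and this is far simpler than anything DeepMind or the Berkeley team actually used.

```python
import random

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Minimal tabular Q-learning. `env` is an assumed toy environment with
    an `actions` list, `reset() -> state`, and
    `step(action) -> (next_state, reward, done)`."""
    q = {}  # (state, action) -> estimated long-term reward
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Occasionally explore a random action; otherwise exploit the
            # best-known one.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q.get((state, a), 0.0))
            next_state, reward, done = env.step(action)
            # Nudge the estimate toward observed reward plus discounted
            # future value -- the "get burned / get a jelly bean" update.
            best_next = max(q.get((next_state, a), 0.0) for a in env.actions)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = next_state
    return q
```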
In Cassie’s case, the Berkeley team used reinforcement learning to train an algorithm to walk in a simulation. It’s not the first AI to learn to walk in this manner. But skills learned in simulation don’t always translate to the real world.
Subtle differences between the two can (literally) trip up a fledgling robot as it tries out its sim skills for the first time.
To overcome this challenge, the researchers used two simulations instead of one. The first simulation, an open source training environment called MuJoCo, was where the algorithm drew upon a large library of possible movements and, through trial and error, learned to apply them. The second simulation, called Matlab SimMechanics, served as a low-stakes testing ground that more precisely matched real-world conditions.
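In outline, that’s a train-then-gate workflow: learn in the fast simulator, then validate in the higher-fidelity one before touching hardware. Here’s a hedged sketch, where the callables and the acceptance threshold are illustrative stand-ins rather than the Berkeley team’s actual code:

```python
def sim_to_real_pipeline(train, evaluate, deploy, threshold=0.95):
    """Two-simulator gate: train in a fast simulator (MuJoCo in this work),
    then check the policy in a higher-fidelity one (Matlab SimMechanics)
    before it ever runs on the robot. The callables and the 0.95 threshold
    are illustrative assumptions."""
    policy = train()                  # reinforcement learning in simulator A
    success_rate = evaluate(policy)   # validation rollouts in simulator B
    if success_rate < threshold:
        raise RuntimeError("Failed sim-to-sim validation; keep training.")
    deploy(policy)                    # only now graduate to hardware
    return policy
```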
Once the algorithm was good enough, it graduated to Cassie.
And amazingly, it didn’t need further polishing. Said another way, when it was born into the physical world, it already knew how to walk just fine. It was also quite robust. The researchers write that two motors in Cassie’s knee malfunctioned during the experiment, but the robot was able to adjust and keep on trucking.
Other labs have been hard at work applying machine learning to robotics.
Last year Google used reinforcement learning to train a (simpler) four-legged robot. And OpenAI has used it with robotic arms. Boston Dynamics, too, will likely explore ways to augment their robots with machine learning. New approaches—like this one aimed at training multi-skilled robots or this one offering continuous learning beyond training—may also move the dial. It’s early yet, however, and there’s no telling when machine learning will exceed more traditional methods.
And in the meantime, Boston Dynamics bots are testing the commercial waters.
Still, robotics researchers, who were not part of the Berkeley team, think the approach is promising. Edward Johns, head of Imperial College London’s Robot Learning Lab, told MIT Technology Review, “This is one of the most successful examples I have seen.”
The Berkeley team hopes to build on that success by trying out “more dynamic and agile behaviors.” So, might a self-taught parkour-Cassie be headed our way? We’ll see.
Image Credit: University of California Berkeley Hybrid Robotics via YouTube