#439527 It’s (Still) Really Hard for Robots to ...
Every time we think that we’re getting a little bit closer to a household robot, new research comes out showing just how far we have to go. Certainly, we’ve seen lots of progress in specific areas like grasping and semantic understanding and whatnot, but putting it all together into a hardware platform that can actually get stuff done autonomously still seems quite a way off.
In a paper presented at ICRA 2021 this month, researchers from the University of Bremen conducted a “Robot Household Marathon Experiment,” where a PR2 robot was tasked with first setting a table for a simple breakfast and then cleaning up afterwards in order to “investigate and evaluate the scalability and the robustness aspects of mobile manipulation.” While this sort of thing kinda seems like something robots should have figured out, it may not surprise you to learn that it’s actually still a significant challenge.
PR2’s job here is to prepare breakfast by bringing a bowl, a spoon, a cup, a milk box, and a box of cereal to a dining table. After breakfast, the PR2 then has to place the washable objects into the dishwasher, put the cereal box back into its storage location, and toss the milk box into the trash. The objects vary in shape and appearance, and the robot is only given symbolic descriptions of object locations (in the fridge, on the counter). It’s a very realistic but also very challenging scenario, which probably explains why it takes the poor PR2 90 minutes to complete it.
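The paper's actual task specification is written in CRAM plans, not Python, and the exact storage locations beyond the fridge and counter aren't spelled out here. Purely as an illustration of what "symbolic descriptions of object locations" means (all names and locations below are invented for the sketch), the task might look something like this:

```python
# Hypothetical sketch: objects are referenced by symbolic locations, not
# coordinates. The robot must resolve each symbol perceptually at runtime.
BREAKFAST_ITEMS = {
    "bowl":       {"from": "counter", "to": "dining-table"},
    "spoon":      {"from": "drawer",  "to": "dining-table"},
    "cup":        {"from": "counter", "to": "dining-table"},
    "milk-box":   {"from": "fridge",  "to": "dining-table"},
    "cereal-box": {"from": "shelf",   "to": "dining-table"},
}

CLEANUP_PLAN = [
    ("bowl",       "dishwasher"),  # washable items go in the dishwasher
    ("spoon",      "dishwasher"),
    ("cup",        "dishwasher"),
    ("cereal-box", "shelf"),       # back to its storage location
    ("milk-box",   "trash"),       # disposable
]

def set_table(items):
    """Yield (object, pick-location, place-location) steps for table setting."""
    for obj, loc in items.items():
        yield (obj, loc["from"], loc["to"])
```

The point of the symbolic form is that "in the fridge" stays valid even when the milk box's exact pose changes between runs.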
First off, kudos to that PR2 for still doing solid robotics research, right? And this research is definitely solid: the fact that all of this stuff (perception, motion planning, grasping, high-level strategizing) works as well as it does is incredibly impressive. Remember, this is 90 minutes of full autonomy doing tasks that are relatively complex in an environment that’s only semi-structured and somewhat, but not overly, robot-optimized. In fact, over five trials, the robot succeeded in the table setting task all five times. It wasn’t flawless, and the PR2 did have particular trouble grasping tricky objects like the spoon, but the framework that the researchers developed was able to recover from every single failure by tweaking parameters and retrying the failed action. Arguably, if you think long term, failing a lot but recovering every time is even more useful than never failing at all.
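The researchers' recovery mechanism lives inside their CRAM-based plan executive, which this article doesn't detail. As a hedged illustration of the general "tweak parameters and retry" pattern described above (every name and number here is invented, not the authors' code), the failure-recovery loop might look like:

```python
import random

def attempt_grasp(obj, grip_force):
    """Stand-in for a real grasp action; tricky objects fail more often."""
    fail_prob = 0.6 if obj == "spoon" else 0.2
    return random.random() > fail_prob

def grasp_with_recovery(obj, max_retries=5):
    """Retry a failed grasp, tweaking a parameter (here, grip force) each time."""
    grip_force = 1.0
    for attempt in range(1, max_retries + 1):
        if attempt_grasp(obj, grip_force):
            return attempt          # how many tries the grasp took
        grip_force *= 1.2           # adjust a parameter before retrying
    raise RuntimeError(f"unrecoverable failure grasping {obj}")
```

The key design idea is that most failures are transient, so a cheap parameterized retry recovers them, and only exhausting the retries counts as an unrecoverable failure.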
The cleanup task was more difficult for the PR2, which suffered unrecoverable failures during two of the five trials. The paper describes what happened:
Cleaning the table was more challenging than table setting, due to the use of the dishwasher and the difficulty of sideways grasping objects located far away from the edge of the table. In two out of the five runs we encountered an unrecoverable failure. In one of the runs, due to the instability of the grasping trajectory and the robot not tracking it perfectly, the fingers of the robot ended up pushing the milk away during grasping, which resulted in a very unstable grasp. As a result, the box fell to the ground in the carrying phase. Although during the table setting the robot was able to pick up a toppled over cup and successfully bring it to the table, picking up the milk box from the ground was impossible for the PR2. The other unrecoverable failure was the dishwasher grid getting stuck in PR2’s finger. Another major failure happened when placing the cereal box into its vertical drawer, which was difficult because the robot had to reach very high and approach its joint limits. When the gripper opened, the box fell on a side in the shelf, which resulted in it being crushed when the drawer was closed.
Failure cases including unstably grasping the milk, getting stuck in the dishwasher, and crushing the cereal.
Photos: EASE
While we’re focusing a little bit on the failures here, that’s really just to illustrate the exceptionally challenging edge cases that the robot encountered. Again, I want to emphasize that while the PR2 was not successful all the time, its performance over 90 minutes of fully autonomous operation is still very impressive. And I really appreciate that the researchers committed to an experiment like this, putting their robot into a practical(ish) environment doing practical(ish) tasks under full autonomy over a long(ish) period of time. We often see lots of incremental research headed in this general direction, but it’ll take a lot more work like this before robots are useful enough in the real world to reliably handle those critical breakfast tasks.
The Robot Household Marathon Experiment, by Gayane Kazhoyan, Simon Stelter, Franklin Kenghagho Kenfack, Sebastian Koralewski, and Michael Beetz from the CRC EASE at the Institute for Artificial Intelligence in Germany, was presented at ICRA 2021.
#439105 This Robot Taught Itself to Walk in a ...
Recently, in a Berkeley lab, a robot called Cassie taught itself to walk, a little like a toddler might. Through trial and error, it learned to move in a simulated world. Then its handlers sent it strolling through a minefield of real-world tests to see how it’d fare.
And, as it turns out, it fared pretty damn well. With no further fine-tuning, the robot—which is basically just a pair of legs—was able to walk in all directions, squat down while walking, right itself when pushed off balance, and adjust to different kinds of surfaces.
It’s the first time a machine learning approach known as reinforcement learning has been so successfully applied in two-legged robots.
This likely isn’t the first robot video you’ve seen, nor the most polished.
For years, the internet has been enthralled by videos of robots doing far more than walking and regaining their balance. All that is table stakes these days. Boston Dynamics, the heavyweight champ of robot videos, regularly releases mind-blowing footage of robots doing parkour, backflips, and complex dance routines. At times, it can seem the world of I, Robot is just around the corner.
This sense of awe is well-earned. Boston Dynamics is one of the world’s top makers of advanced robots.
But they still have to meticulously hand-program and choreograph the movements of the robots in their videos. This is a powerful approach, and the Boston Dynamics team has done incredible things with it.
In real-world situations, however, robots need to be robust and resilient. They need to regularly deal with the unexpected, and no amount of choreography will do. Which is how, it’s hoped, machine learning can help.
Reinforcement learning has been most famously exploited by Alphabet’s DeepMind to train algorithms that thrash humans at some of the most difficult games. Simplistically, it’s modeled on the way we learn. Touch the stove, get burned, don’t touch the damn thing again; say please, get a jelly bean, politely ask for another.
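The "touch the stove, get burned" idea can be made concrete with a minimal sketch. This toy example has nothing to do with Cassie's actual training setup (which uses a far more sophisticated policy and simulator); it's just tabular Q-learning on a tiny one-dimensional world, where an agent learns by trial and error that stepping right eventually earns a reward:

```python
import random

def q_learning_line_world(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.1):
    """Toy tabular Q-learning: agent on a line, reward only at the rightmost state."""
    actions = (+1, -1)  # step right or step left
    q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if random.random() < eps:            # explore occasionally...
                a = random.choice(actions)
            else:                                # ...otherwise act greedily
                a = max(actions, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)
            reward = 1.0 if s2 == n_states - 1 else 0.0
            # "Touch the stove, get burned": nudge the value estimate toward
            # the observed reward plus the best estimated future value.
            target = reward + gamma * max(q[(s2, act)] for act in actions)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q
```

After training, the learned values prefer "step right" in every non-terminal state, even though the agent was never told the rule; it discovered it from reward alone. Real systems replace the lookup table with a neural network, but the update loop is the same idea.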
In Cassie’s case, the Berkeley team used reinforcement learning to train an algorithm to walk in a simulation. It’s not the first AI to learn to walk in this manner. But skills learned in simulation don’t always translate to the real world.
Subtle differences between the two can (literally) trip up a fledgling robot as it tries out its sim skills for the first time.
To overcome this challenge, the researchers used two simulations instead of one. The first simulation, an open source training environment called MuJoCo, was where the algorithm drew upon a large library of possible movements and, through trial and error, learned to apply them. The second simulation, called Matlab SimMechanics, served as a low-stakes testing ground that more precisely matched real-world conditions.
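MuJoCo and MATLAB SimMechanics are real tools with their own APIs, and the Berkeley pipeline is far more involved than anything shown here. Purely as a stand-in for the two-simulator idea (all models and numbers below are invented), here is a toy sketch where a controller gain is tuned in a cheap, idealized simulator and then sanity-checked in a second simulator that models the plant more faithfully by adding friction:

```python
def coarse_sim(gain, steps=200, dt=0.05):
    """Cheap 'training' simulator: idealized point mass, PD control, no friction."""
    pos, vel = 1.0, 0.0
    for _ in range(steps):
        accel = -gain * pos - 0.5 * vel  # simple PD controller toward pos = 0
        vel += accel * dt
        pos += vel * dt
    return abs(pos)                      # final distance from the target

def accurate_sim(gain, steps=200, dt=0.05, friction=0.3):
    """More faithful 'validation' simulator: same plant plus joint friction."""
    pos, vel = 1.0, 0.0
    for _ in range(steps):
        accel = -gain * pos - 0.5 * vel - friction * vel
        vel += accel * dt
        pos += vel * dt
    return abs(pos)

def tune_then_validate(candidates=(0.5, 1.0, 2.0)):
    """Pick the gain that does best in the cheap sim, then check it in the accurate one."""
    best = min(candidates, key=coarse_sim)
    return best, accurate_sim(best)
```

The design rationale mirrors the paper's: do the expensive trial-and-error search where iteration is cheap, then use a second, closer-to-reality model as a low-stakes filter before risking real hardware.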
Once the algorithm was good enough, it graduated to Cassie.
And amazingly, it didn’t need further polishing. Said another way, when it was born into the physical world, it knew how to walk just fine. It was also quite robust. The researchers write that two motors in Cassie’s knee malfunctioned during the experiment, but the robot was able to adjust and keep on trucking.
Other labs have been hard at work applying machine learning to robotics.
Last year Google used reinforcement learning to train a (simpler) four-legged robot. And OpenAI has used it with robotic arms. Boston Dynamics, too, will likely explore ways to augment their robots with machine learning. New approaches—like this one aimed at training multi-skilled robots or this one offering continuous learning beyond training—may also move the dial. It’s early yet, however, and there’s no telling when machine learning will exceed more traditional methods.
And in the meantime, Boston Dynamics bots are testing the commercial waters.
Still, robotics researchers who were not part of the Berkeley team think the approach is promising. Edward Johns, head of Imperial College London’s Robot Learning Lab, told MIT Technology Review, “This is one of the most successful examples I have seen.”
The Berkeley team hopes to build on that success by trying out “more dynamic and agile behaviors.” So, might a self-taught parkour-Cassie be headed our way? We’ll see.
Image Credit: University of California Berkeley Hybrid Robotics via YouTube