Tag Archives: do

#439537 Tencent’s New Wheeled Robot Flicks Its ...

Ollie (I think its name is Ollie) is “a novel wheel-legged robot” from Tencent Robotics. The word “novel” is used quite appropriately here, since Ollie sports some unusual planar parallel legs atop driven wheels. It’s also got a multifunctional actuated tail that not only enables some impressive acrobatics, but also allows the robot to transition from biped-ish to triped-ish to stand up extra tall and support a coffee-carrying manipulator.

It’s a little disappointing that the tail only appears to be engaged for specific motions—it doesn’t seem like it’s generally part of the robot’s balancing or motion planning, which feels like a missed opportunity. But this robot is relatively new, and its development is progressing rapidly, which we know because an earlier version of the hardware and software was presented at ICRA 2021 a couple weeks back. Although, to be honest with you, there isn’t a lot of info on the new one besides the above video, so we’ll be learning what we can from the ICRA paper.

The paper is mostly about developing a nonlinear balancing controller for the robot, and they’ve done a bang-up job with it, with the robot remaining steady even while executing sequences of dynamic motions. The jumping and one-legged motions are particularly cool to watch. And, well, that’s pretty much it for the ICRA paper, which (unfortunately) barely addresses the tail at all, except to say that currently the control system assumes that the tail is fixed. We’re guessing that this is just a symptom of the ICRA paper submission deadline being back in October, and that a lot of progress has been made since then.
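The paper’s controller itself is nonlinear and model-based, but the core balancing problem it solves is easy to illustrate. Here’s a minimal sketch (not the authors’ controller) of a wheel-balancing robot modeled as an inverted pendulum, where the commanded base acceleration is the control input and simple PD state feedback keeps the body upright; all gains and dimensions below are invented for illustration.

```python
import math

def simulate(theta0, kp, kd, l=0.5, g=9.81, dt=0.001, t_end=3.0):
    """Euler-integrate an inverted pendulum balanced by accelerating its base.

    theta is the body lean angle from vertical (rad). The control input is
    the base (wheel) acceleration a = kp*theta + kd*theta_dot, which enters
    the pendulum dynamics through the -(a/l)*cos(theta) term.
    """
    theta, theta_dot = theta0, 0.0
    for _ in range(int(t_end / dt)):
        a = kp * theta + kd * theta_dot            # PD feedback on lean angle
        theta_ddot = (g / l) * math.sin(theta) - (a / l) * math.cos(theta)
        theta_dot += theta_ddot * dt
        theta += theta_dot * dt
    return theta

# A 0.2 rad (~11 degree) initial lean is driven back to upright.
final_lean = simulate(0.2, kp=40.0, kd=10.0)
```

The real robot’s controller has to handle the full nonlinear dynamics, jumps, and single-leg stances, which is exactly where a linear PD sketch like this falls apart.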

Seeing the arm and sensor package at the end of the video is a nod to some sort of practical application, and I suppose that the robot’s ability to stand up to reach over that counter is some justification for using it for a delivery task. But it seems like it’s got so much more to offer, you know? Plenty of far more boring robot platforms could be delivering coffee, so let’s find something for this robot to do that involves more backflips.

Balance Control of a Novel Wheel-legged Robot: Design and Experiments, by Shuai Wang, Leilei Cui, Jingfan Zhang, Jie Lai, Dongsheng Zhang, Ke Chen, Yu Zheng, Zhengyou Zhang, and Zhong-Ping Jiang from Tencent Robotics X, was presented at ICRA 2021.

Posted in Human Robots

#439527 It’s (Still) Really Hard for Robots to ...

Every time we think that we’re getting a little bit closer to a household robot, new research comes out showing just how far we have to go. Certainly, we’ve seen lots of progress in specific areas like grasping and semantic understanding and whatnot, but putting it all together into a hardware platform that can actually get stuff done autonomously still seems quite a way off.

In a paper presented at ICRA 2021 this month, researchers from the University of Bremen conducted a “Robot Household Marathon Experiment,” where a PR2 robot was tasked with first setting a table for a simple breakfast and then cleaning up afterwards in order to “investigate and evaluate the scalability and the robustness aspects of mobile manipulation.” While this sort of thing kinda seems like something robots should have figured out, it may not surprise you to learn that it’s actually still a significant challenge.

PR2’s job here is to prepare breakfast by bringing a bowl, a spoon, a cup, a milk box, and a box of cereal to a dining table. After breakfast, the PR2 then has to place washable objects into the dishwasher, put the cereal box back into its storage location, and toss the milk box into the trash. The objects vary in shape and appearance, and the robot is only given symbolic descriptions of object locations (in the fridge, on the counter). It’s a very realistic but also very challenging scenario, which probably explains why it takes the poor PR2 90 minutes to complete it.

First off, kudos to that PR2 for still doing solid robotics research, right? And this research is definitely solid—the fact that all of this stuff (perception, motion planning, grasping, high-level strategizing) works as well as it does is incredibly impressive. Remember, this is 90 minutes of full autonomy doing tasks that are relatively complex in an environment that’s only semi-structured and somewhat, but not overly, robot-optimized. In fact, over five trials, the robot succeeded in the table setting task five times. It wasn’t flawless, and the PR2 did have particular trouble with grasping tricky objects like the spoon, but the framework that the researchers developed was able to successfully recover from every single failure by tweaking parameters and retrying the failed action. Arguably, failing a lot but also being able to recover a lot is even more useful than not failing at all, if you think long term.
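That recover-by-retrying pattern is worth sketching in the abstract: run an action, and on failure adjust its parameters and try again. Everything below is hypothetical (the toy grasp model, the 0.30 m target approach height, the fixed 3 cm correction per retry) and stands in for whatever parameters the real framework actually tweaks.

```python
def with_retries(action, params, adjust, max_attempts=5):
    """Run `action(params)`; on failure, adjust the parameters and retry."""
    for attempt in range(1, max_attempts + 1):
        if action(params):
            return attempt, params          # success: report attempts used
        params = adjust(params)             # tweak parameters, then retry
    raise RuntimeError("unrecoverable failure")

# Toy stand-in for a grasp: succeeds only if the approach height is
# within 2 cm of a (hypothetical) ideal 0.30 m.
def grasp(p):
    return abs(p["height"] - 0.30) < 0.02

# Toy recovery strategy: lower the approach height a little each retry.
def adjust(p):
    return {"height": p["height"] - 0.03}

attempts, good_params = with_retries(grasp, {"height": 0.40}, adjust)
# starting at 0.40 m, the 4th attempt (0.31 m) is the first to succeed
```

The “unrecoverable failure” branch is the interesting part: it is exactly what the PR2 hit twice in the clean-up task, when no amount of parameter tweaking could pick a milk box up off the floor.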

The clean up task was more difficult for the PR2, and it suffered unrecoverable failures during two of the five trials. The paper describes what happened:

Cleaning the table was more challenging than table setting, due to the use of the dishwasher and the difficulty of sideways grasping objects located far away from the edge of the table. In two out of the five runs we encountered an unrecoverable failure. In one of the runs, due to the instability of the grasping trajectory and the robot not tracking it perfectly, the fingers of the robot ended up pushing the milk away during grasping, which resulted in a very unstable grasp. As a result, the box fell to the ground in the carrying phase. Although during the table setting the robot was able to pick up a toppled over cup and successfully bring it to the table, picking up the milk box from the ground was impossible for the PR2. The other unrecoverable failure was the dishwasher grid getting stuck in PR2’s finger. Another major failure happened when placing the cereal box into its vertical drawer, which was difficult because the robot had to reach very high and approach its joint limits. When the gripper opened, the box fell on a side in the shelf, which resulted in it being crushed when the drawer was closed.

Failure cases including unstably grasping the milk, getting stuck in the dishwasher, and crushing the cereal.
Photos: EASE

While we’re focusing a little bit on the failures here, that’s really just to illustrate the exceptionally challenging edge cases that the robot encountered. Again, I want to emphasize that while the PR2 was not successful all the time, its performance over 90 minutes of fully autonomous operation is still very impressive. And I really appreciate that the researchers committed to an experiment like this, putting their robot into a practical(ish) environment doing practical(ish) tasks under full autonomy over a long(ish) period of time. We often see lots of incremental research headed in this general direction, but it’ll take a lot more work like this before robots are useful enough in the real world to reliably handle those critical breakfast tasks.

The Robot Household Marathon Experiment, by Gayane Kazhoyan, Simon Stelter, Franklin Kenghagho Kenfack, Sebastian Koralewski and Michael Beetz from the CRC EASE at the Institute for Artificial Intelligence in Germany, was presented at ICRA 2021.

Posted in Human Robots

#439495 Legged Robots Do Surprisingly Well in ...

Here on Earth, we’re getting good enough at legged robots that we’re starting to see a transition from wheels to legs for challenging environments, especially environments with some uncertainty as to exactly what kind of terrain your robot might encounter. Beyond Earth, we’re still heavily reliant on wheeled vehicles, but even that might be starting to change. While wheels do pretty well on the Moon and on Mars, there are lots of other places to explore, like smaller moons and asteroids. And there, it’s not just terrain that’s a challenge: it’s gravity.

In low gravity environments, any robot moving over rough terrain risks entering a flight phase. Perhaps an extended flight phase, depending on how low the gravity is, which can be dangerous to robots that aren’t prepared for it. Researchers at the Robotic Systems Lab at ETH Zurich have been doing some experiments with the SpaceBok quadruped, and they’ve published a paper in IEEE T-RO showing that it’s possible to teach SpaceBok to effectively bok around in low gravity environments while using its legs to reorient itself during flight, exhibiting “cat-like jumping and landing” behaviors through vigorous leg-wiggling.

Also, while I’m fairly certain that “bok” is not a verb that means “to move dynamically in low gravity using legs,” I feel like that’s what it should mean. Sort of like pronk, except in space. Let’s make it so!

Just look at that robot bok!

This reorientation technique was developed using deep reinforcement learning, and then transferred from simulation to a real SpaceBok robot, albeit in two degrees of freedom rather than three. The real challenge with this method is just how complicated things get when you start wiggling multiple limbs in the air trying to get to a specific configuration, since the dynamics here are (as the paper puts it) “highly non-linear,” and it proved somewhat difficult to even simulate everything well enough. What you see in the simulation, incidentally, is an environment similar to Ceres, the largest object in the asteroid belt (technically a dwarf planet), which has a surface gravity of about 0.03 g.
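The physics the learned policy has to exploit can be shown with a toy planar model (an illustration of zero-angular-momentum reorientation, not SpaceBok’s controller). In flight the total angular momentum stays zero, so a joint sweep of Δφ rotates the body by −I_leg/(I_body + I_leg)·Δφ; sweeping with the leg extended (high inertia) and returning it tucked (low inertia) leaves a net body rotation each cycle. The inertia values below are invented.

```python
import math

def body_rotation(I_body, I_leg, dphi):
    """Body rotation produced by a joint sweep dphi at zero angular momentum."""
    return -I_leg / (I_body + I_leg) * dphi

def net_per_cycle(I_body, I_hi, I_lo, dphi):
    """Sweep the leg forward extended (inertia I_hi), return it tucked (I_lo)."""
    return body_rotation(I_body, I_hi, dphi) + body_rotation(I_body, I_lo, -dphi)

# Invented inertias (kg*m^2): body 1.0, leg extended 0.4, leg tucked 0.1.
cycle = net_per_cycle(1.0, 0.4, 0.1, math.pi / 2)
print(round(math.degrees(cycle), 1))  # -17.5 degrees of net body rotation per cycle
```

A few of these paddling cycles add up to a full flip, which is roughly what the vigorous leg-wiggling in the video is doing, except with four coupled legs and much messier dynamics.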

Although SpaceBok has “space” right in the name, it’s not especially optimized for this particular kind of motion. As the video shows, having an actuated hip joint could make the difference between a reliable soft landing and, uh, not. Not landing softly is a big deal, because an uncontrolled bounce could send the robot flying huge distances, which is what happened to the Philae lander on comet 67P/Churyumov–Gerasimenko back in 2014.

For more details on SpaceBok’s space booking, we spoke with the paper’s first author, Nikita Rudin, via email.

IEEE Spectrum: Why are legs ideal for mobility in low gravity environments?

Rudin: In low gravity environments, rolling on wheels becomes more difficult because of reduced traction. However, legs can exploit the low gravity and use high jumps to move efficiently. With high jumps, you can also clear large obstacles along the way, which is harder to do in higher gravity.
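Some back-of-envelope ballistics (the 1.5 m/s takeoff speed is an assumed number) show why: the same push-off that hops about 11 cm on Earth keeps a robot airborne for over ten seconds at the Ceres-like 0.03 g used in the paper’s simulations.

```python
EARTH_G = 9.81             # m/s^2
CERES_G = 0.03 * EARTH_G   # ~0.03 g surface gravity, per the article

def vertical_jump(v, g):
    """Flight time (s) and apex height (m) for a vertical jump at takeoff speed v (m/s)."""
    return 2 * v / g, v ** 2 / (2 * g)

t_earth, h_earth = vertical_jump(1.5, EARTH_G)  # ~0.31 s,  ~0.11 m
t_ceres, h_ceres = vertical_jump(1.5, CERES_G)  # ~10.2 s, ~3.8 m
```

Those multi-second flight phases are exactly why mid-air reorientation, which terrestrial leg controllers can safely ignore, becomes the central problem here.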

Were there unique challenges to training your controller in 2D and 3D relative to training controllers for terrestrial legged robot motion?

The main challenge is the long flight phase, which is not present in terrestrial locomotion. In Earth gravity, robots (and animals) use reaction forces from the ground to balance. During a jump, they don't usually need to re-orient themselves. In the case of low gravity, we have extended flight phases (multiple seconds) and only short contacts with the ground. The robot needs to be able to re-orient and balance in the air. Otherwise, a small disturbance at the moment of the jump will slowly flip the robot. In short, in low gravity, there is a new control problem that can be neglected on Earth.

Besides the addition of a hip joint, what other modifications would you like to make to the robot to enhance its capabilities? Would a tail be useful, for example? Or very heavy shoes?

A tail is a very interesting idea and heavy shoes would definitely help, however, they increase the total weight, which is costly in space. We actually add some minor weight to feet already (in the paper we analyze the effect of these weights). Another interesting addition would be a joint in the center of the robot allowing it to do cat-like backbone torsion.

How does the difficulty of this problem change as the gravity changes?

With changing gravity you change the importance of mid-air re-orientation compared to ground contacts. For locomotion, low-gravity is harder from the reasoning above. However, if the robot is dropped and needs to perform a flip before landing, higher gravity is harder because you have less time for the whole process.

What are you working on next?

We have a few ideas for the next projects, including a legged robot specifically designed and certified for space, and exploring cat-like re-orientation on Earth with smaller/faster robots. We would also like to simulate a zero-g environment on Earth by dropping the robot from a few dozen meters into a safety net, and of course, a parabolic flight is still very much one of our objectives. However, we will probably need a smaller robot there as well.

Cat-Like Jumping and Landing of Legged Robots in Low Gravity Using Deep Reinforcement Learning, by Nikita Rudin, Hendrik Kolvenbach, Vassilios Tsounis, and Marco Hutter from ETH Zurich, is published in IEEE Transactions on Robotics.

Posted in Human Robots
