Tag Archives: is

#439693 Agility Robotics’ Digit is Getting ...

Agility Robotics' Digit humanoid has been taking a bit of a break from work during the pandemic. Most of what we've seen from Agility and Digit over the past year and a half has been decidedly research-y. Don't get me wrong, Digit's been busy making humans look bad and not falling over when it really should have done, but remember that Agility's goal is to make Digit into a useful, practical robot. It's not a research platform—as Agility puts it, Digit is intended to “accelerate business productivity and people's pursuit of a more fulfilling life.” As far as I can make out, this is a fancier way of saying that Digit should really be spending its time doing dull repetitive tasks so that humans don't have to, and in a new video posted today, the robot shows how it can help out with boring warehouse tote shuffling.

The highlights here for me are really in the combination of legged mobility and object manipulation. Right at the beginning of the video, you see Digit squatting all the way down, grasping a tote bin, shuffling backwards to get the bin out from under the counter, and then standing again. There's an unfortunate cut there, but the sequence is shown again at 0:44, and you can see how Digit pulls the tote towards itself and then regrasps it before lifting. Clever. And at 1:20, the robot gives a tote that it just placed on a shelf a little nudge with one arm to make sure it's in the right spot.

These are all very small things, but I think of them as highlights because all of the big things seem to be more or less solved in this scenario. Digit has no problem lifting things, walking around, and not mowing over the occasional human, and once that stuff is all sorted, whether the robot is able to effectively work in an environment like this is to some extent reflected in all of these other little human-obvious things that often make the difference between success and failure.
The clear question, though, is why Digit (or, more broadly, any bipedal robot) is the right robot to be doing this kind of job. There are other robots out there already doing tasks like these in warehouses, and they generally have wheeled bases and manipulation systems specifically designed to move totes and do nothing else. If you were to use one of those robots instead of Digit, my guess is that you'd pay less for it, it would be somewhat safer, and it would likely do the job more efficiently. Fundamentally, Digit can't out box-move a box-moving robot. But the critical thing to consider here is that as soon as you run out of boxes to move, Digit can do all kinds of other things thanks to its versatile humanoid design, while your box-moving robot can only sit in the corner and be sad until more boxes show up.
“We did not set out to build a humanoid robot. We set out to solve mobility.”
—Agility CTO Jonathan Hurst
“Digit is very, very flexible automation,” Agility CTO Jonathan Hurst told us when we asked him about this. “The value of what we're doing is in generality, and having a robot that's going to be able to work carrying totes for three or four hours, then go unload boxes from trailers for three or four hours, keep up with you if you change your workflow entirely. Many of these spaces are designed specifically around the human form factor, and it's possible for a robot like Digit to do all of these different boring, repetitive jobs. And then when things get complicated, humans are still doing it.”
The value of having a human-like robot in a human environment comes into play as soon as you start thinking about typical warehouse situations that would be trivial for a human to solve but that are impossible for wheeled robots. For example, Hurst says that Digit is capable of using a stool to reach objects on high shelves. You could, of course, design a wheeled robot with an extension system to allow it to reach high shelves, but you're now adding more cost and complexity, and the whole point of a generalist humanoid robot is that in human environments, you just don't have to worry about environmental challenges. Or that's the idea, anyway, but as Hurst explains, the fact that Digit ended up with a mostly humanoid form factor was more like a side effect of designing with specific capabilities in mind:
We did not set out to build a humanoid robot. We set out to solve mobility, and we've been on a methodical path towards understanding physical interaction in the world. Agility started with our robot Cassie, and one of the big problems with Cassie was that we didn't have enough inertia in the robot's body to counteract the leg swinging forward, which is why Digit has an upright torso. We wanted to give ourselves more control authority in the yaw direction with Cassie, so we experimented with putting a tail on the robot, and it turns out that the best tail is a pair of bilaterally symmetrical tails, one on either side.
Our goal was to design a machine that can go where people go while manipulating things in the world, and we ended up with this kind of form factor. It's a very different path for us to have gotten here than the vast majority of humanoid robots, and there's an awful lot of subtlety that is in our machine that is absent in most other machines.

IEEE Spectrum: So are you saying that Digit's arms sort of started out as tails to help Cassie with yaw control?
Jonathan Hurst: There are many examples like this—we've been going down this path where we find a solution to a problem like yaw control, and it happens to look like it does with animals, but it's also a solution that's optimal in several different ways, like physical interaction and being able to catch the robot when it falls. It's not like it's a compromise between one thing and another thing, it's straight up the right solution for these three different performance design goals.
Looking back, we started by asking, should we put a reaction wheel or a gyro on Cassie for yaw control? Well, that's just wasted mass. We could use a tail, and there are a lot of nice robots with tails, but usually they're for controlling pitch. It's the same with animals; if you look at lizards, they use their tails for mid-air reorienting to land on their feet after they jump. Cassie doesn't need a tail for that, but we only have a couple of small feet on the ground to work with. And if you look at other bipedal animals, every one of them has some other way of getting that yaw authority. If you watch an ostrich run, when it turns, it sticks its wing out to get the control that it needs.
And so all of these things just fall into place, and a bilaterally symmetrical pair of tails is the best way to control yaw in a biped. When you see Digit walking and its arms are swinging, that's not something that we added to make the motion look right. It looks right because it literally is right—it's the physics of mobility. And that's a good sign for us that we're on the right path to getting the performance that we want.
“We're going for general purpose, but starting with some of the easiest use cases.”
—Agility CTO Jonathan Hurst
Spectrum: We've seen Digit demonstrating very impressive mobility skills. Why are we seeing a demo in a semi-constrained warehouse environment instead of somewhere that would more directly leverage Digit's unique advantages?
Jonathan Hurst: It's about finding the earliest, most appropriate, and most valuable use cases. There's a lot to this robot, and we're not going to be just a tote packing robot. We're not building a specialized robot for this one application, but we have a couple of pretty big logistics partners who are interested in the flexibility and the manipulation capabilities of this machine. And yeah, what you're seeing now is the robot on a flattish floor, but it's also not going to be tripped up by a curb, or a step, or a wire cover, or other things on the ground. You don't have to worry about anything like that. So next, it's an easy transition to unloading trailers, where it's going to have to be stepping over gaps and up and down things and around boxes on the floor and stuff like that. We're going for general purpose, but starting with some of the easiest use cases.
Damion Shelton, CEO: We're trying to prune down the industry space, to get to something where there's a clear value proposition with a partner and deploying there. We can respect the difficulty of the general purpose use case and work to deploy early and profitably, as opposed to continuing to push for the outdoor applications. The blessing and the curse of the Ford opportunity is that it's super interesting, but also super hard. And so it's very motivating, and it's clear to us that that's where one of the ultimate opportunities is, but it's also far enough away from a deployment timeline that it just doesn't map on to a viable business model.
This is a point that every robotics company runs into sooner or later, where aspirations have to succumb to the reality of selling robots in a long-term sustainable way. It's definitely not a bad thing, it just means that we may have to adjust our expectations accordingly. No matter what kind of flashy cutting-edge capabilities your robot has, if it can't cost effectively do dull or dirty or dangerous stuff, nobody's going to pay you money for it. And cost effective usefulness is, arguably, one of the biggest challenges in bipedal robotics right now. In the past, I've been impressed by Digit's weightlifting skills, or its ability to climb steep and muddy hills. I'll be just as impressed when it starts making money for Agility by doing boring repetitive tasks in warehouses, because that means that Agility will be able to keep working towards those more complex, more exciting things. “It's not general manipulation, and we're not solving the grand challenges of robotics,” says Hurst. “Yet. But we're on our way.” Continue reading

Posted in Human Robots

#439559 MIT is Building a Dynamic, Acrobatic ...

For a long time, having a bipedal robot that could walk on a flat surface without falling over (and that could also maybe occasionally climb stairs or something) was a really big deal. But we’re more or less past that now. Thanks to the talented folks at companies like Agility Robotics and Boston Dynamics, we now expect bipedal robots to meet or exceed actual human performance for at least a small subset of dynamic tasks. The next step seems to be to find ways of pushing the limits of human performance, which it turns out means acrobatics. We know that IHMC has been developing their own child-size acrobatic humanoid named Nadia, and now it sounds like researchers from Sangbae Kim’s lab at MIT are working on a new acrobatic robot of their own.

We’ve seen a variety of legged robots from MIT’s Biomimetic Robotics Lab, including Cheetah and HERMES. Recently, they’ve been doing a bunch of work with their spunky little Mini Cheetahs (developed with funding and support from Naver Labs), which are designed for some dynamic stuff like gait exploration and some low-key four-legged acrobatics.

In a paper recently posted to arXiv (to be presented at Humanoids 2020 in July), Matthew Chignoli, Donghyun Kim, Elijah Stanger-Jones, and Sangbae Kim describe “a new humanoid robot design, an actuator-aware kino-dynamic motion planner, and a landing controller as part of a practical system design for highly dynamic motion control of the humanoid robot.” So it’s not just the robot itself, but all of the software infrastructure necessary to get it to do what they want it to do.

The MIT Humanoid performing a back flip off of a 0.4 m platform in simulation.
Image: MIT

First let’s talk about the hardware that we’ll be looking at once the MIT Humanoid makes it out of simulation. It’s got the appearance of a sort of upright version of Mini Cheetah, but that appearance is deceiving, says MIT’s Matt Chignoli. While the robot’s torso and arms are very similar to Mini Cheetah, the leg design is totally new and features redesigned actuators with higher power and better torque density. “The main focus of the leg design is to enable smooth but dynamic ‘heel-to-toe’ actions that happen in humans’ walking and running, while maintaining low inertia for smooth interactions with ground contacts,” Chignoli told us in an email. “Dynamic ankle actions have been rare in humanoid robots. We hope to develop robust, low inertia and powerful legs that can mimic human leg actions.”

The design strategy matters because the field of humanoid robots is presently dominated by hydraulically actuated robots and robots with series elastic actuators. As we continue to improve the performance of our proprioceptive actuator technology, as we have done for this work, we aim to demonstrate that our unique combination of high torque density, high bandwidth force control, and the ability to mitigate impacts is optimal for highly dynamic locomotion of any legged robot, including humanoids.

-Matt Chignoli

Now, it’s easy to say “oh well pfft that’s just in simulation and you can get anything to work in simulation,” which, yeah, that’s kinda true. But MIT is putting a lot of work into accurately simulating everything that they possibly can—in particular, they’re modeling the detailed physical constraints that the robot operates under as it performs dynamic motions, allowing the planner to take those constraints into account and (hopefully) resulting in motions that match the simulation pretty accurately.

“When it comes to the physical capabilities of the robot, anything we demonstrate in simulation should be feasible on the robot,” Chignoli says. “We include in our simulations detailed models for the robot’s actuators and battery, models that have been validated experimentally. Such detailed models are not frequently included in dynamic simulations for robots.” But simulation is still simulation, of course, and no matter how good your modeling is, that transfer can be tricky, especially when doing highly dynamic motions.
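One common way actuator models like the ones Chignoli describes enter a dynamics simulation is as a torque-speed envelope that limits what each motor can actually deliver. Here's a minimal sketch of that idea using a generic linear brushless-motor approximation; this is an illustration of the general technique, not MIT's actual model or parameters:

```python
def clamp_motor_torque(tau_cmd: float, omega: float,
                       tau_max: float, omega_max: float) -> float:
    """Clamp a commanded torque to a simple linear torque-speed envelope.

    At zero speed the motor can deliver up to tau_max; available torque
    falls off linearly with joint speed and reaches zero at omega_max.
    (A generic approximation for illustration, not MIT's actual model.)
    """
    available = max(0.0, tau_max * (1.0 - abs(omega) / omega_max))
    return max(-available, min(available, tau_cmd))

# At half of max speed, only half of peak torque is available,
# so a 30 N·m command gets clamped to 20 N·m:
tau = clamp_motor_torque(tau_cmd=30.0, omega=10.0, tau_max=40.0, omega_max=20.0)
# tau == 20.0
```

A simulator applies a clamp like this at every control step, so a motion planner that ignores it will produce trajectories the real hardware can't track; making the planner "actuator-aware" means respecting this envelope during planning.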

“Despite our confidence in our simulator’s ability to accurately mimic the physical capabilities of our robot with high fidelity, there are aspects of our simulator that remain uncertain as we aim to deploy our acrobatic motions onto hardware,” Chignoli explains. “The main difficulty we see is state estimation. We have been drawing upon research related to state estimation for drones, which makes use of visual odometry. Without having an assembled robot to test these new estimation strategies on, though, it is difficult to judge the simulation to real transfer for these types of things.”

We’re told that the design of the MIT Humanoid is complete, and that the plan is to build it for real over the summer, with the eventual goal of doing parkour over challenging terrains. It’s tempting to fixate on the whole acrobatics and parkour angle of things (and we’re totally looking forward to some awesome videos), but according to Chignoli, the really important contribution here is the framework rather than the robot itself:

The acrobatic motions that we demonstrate on our small-scale humanoid are less about the actual acrobatics and more about what the ability to perform such feats implies for both our hardware as well as our control framework. The motions are important in terms of the robot’s capabilities because we are proving, at least in simulation, that we can replicate the dynamic feats of Boston Dynamics’ ATLAS robot using an entirely different actuation scheme (proprioceptive electromagnetic motors vs. hydraulic actuators, respectively). Verification that proprioceptive actuators can achieve the necessary torque density to perform such motions while retaining the advantages of low mechanical impedance and high-bandwidth torque control is important as people consider how to design the next generation of dynamic humanoid robots. Furthermore, the acrobatic motions demonstrate the ability of our “actuator-aware” motion planner to generate feasible motion plans that push the boundaries of what our robot can do.

The MIT Humanoid Robot: Design, Motion Planning, and Control For Acrobatic Behaviors, by Matthew Chignoli, Donghyun Kim, Elijah Stanger-Jones, and Sangbae Kim from MIT and UMass Amherst, will be presented at Humanoids 2020 this July. You can read a preprint on arXiv here. Continue reading

Posted in Human Robots

#439251 Is AI the Future of Training for New ...

Everywhere you look in technology today, you find buzz about the promise of emergent technologies such as machine learning (ML) and artificial intelligence (AI). From curating the content that we watch on streaming services to finding ways to improve intense logistical processes, ML- and AI-based technologies already impact our lives in many ways. Increasingly, these …

The post Is AI the Future of Training for New Employees? appeared first on TFOT. Continue reading

Posted in Human Robots

#439237 Agility Robotics’ Cassie Is Now ...

Bipedal robots are a huge hassle. They’re expensive, complicated, fragile, and they spend most of their time almost but not quite falling over. That said, bipeds are worth it because if you want a robot to go everywhere humans go, the conventional wisdom is that the best way to do so is to make robots that can walk on two legs like most humans do. And the most frequent, most annoying two-legged thing that humans do to get places? Going up and down stairs.

Stairs have been a challenge for robots of all kinds (bipeds, quadrupeds, tracked robots, you name it) since, well, forever. And usually, when we see bipeds going up or down stairs nowadays, it involves a lot of sensing, a lot of computation, and then a fairly brittle attempt that all too often ends in tears for whoever has to put that poor biped back together again.

You’d think that the solution to bipedal stair traversal would just involve better sensing and more computation to model the stairs and carefully plan footsteps. But an approach featured in an upcoming Robotics: Science and Systems conference paper from Oregon State University and Agility Robotics does away with all of that and instead just throws a Cassie biped at random outdoor stairs with absolutely no sensing at all. And it works spectacularly well.

A couple of things to bear in mind: Cassie is “blind” in the sense that it has no information about the stairs that it’s going up or down. The robot does get proprioceptive feedback, meaning that it knows what kind of contact its limbs are making with the stairs. Also, the researchers do an admirable job of keeping that safety tether slack, and Cassie isn’t being helped by it in the least—it’s just there to prevent a catastrophic fall.

What really bakes my noodle about this video is how amazing Cassie is at being kind of terrible at stair traversal. The robot is a total klutz: it runs into railings, stubs its toes, slips off of steps, misses steps completely, and occasionally goes backwards. Amazingly, Cassie still manages not only to not fall, but also to keep going until it gets where it needs to be.

And this is why this research is so exciting—rather than try to develop some kind of perfect stair traversal system that relies on high quality sensing and a lot of computation to optimally handle stairs, this approach instead embraces those real-world constraints and still achieves performance that's robust, if perhaps not the most elegant.

The secret to Cassie’s stair mastery isn’t much of a secret at all, since there’s a paper about it on arXiv. The researchers used reinforcement learning to train a simulated Cassie on permutations of stairs based on typical city building codes, with sets of stairs up to eight individual steps. To transfer the learned stair-climbing strategies (referred to as policies) effectively from simulation to the real world, the simulation included a variety of disturbances designed to represent the kinds of things that are hard to simulate accurately. For example, Cassie had its simulated joints messed with, its simulated processing speed tweaked, and even the simulated ground friction was jittered around. So, even though the simulation couldn’t perfectly mimic real ground friction, randomly mixing things up ensures that the controller (the software telling the robot how to move) gains robustness to a much wider range of situations.
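The dynamics randomization described above can be sketched as a per-episode sampler: each simulated rollout gets its own perturbed parameters, so the learned policy can't rely on any single "true" set of values. The parameter names and ranges below are illustrative placeholders, not the paper's actual values:

```python
import random

def sample_dynamics_randomization(rng: random.Random) -> dict:
    """Sample one training episode's worth of simulated disturbances.

    Randomly jittering hard-to-model quantities (joint properties,
    processing delay, ground friction) forces the learned controller
    to be robust to the whole range, rather than overfitting to one
    imperfect simulation. All ranges here are illustrative.
    """
    return {
        # Scale factors applied to each joint's damping and link masses
        "joint_damping_scale": rng.uniform(0.8, 1.2),
        "link_mass_scale": rng.uniform(0.9, 1.1),
        # Simulated processing/communication delay, in control steps
        "policy_delay_steps": rng.randint(0, 3),
        # Ground friction coefficient, jittered per episode
        "ground_friction": rng.uniform(0.5, 1.1),
    }

rng = random.Random(0)
episode_params = [sample_dynamics_randomization(rng) for _ in range(3)]
```

At the start of each episode the simulator would be reconfigured with one such sample before the policy collects experience.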

One peculiarity of using reinforcement learning to train a robot is that even if you come up with something that works really well, it’s sometimes unclear exactly why. You may have noticed in the first video that the researchers are only able to hypothesize about the reasons for the controller’s success, and we asked one of the authors, Kevin Green, to try and explain what’s going on:

“Deep reinforcement learning has similar issues that we are seeing in a lot of machine learning applications. It is hard to understand the reasoning for why a learned controller performs certain actions. Is it exploiting a quirk of your simulation or your reward function? Is it perhaps stuck in a local minima? Sometimes the reward function is not specific enough and the policy can exhibit strange, vestigial behaviors simply because they are not rewarded or penalized. On the other hand, a reward function can be too constraining and can lead to a policy which doesn’t fully explore the space of possible actions, limiting performance. We do our best to ensure our simulation is accurate and that our rewards are objective and descriptive. From there, we really act more like biomechanists, observing a functioning system for hints as to the strategies that it is using to be highly successful.”

One of the strategies that they observed, first author Jonah Siekmann told us, is that Cassie does better on stairs when it’s moving faster, which is a bit of a counterintuitive thing for robots generally:

“Because the robot is blind, it can choose very bad foot placements. If it tries to place its foot on the very corner of a stair and shift its weight to that foot, the resulting force pushes the robot back down the stairs. At walking speed, this isn’t much of an issue because the robot’s momentum can overcome brief moments where it is being pushed backwards. At low speeds, the momentum is not sufficient to overcome a bad foot placement, and it will keep getting knocked backwards down the stairs until it falls. At high speeds, the robot tends to skip steps which pushes the robot closer to (and sometimes over) its limits.”

These bad foot placements are what lead to some of Cassie’s more impressive feats, Siekmann says. “Some of the gnarlier descents, where Cassie skips a step or three and recovers, were especially surprising. The robot also tripped on ascent and recovered in one step a few times. The physics are complicated, so to see those accurate reactions embedded in the learned controller was exciting. We haven’t really seen that kind of robustness before.” In case you’re worried that all of that robustness is in video editing, here’s an uninterrupted video of ten stair ascents and ten stair descents, featuring plenty of gnarliness.

We asked the researchers whether Cassie is better at stairs than a blindfolded human would be. “It’s difficult to say,” Siekmann told us. “We’ve joked lots of times that Cassie is superhuman at stair climbing because in the process of filming these videos we have tripped going up the stairs ourselves while we’re focusing on the robot or on holding a camera.”

A robot being better than a human at a dynamic task like this is obviously a very high bar, but my guess is that most of us humans are actually less prepared for blind stair navigation than Cassie is, because Cassie was explicitly trained on stairs that were uneven: “a small amount of noise (± 1cm) is added to the rise and run of each step such that the stairs are never entirely uniform, to prevent the policy from deducing the precise dimensions of the stairs via proprioception and subsequently overfitting to perfectly uniform stairs.” Speaking as someone who just tried jogging up my stairs with my eyes closed in the name of science, I absolutely relied on the assumption that my stairs were uniform. And when humans can’t rely on assumptions like that, it screws us up, even if we have eyeballs equipped.

Like most robot-y things, Cassie is operating under some significant constraints here. If Cassie seems even stompier than it usually is, that’s because it’s using this specific stair controller which is optimized for stairs and stair-like things but not much else.

“When you train neural networks to act as controllers, over time the learning algorithm refines the network so that it maximizes the reward specific to the environment that it sees,” explains Green. “This means that by training on flights of stairs, we get a very different looking controller compared to training on flat ground.” Green says that the stair controller works fine on flat ground, it’s just less efficient (and noisier). They’re working on ways of integrating multiple gait controllers that the robot can call on depending on what it’s trying to do; conceivably this might involve some very simple perception system just to tell the robot “hey look, there are some stairs, better engage stair mode.”
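The multi-controller idea Green describes amounts to a simple dispatch layer: a lightweight perception signal picks which learned policy is active at any moment. A minimal sketch, where the mode names, interface, and stand-in policies are all hypothetical:

```python
from typing import Callable, Dict, List

# Each learned policy maps a proprioceptive observation to joint targets.
Policy = Callable[[List[float]], List[float]]

def make_dispatcher(policies: Dict[str, Policy], default: str) -> Callable:
    """Return a controller that routes each observation to the policy
    matching the currently detected terrain mode, falling back to a
    default when the mode is unrecognized."""
    def controller(observation: List[float], terrain_mode: str) -> List[float]:
        policy = policies.get(terrain_mode, policies[default])
        return policy(observation)
    return controller

# Hypothetical stand-ins for trained neural-network controllers:
flat_policy = lambda obs: [0.0 for _ in obs]
stair_policy = lambda obs: [0.1 for _ in obs]

controller = make_dispatcher(
    {"flat": flat_policy, "stairs": stair_policy}, default="flat"
)
action = controller([0.0, 0.0, 0.0], terrain_mode="stairs")
```

The hard part in practice isn't the dispatch itself but blending smoothly at the moment of switching, which is presumably part of what the researchers mean by "integrating" multiple gait controllers.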

The paper ends with the statement that “this work has demonstrated surprising capabilities for blind locomotion and leaves open the question of where the limits lie.” I’m certainly surprised at Cassie’s stair capabilities, and it’ll be exciting to see what other environments this technique can be applied to. If there are limits, I’m sure that Cassie is going to try and find them.

Blind Bipedal Stair Traversal via Sim-to-Real Reinforcement Learning, by Jonah Siekmann, Kevin Green, John Warila, Alan Fern, and Jonathan Hurst from Oregon State University and Agility Robotics, will be presented at RSS 2021 in July. Continue reading

Posted in Human Robots