Tag Archives: parkour
#439618 Q&A: Boston Dynamics on Atlas’s ...
Yesterday's video from Boston Dynamics showing a pair of Atlas robots doing parkour together is already up to nearly 3 million views, and for good reason. The company continues to push forward the state of the art for dynamic bipedal robots, now by mixing in perception as well as upper-body maneuvers that humanoid robots find particularly challenging. A behind-the-scenes video and blog post provided an uncharacteristic amount of detail about the process that Boston Dynamics goes through to make videos like these, but we still had questions. And happily, Boston Dynamics had answers!
Here's the new Atlas parkour video, if you missed our post yesterday:
For more details from the experts, we spoke with Scott Kuindersma, the Atlas team lead at Boston Dynamics, and Ben Stephens, the Atlas controls lead, via email.
IEEE Spectrum: Can you describe some of the constraints that Atlas is operating under, and how brittle its behaviors are? For example, can it handle changes in friction, and can it adapt autonomously if different sequences of movements are required?
Scott Kuindersma and Ben Stephens: The ability to adapt behaviors to a range of circumstances is a key design principle for Atlas, so for an activity like parkour, we frequently test the robot by making changes to the geometry of the course. Atlas is also able to deal with things like feet sliding to some extent. We run subsets of these behaviors on wood, mats, asphalt, grass, and surfaces with grip texture without explicitly telling the robot that the friction and ground compliances are different. But there are of course limits—parkour on ice probably wouldn't work. (Spot, which is used in a wide range of commercial environments, has more explicit mechanisms for detecting slip events and automatically changing its control response to cope with different types of surfaces).
Atlas' control system also provides some flexibility in reordering move sequences, whether these sequences are provided ahead of time (as was the case here) or if they are generated online as the output of a planning process. The idea behind Atlas' behavior libraries is that they can be reused in new environments.
Spectrum: It's very impressive to see Atlas using more upper body for dynamic maneuvers. To what extent will Atlas continue to use human-ish motion for dynamic mobility, as opposed to motions that could be more optimized for unique robotic capabilities?
Kuindersma and Stephens: We're interested in creating behaviors that take full advantage of the hardware even if the resulting motion is not perfectly humanlike. That said, the incredible breadth and quality of human motion remains a source of inspiration for us, particularly in cases like parkour where the coordination and athleticism on display motivates useful hardware and software innovation.
Spectrum: You mentioned in your blog post that the robot has no spine or shoulder blades, which places some limitations on what it can do. After several iterations of Atlas, how much bioinspired design do you think is the right amount?
Kuindersma and Stephens: When building robots like Atlas, there's always a long list of engineering tradeoffs that shape the final design. The current robot has evolved over several generations of humanoids at Boston Dynamics and represents a good tradeoff between size, range of motion, and strength-to-weight ratio. When our work identifies physical limits of the machine, that becomes useful information to our design team. In some cases, limitations can be improved through incremental upgrades. But for new robot designs, we have to make strategic decisions about how the limitations of the current machine conflict with what we want the robot to do over the next few years. These decisions are primarily motivated by our technical goals and experimental analyses and less so by human performance data.
Spectrum: Last we heard, Atlas was not using machine learning in these contexts. When you're teaching Atlas new behaviors, how exactly do you do that?
Kuindersma and Stephens: The behaviors Atlas performs during parkour can be expressed as optimization problems that compute strategies for coordinating forces and motion over time. We use optimization both to design the behaviors in Atlas' library offline and to adapt and execute them online. This programming strategy works well when you can describe what you want as a tractable optimization problem, but not all tasks are like that. For example, machine learning becomes an essential tool for programming behavior in cases where detailed solutions are hard to write down (e.g., vision-dominant manipulation tasks). We're excited about opportunities to solve problems by leveraging the strengths of both approaches going forward.
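The idea of a behavior as the solution to an optimization problem can be sketched with a toy example. The code below is purely illustrative and is not Boston Dynamics' method: it computes a minimum-effort acceleration profile for a one-dimensional double integrator that must reach a target position with zero final velocity, exploiting the fact that `numpy.linalg.lstsq` returns the minimum-norm solution of an underdetermined linear system.

```python
import numpy as np

def plan_min_effort(target, steps=50, dt=0.02):
    """Toy 'behavior as optimization': find accelerations a_0..a_{N-1}
    minimizing sum a_k^2 subject to reaching `target` with zero final
    velocity, for the double integrator x' = v, v' = a."""
    j = np.arange(steps)
    A = np.vstack([
        dt * dt * (steps - 1 - j),  # final position as a linear function of the a_j
        dt * np.ones(steps),        # final velocity as a linear function of the a_j
    ])
    b = np.array([target, 0.0])
    # lstsq returns the minimum-norm (here, minimum-effort) solution
    # of the underdetermined system A a = b.
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

def simulate(a, dt=0.02):
    """Forward-integrate the double integrator under the planned accelerations."""
    x = v = 0.0
    for ak in a:
        x += dt * v
        v += dt * ak
    return x, v
```

Real whole-body behaviors involve far larger optimization problems (contact forces, joint limits, full rigid-body dynamics), but the structure is the same: describe what you want as costs and constraints, and let a solver produce the motion.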
Spectrum: At this point, is Atlas more constrained by hardware or software? If you want Atlas to do something new, what draws the line between impossible and not?
Kuindersma and Stephens: Finding and operating at the limits of the robot hardware is part of the motivation for doing things like parkour. But if we consider a longer term vision for what we want robots like Atlas to do, there is a lot of opportunity for software innovation using the existing hardware. We will continue to improve on both fronts. Over the past seven years, Atlas' behavior has evolved from walking up stairs and moving cardboard boxes to the running, flipping, and dancing you see today. We're excited to see where the next seven years will take us.
#439614 Watch Boston Dynamics’ Atlas Robot ...
At the end of 2020, Boston Dynamics released a spirits-lifting, can’t-watch-without-smiling video of its robots doing a coordinated dance routine. Atlas, Spot, and Handle had some pretty sweet moves, though if we’re being honest, Atlas was the one (or, in this case, two) that really stole the show.
A new video released yesterday has the bipedal humanoid robot stealing the show again, albeit in a way that probably won’t make you giggle as much. Two Atlases navigate a parkour course, complete with leaping onto and between boxes of different heights, shimmying down a balance beam, and throwing synchronized back flips.
The big question that may be on many viewers’ minds is whether the robots are truly navigating the course on their own—making real-time decisions about how high to jump or how far to extend a foot—or if they’re pre-programmed to execute each motion according to a detailed map of the course.
As engineers explain in a second new video and accompanying blog post, it’s a combination of both.
Atlas is equipped with RGB cameras and depth sensors to give it “vision,” providing input to its control system, which is run on three computers. In the dance video linked above and previous videos of Atlas doing parkour, the robot wasn’t sensing its environment and adapting its movements accordingly (though it did make in-the-moment adjustments to keep its balance).
But in the new routine, the Boston Dynamics team says, they created template behaviors for Atlas. The robot can match these templates to its environment, adapting its motions based on what’s in front of it. The engineers had to find a balance between “long-term” goals for the robot—i.e., making it through the whole course—and “short-term” goals, like adjusting its footsteps and posture to keep from keeling over. The motions were refined through both computer simulations and robot testing.
“Our control team has to create algorithms that can reason about the physical complexity of these machines to create a broad set of high energy and coordinated behavior,” said Atlas team lead Scott Kuindersma. “It’s really about creating behaviors at the limits of the robot’s capabilities and getting them all to work together in a flexible control system.”
The limits of the robot’s capabilities were frequently reached while practicing the new parkour course, and getting a flawless recording took many tries. The explainer video includes bloopers of Atlas falling flat on its face—not to mention on its head, stomach, and back, as it under-rotates for flips, crosses its feet while running, and miscalculates the distance it needs to cover on jumps.
I know it’s a robot, but you can’t help feeling sort of bad for it, especially when its feet miss the platform (by a lot) on a jump and its whole upper body comes crashing onto said platform while its legs dangle toward the ground. It’s a move that would severely injure a human, and it makes you wonder whether Atlas survived with its hardware intact.
Ultimately, Atlas is a research and development tool, not a product the company plans to sell commercially (which is probably good, because despite how cool it looks doing parkour, I for one would be more than a little wary if I came across this human-shaped hunk of electronics wandering around in public).
“I find it hard to imagine a world 20 years from now where there aren’t capable mobile robots that move with grace, reliability, and work alongside humans to enrich our lives,” Kuindersma said. “But we’re still in the early days of creating that future.”
Image Credit: Boston Dynamics
#439608 Atlas Shows Most Impressive Parkour ...
Boston Dynamics has just posted a couple of new videos showing their Atlas humanoid robot doing some of the most impressive parkour we've yet seen. Let's watch!
Parkour is the perfect sandbox for the Atlas team at Boston Dynamics to experiment with new behaviors. In this video our humanoid robots demonstrate their whole-body athletics, maintaining balance through a variety of rapidly changing, high-energy activities. Through jumps, balance beams, and vaults, we demonstrate how we push Atlas to its limits to discover the next generation of mobility, perception, and athletic intelligence.

There are a couple of new and exciting things in this video. First, Atlas is doing some serious work with its upper body by vaulting over that bar. It's not supporting its entire weight with one arm, since it's jumping, but it is doing what looks like some fairly complex balancing and weight management using all four of its limbs at once. Most of what we've seen from Atlas up to this point has been lower-body focused, and while the robot has used its arms for forward rolls and similar moves, those have been simpler than what we're seeing here. Aaron Saunders, Boston Dynamics' VP of Engineering, suggested to us earlier this year that the Atlas team would be working on more upper-body behaviors, and it looks like they're now delivering. We expect that Atlas will continue to improve in this direction, and that at some point it'll be able to do the equivalent of a pull-up, which will open up a much wider variety of behaviors.
The second big new thing is that Atlas is now leveraging perception much more heavily, according to Scott Kuindersma, the Atlas team lead at Boston Dynamics, who wrote about it in a blog post:
“Atlas's moves are driven by perception now, and they weren't back then,” Kuindersma explains. “For example, the previous floor routine and dance videos were about capturing our ability to create a variety of dynamic moves and chain them together into a routine that we could run over and over again. In that case, the robot's control system still has to make lots of critical adjustments on the fly to maintain balance and posture goals, but the robot was not sensing and reacting to its environment.”
In this iteration of parkour, the robot is adapting behaviors in its repertoire based on what it sees. This means the engineers don't need to pre-program jumping motions for all possible platforms and gaps the robot might encounter. Instead, the team creates a smaller number of template behaviors that can be matched to the environment and executed online.

This is a pretty big deal. Without perception, Atlas was running its routines blind—as long as the environment was kept more or less completely static, the robot would do okay, but obviously that's a major limitation. What Atlas is doing in this new video is still somewhat limited, in the sense that it's still relying on template behaviors created by humans rather than doing true dynamic planning, but this represents a lot of progress.
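As a loose illustration of the template-matching idea, a behavior library can be queried with the obstacle geometry that perception reports, and the nearest stored template selected. All names and numbers below are hypothetical, not Boston Dynamics' actual parameters:

```python
# Hypothetical template library: each entry pairs a nominal obstacle
# geometry with a pre-designed behavior. Values are illustrative only.
TEMPLATES = [
    {"name": "step_up_low", "height": 0.2, "gap": 0.3},
    {"name": "jump_mid",    "height": 0.4, "gap": 0.6},
    {"name": "jump_high",   "height": 0.6, "gap": 0.9},
]

def match_template(perceived_height, perceived_gap):
    """Pick the stored behavior whose nominal geometry is closest to what
    the perception system reports, rather than pre-programming a motion
    for every possible platform and gap."""
    def distance(t):
        return ((t["height"] - perceived_height) ** 2
                + (t["gap"] - perceived_gap) ** 2)
    return min(TEMPLATES, key=distance)
```

In a real system the selected template would then be warped online to fit the exact measured geometry, with the controller handling the short-term balance adjustments.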
One other thing that's worth paying attention to is how Boston Dynamics thinks of humanoid robots:
“Humanoids are interesting from a couple perspectives,” Kuindersma says. “First, they capture our vision of a go-anywhere, do-anything robot of the future. They may not be the best design for any particular task, but if you wanted to build one platform that could perform a wide variety of physical tasks, we already know that a human form factor is capable of doing that.”

This tends to be the justification for humanoid robots, along with the idea that you need a humanoid form factor to operate in human environments. But Kuindersma is absolutely right when he says that humanoids may not be the best design for any particular task, and at least in the near term, practical commercial robots tend not to be generalists. Even Boston Dynamics' dog-like robot Spot, with its capable legged mobility, is suited primarily to a narrow range of specific tasks—it's great for situations where legs are necessary, but otherwise it's complex and expensive, and wheels often do better. I think it's very important that Boston Dynamics is working towards a go-anywhere, do-anything robot, but it's also important to keep expectations in check and remember that even robots like Atlas are (I would argue) a decade or more away from this generalist vision.
Meanwhile, Boston Dynamics seems, for better or worse, to be moving away from their habit of surprise posting crazy robot videos with zero explanation. Along with the new parkour video, Boston Dynamics has put together a second behind the scenes video:
Can I just say that I love how absolutely trashed the skins on these robots look? That's how you know good work is getting done.
There's a bunch more detail in this blog post, and we sent Boston Dynamics a couple of questions, too. We'll update this post when we hear back later today.
#439105 This Robot Taught Itself to Walk in a ...
Recently, in a Berkeley lab, a robot called Cassie taught itself to walk, a little like a toddler might. Through trial and error, it learned to move in a simulated world. Then its handlers sent it strolling through a minefield of real-world tests to see how it’d fare.
And, as it turns out, it fared pretty damn well. With no further fine-tuning, the robot—which is basically just a pair of legs—was able to walk in all directions, squat down while walking, right itself when pushed off balance, and adjust to different kinds of surfaces.
It’s the first time a machine learning approach known as reinforcement learning has been so successfully applied to two-legged robots.
This likely isn’t the first robot video you’ve seen, nor the most polished.
For years, the internet has been enthralled by videos of robots doing far more than walking and regaining their balance. All that is table stakes these days. Boston Dynamics, the heavyweight champ of robot videos, regularly releases mind-blowing footage of robots doing parkour, back flips, and complex dance routines. At times, it can seem the world of I, Robot is just around the corner.
This sense of awe is well-earned. Boston Dynamics is one of the world’s top makers of advanced robots.
But they still have to meticulously hand-program and choreograph the movements of the robots in their videos. This is a powerful approach, and the Boston Dynamics team has done incredible things with it.
In real-world situations, however, robots need to be robust and resilient. They need to regularly deal with the unexpected, and no amount of choreography will do. Which is how, it’s hoped, machine learning can help.
Reinforcement learning has been most famously exploited by Alphabet’s DeepMind to train algorithms that thrash humans at some of the most difficult games. Simplistically, it’s modeled on the way we learn. Touch the stove, get burned, don’t touch the damn thing again; say please, get a jelly bean, politely ask for another.
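That stove-and-jelly-bean loop is literally what the simplest reinforcement learning algorithms implement. Here is a minimal tabular Q-learning sketch, a toy far simpler than anything the Berkeley team used: an agent on a short line of states learns, by trial and error, that walking right earns the reward waiting at the end.

```python
import random

def q_learning_walk(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a 1-D walk with a reward only at the rightmost
    state. Q[state][action] estimates long-term reward; actions: 0=left, 1=right."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if rng.random() < eps:
                a = rng.randrange(2)                 # explore: try something random
            else:
                a = 1 if Q[s][1] >= Q[s][0] else 0   # exploit: best known action
            s2 = s + 1 if a == 1 else max(0, s - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0   # the "jelly bean" at the goal
            # Nudge the estimate toward reward plus discounted future value.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

Scaled up enormously, with a neural network in place of the table and simulated physics in place of the line of states, the same trial-and-error loop underlies learned locomotion policies like Cassie's.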
In Cassie’s case, the Berkeley team used reinforcement learning to train an algorithm to walk in a simulation. It’s not the first AI to learn to walk in this manner. But skills learned in simulation don’t always translate to the real world.
Subtle differences between the two can (literally) trip up a fledgling robot as it tries out its sim skills for the first time.
To overcome this challenge, the researchers used two simulations instead of one. The first simulation, an open source training environment called MuJoCo, was where the algorithm drew upon a large library of possible movements and, through trial and error, learned to apply them. The second simulation, called Matlab SimMechanics, served as a low-stakes testing ground that more precisely matched real-world conditions.
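The two-stage pipeline can be sketched in the abstract: search for a policy in a cheap, idealized simulator, then validate it in a higher-fidelity one before it ever touches hardware. Everything below is a hypothetical stand-in, not the Berkeley team's setup: a proportional controller plays the role of the policy, and an actuator-lag term plays the role of the physics that only the finer simulator models.

```python
def rollout(gain, lag=0.0, steps=100, dt=0.1, target=1.0):
    """Drive x toward `target` with a proportional controller. `lag` low-pass
    filters the command, standing in for actuator dynamics that only the
    higher-fidelity simulator captures. Returns final tracking error."""
    x = 0.0
    u_prev = 0.0
    for _ in range(steps):
        u = gain * (target - x)
        u_prev = lag * u_prev + (1 - lag) * u  # lag=0 means the command is ideal
        x += dt * u_prev
    return abs(target - x)

def tune_then_validate(gains=(0.5, 1.0, 2.0, 4.0), tol=0.05):
    # Stage 1 (MuJoCo's role in the article): search in the cheap, idealized sim.
    best = min(gains, key=lambda g: rollout(g, lag=0.0))
    # Stage 2 (SimMechanics' role): check the result in a sim closer to reality.
    ok = rollout(best, lag=0.5) < tol
    return best, ok
```

If the stage-2 check fails, the policy goes back to training rather than onto the robot, which is the low-stakes testing the article describes.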
Once the algorithm was good enough, it graduated to Cassie.
And amazingly, it didn’t need further polishing. In other words, once it entered the physical world, it knew how to walk just fine. It was also quite robust. The researchers write that two motors in Cassie’s knee malfunctioned during the experiment, but the robot was able to adjust and keep on trucking.
Other labs have been hard at work applying machine learning to robotics.
Last year Google used reinforcement learning to train a (simpler) four-legged robot. And OpenAI has used it with robotic arms. Boston Dynamics, too, will likely explore ways to augment their robots with machine learning. New approaches—like this one aimed at training multi-skilled robots or this one offering continuous learning beyond training—may also move the dial. It’s early yet, however, and there’s no telling when machine learning will exceed more traditional methods.
And in the meantime, Boston Dynamics bots are testing the commercial waters.
Still, robotics researchers, who were not part of the Berkeley team, think the approach is promising. Edward Johns, head of Imperial College London’s Robot Learning Lab, told MIT Technology Review, “This is one of the most successful examples I have seen.”
The Berkeley team hopes to build on that success by trying out “more dynamic and agile behaviors.” So, might a self-taught parkour-Cassie be headed our way? We’ll see.
Image Credit: University of California Berkeley Hybrid Robotics via YouTube