Tag Archives: parkour

#439618 Q&A: Boston Dynamics on Atlas’s ...

Yesterday's video from Boston Dynamics showing a pair of Atlas robots doing parkour together is already up to nearly 3 million views, and for good reason. The company continues to push forward the state of the art for dynamic bipedal robots, now by mixing in perception as well as upper-body maneuvers that humanoid robots find particularly challenging. A behind-the-scenes video and blog post provided an uncharacteristic amount of detail about the process that Boston Dynamics goes through to make videos like these, but we still had questions. And happily, Boston Dynamics had answers!

Here's the new Atlas parkour video, if you missed our post yesterday:

For more details from the experts, we spoke with Scott Kuindersma, the Atlas team lead at Boston Dynamics, and Ben Stephens, the Atlas controls lead, via email.

IEEE Spectrum: Can you describe some of the constraints that Atlas is operating under, and how brittle its behaviors are? For example, can it handle changes in friction, and can it adapt autonomously if different sequences of movements are required?

Scott Kuindersma and Ben Stephens: The ability to adapt behaviors to a range of circumstances is a key design principle for Atlas, so for an activity like parkour, we frequently test the robot by making changes to the geometry of the course. Atlas is also able to deal with things like feet sliding to some extent. We run subsets of these behaviors on wood, mats, asphalt, grass, and surfaces with grip texture without explicitly telling the robot that the friction and ground compliances are different. But there are of course limits—parkour on ice probably wouldn't work. (Spot, which is used in a wide range of commercial environments, has more explicit mechanisms for detecting slip events and automatically changing its control response to cope with different types of surfaces).

Atlas' control system also provides some flexibility in reordering move sequences, whether these sequences are provided ahead of time (as was the case here) or if they are generated online as the output of a planning process. The idea behind Atlas' behavior libraries is that they can be reused in new environments.
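The explicit slip detection mentioned for Spot can be illustrated with a toy sketch. Everything below (signal names, thresholds, structure) is invented for illustration, not Boston Dynamics' implementation; the idea is simply that a foot the contact sensor reports as planted should be nearly stationary, so sustained foot motion during stance can be flagged as a slip.

```python
# A hypothetical slip detector: a foot in stance should be (nearly)
# stationary, so a run of over-threshold speed readings while the
# contact sensor reports ground contact is treated as a slip event.

def detect_slip(samples, speed_threshold=0.05, min_samples=3):
    """samples: list of (in_contact, foot_speed_m_s) tuples, newest last.
    Flag a slip only if the last `min_samples` readings are all in
    contact and over threshold, to avoid reacting to sensor noise."""
    recent = samples[-min_samples:]
    if len(recent) < min_samples:
        return False
    return all(contact and speed > speed_threshold for contact, speed in recent)

history = [(True, 0.01), (True, 0.02), (True, 0.09), (True, 0.12), (True, 0.11)]
print(detect_slip(history))  # True: the stance foot has been sliding
```

Requiring several consecutive over-threshold readings is a cheap way to avoid reacting to a single noisy velocity estimate; a real controller would likely fuse joint kinematics and IMU data rather than trust one scalar speed.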

Spectrum: It's very impressive to see Atlas using more upper body for dynamic maneuvers. To what extent will Atlas continue to use human-ish motion for dynamic mobility, as opposed to motions that could be more optimized for unique robotic capabilities?

Kuindersma and Stephens: We're interested in creating behaviors that take full advantage of the hardware even if the resulting motion is not perfectly humanlike. That said, the incredible breadth and quality of human motion remains a source of inspiration for us, particularly in cases like parkour where the coordination and athleticism on display motivates useful hardware and software innovation.

Spectrum: You mentioned in your blog post that the robot has no spine or shoulder blades, which places some limitations on what it can do. After several iterations of Atlas, how much bioinspired design do you think is the right amount?

Kuindersma and Stephens: When building robots like Atlas, there's always a long list of engineering tradeoffs that shape the final design. The current robot has evolved over several generations of humanoids at Boston Dynamics and represents a good tradeoff between size, range of motion, and strength-to-weight ratio. When our work identifies physical limits of the machine, that becomes useful information to our design team. In some cases, limitations can be improved through incremental upgrades. But for new robot designs, we have to make strategic decisions about how the limitations of the current machine conflict with what we want the robot to do over the next few years. These decisions are primarily motivated by our technical goals and experimental analyses and less so by human performance data.

Finding and operating at the limits of the robot hardware is part of the motivation for doing things like parkour.

Spectrum: Last we heard, Atlas was not using machine learning in these contexts. When you're teaching Atlas new behaviors, how exactly do you do that?

Kuindersma and Stephens: The behaviors Atlas performs during parkour can be expressed as optimization problems that compute strategies for coordinating forces and motion over time. We use optimization both to design the behaviors in Atlas' library offline and to adapt and execute them online. This programming strategy works well when you can describe what you want as a tractable optimization problem, but not all tasks are like that. For example, machine learning becomes an essential tool for programming behavior in cases where detailed solutions are hard to write down (e.g., vision-dominant manipulation tasks). We're excited about opportunities to solve problems by leveraging the strengths of both approaches going forward.
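The idea of "optimization problems that compute strategies for coordinating forces and motion over time" can be made concrete with a toy example. The sketch below illustrates the general technique under a deliberately simple 1-D double-integrator model; it is nothing like Atlas' actual controller, just the smallest version of the same structure: decision variables over time, dynamics as constraints, effort as cost.

```python
import numpy as np

# Minimum-effort trajectory for a 1-D point mass: find forces
# u_0..u_{N-1} that bring the mass from rest at 0 to a target
# position with zero final velocity, minimizing sum(u^2).

def min_effort_trajectory(target_pos, N=20, dt=0.05, mass=1.0):
    A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
    B = np.array([[0.0], [dt / mass]])      # control: force
    # x_N = A^N x_0 + G u, where column k of G is A^(N-1-k) B
    G = np.hstack([np.linalg.matrix_power(A, N - 1 - k) @ B for k in range(N)])
    x0 = np.zeros(2)
    xf = np.array([target_pos, 0.0])
    # lstsq on an underdetermined system returns the minimum-norm u,
    # i.e. the force profile with the smallest sum of squares
    u, *_ = np.linalg.lstsq(G, xf - np.linalg.matrix_power(A, N) @ x0, rcond=None)
    return u

u = min_effort_trajectory(1.0)
# Roll the dynamics forward to verify the endpoint constraint is met.
A = np.array([[1.0, 0.05], [0.0, 1.0]])
B = np.array([[0.0], [0.05]])
x = np.zeros(2)
for uk in u:
    x = A @ x + (B * uk).ravel()
print(np.round(x, 3))  # final state ≈ [1, 0]: target reached at rest
```

Real legged-robot versions add contact forces, full-body dynamics, and inequality constraints, which turns this linear-algebra one-liner into a large nonlinear program, but the shape of the problem is the same.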

Spectrum: At this point, is Atlas more constrained by hardware or software? If you want Atlas to do something new, what draws the line between impossible and not?

Kuindersma and Stephens: Finding and operating at the limits of the robot hardware is part of the motivation for doing things like parkour. But if we consider a longer term vision for what we want robots like Atlas to do, there is a lot of opportunity for software innovation using the existing hardware. We will continue to improve on both fronts. Over the past seven years, Atlas' behavior has evolved from walking up stairs and moving cardboard boxes to the running, flipping, and dancing you see today. We're excited to see where the next seven years will take us.

Posted in Human Robots

#439614 Watch Boston Dynamics’ Atlas Robot ...

At the end of 2020, Boston Dynamics released a spirits-lifting, can’t-watch-without-smiling video of its robots doing a coordinated dance routine. Atlas, Spot, and Handle had some pretty sweet moves, though if we’re being honest, Atlas was the one (or, in this case, two) that really stole the show.

A new video released yesterday has the bipedal humanoid robot stealing the show again, albeit in a way that probably won’t make you giggle as much. Two Atlases navigate a parkour course, complete with leaping onto and between boxes of different heights, shimmying down a balance beam, and throwing synchronized back flips.

The big question that may be on many viewers’ minds is whether the robots are truly navigating the course on their own—making real-time decisions about how high to jump or how far to extend a foot—or if they’re pre-programmed to execute each motion according to a detailed map of the course.

As engineers explain in a second new video and accompanying blog post, it’s a combination of both.

Atlas is equipped with RGB cameras and depth sensors to give it “vision,” providing input to its control system, which is run on three computers. In the dance video linked above and previous videos of Atlas doing parkour, the robot wasn’t sensing its environment and adapting its movements accordingly (though it did make in-the-moment adjustments to keep its balance).

But in the new routine, the Boston Dynamics team says, they created template behaviors for Atlas. The robot can match these templates to its environment, adapting its motions based on what’s in front of it. The engineers had to find a balance between “long-term” goals for the robot—i.e., making it through the whole course—and “short-term” goals, like adjusting its footsteps and posture to keep from keeling over. The motions were refined through both computer simulations and robot testing.

“Our control team has to create algorithms that can reason about the physical complexity of these machines to create a broad set of high energy and coordinated behavior,” said Atlas team lead Scott Kuindersma. “It’s really about creating behaviors at the limits of the robot’s capabilities and getting them all to work together in a flexible control system.”

The limits of the robot’s capabilities were frequently reached while practicing the new parkour course, and getting a flawless recording took many tries. The explainer video includes bloopers of Atlas falling flat on its face—not to mention on its head, stomach, and back, as it under-rotates for flips, crosses its feet while running, and miscalculates the distance it needs to cover on jumps.

I know it’s a robot, but you can’t help feeling sort of bad for it, especially when its feet miss the platform (by a lot) on a jump and its whole upper body comes crashing onto said platform, while its legs dangle toward the ground, in a move that would severely injure a human (and makes you wonder if Atlas survived with its hardware intact).

Ultimately, Atlas is a research and development tool, not a product the company plans to sell commercially (which is probably good, because despite how cool it looks doing parkour, I for one would be more than a little wary if I came across this human-shaped hunk of electronics wandering around in public).

“I find it hard to imagine a world 20 years from now where there aren’t capable mobile robots that move with grace, reliability, and work alongside humans to enrich our lives,” Kuindersma said. “But we’re still in the early days of creating that future.”

Image Credit: Boston Dynamics

Posted in Human Robots

#439608 Atlas Shows Most Impressive Parkour ...

Boston Dynamics has just posted a couple of new videos showing their Atlas humanoid robot doing some of the most impressive parkour we've yet seen. Let's watch!

Parkour is the perfect sandbox for the Atlas team at Boston Dynamics to experiment with new behaviors. In this video, the humanoid robots demonstrate whole-body athletics, maintaining their balance through a variety of rapidly changing, high-energy activities. Through jumps, balance beams, and vaults, Boston Dynamics pushes Atlas to its limits to discover the next generation of mobility, perception, and athletic intelligence.

There are a couple of new and exciting things in this video. First, Atlas is doing some serious work with its upper body by vaulting over that bar. It's not supporting its entire weight with one arm, since it's jumping, but it's doing what looks like some fairly complex balancing and weight management using all four of its limbs at once. Most of what we've seen from Atlas up to this point has been lower-body focused, and while the robot has used its arms for forward rolls and similar moves, those have been simpler than what we're seeing here. Aaron Saunders, Boston Dynamics' VP of Engineering, suggested to us earlier this year that the Atlas team would be working on more upper-body stuff, and it looks like they're now delivering. We expect that Atlas will continue to improve in this direction, and that at some point it'll be able to do the equivalent of a pull-up, which will open up a much wider variety of behaviors.
The second big new thing is that Atlas is now leveraging perception much more heavily, according to Scott Kuindersma, the Atlas team lead at Boston Dynamics, who wrote about it in a blog post:
“Atlas's moves are driven by perception now, and they weren't back then,” Kuindersma explains. “For example, the previous floor routine and dance videos were about capturing our ability to create a variety of dynamic moves and chain them together into a routine that we could run over and over again. In that case, the robot's control system still has to make lots of critical adjustments on the fly to maintain balance and posture goals, but the robot was not sensing and reacting to its environment.”
In this iteration of parkour, the robot is adapting behaviors in its repertoire based on what it sees. This means the engineers don't need to pre-program jumping motions for all possible platforms and gaps the robot might encounter. Instead, the team creates a smaller number of template behaviors that can be matched to the environment and executed online.

This is a pretty big deal. Without perception, Atlas was running its routines blind—as long as the environment was kept more or less completely static, the robot would do okay, but obviously that's a major limitation. What Atlas is doing in this new video is still somewhat limited, in the sense that it's still relying on template behaviors created by humans rather than doing true dynamic planning, but this represents a lot of progress.
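To see what "matching a template behavior to the environment" might mean mechanically, here's a deliberately tiny sketch: a library of jump templates, each valid over a range of gap distances, with perception supplying the measured gap and a projectile-range formula standing in for the template's online parameter adaptation. Every number and name here is invented for illustration; this is not how Atlas' behavior library actually works.

```python
import math

TEMPLATES = [
    # (name, min_gap_m, max_gap_m): each template covers a band of gaps
    ("step_across", 0.0, 0.4),
    ("small_jump",  0.4, 0.8),
    ("big_jump",    0.8, 1.4),
]

def match_template(gap_m, g=9.81, launch_angle_deg=45.0):
    for name, lo, hi in TEMPLATES:
        if lo <= gap_m < hi:
            # projectile range R = v^2 sin(2*theta) / g, solved for the
            # takeoff speed v needed to cover the measured gap
            v = math.sqrt(gap_m * g / math.sin(2 * math.radians(launch_angle_deg)))
            return name, round(v, 2)
    return None  # no template covers this gap: the planner must refuse

print(match_template(0.6))   # ('small_jump', 2.43)
print(match_template(2.0))   # None
```

The important property, which the real system shares at vastly greater complexity, is that a handful of templates covers a continuum of environments, and a gap outside every template's range is detectably infeasible rather than silently mishandled.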
One other thing that's worth paying attention to is how Boston Dynamics thinks of humanoid robots:
“Humanoids are interesting from a couple perspectives,” Kuindersma says. “First, they capture our vision of a go-anywhere, do-anything robot of the future. They may not be the best design for any particular task, but if you wanted to build one platform that could perform a wide variety of physical tasks, we already know that a human form factor is capable of doing that.”

This tends to be the justification for humanoid robots, along with the idea that you need a humanoid form factor to operate in human environments. But Kuindersma is absolutely right when he says that humanoids may not be the best design for any particular task, and at least in the near term, practical commercial robots tend not to be generalists. Even Boston Dynamics' dog-like robot Spot, with its capable legged mobility, is suited primarily to a narrow range of specific tasks—it's great for situations where legs are necessary, but otherwise it's complex and expensive, and wheels often do better. I think it's very important that Boston Dynamics is working towards a go-anywhere, do-anything robot, but it's also important to keep expectations in check and remember that even robots like Atlas are (I would argue) a decade or more away from this generalist vision.
Meanwhile, Boston Dynamics seems, for better or worse, to be moving away from its habit of surprise-posting crazy robot videos with zero explanation. Along with the new parkour video, Boston Dynamics has put together a second, behind-the-scenes video:

Can I just say that I love how absolutely trashed the skins on these robots look? That's how you know good work is getting done.

There's a bunch more detail in this blog post, and we sent Boston Dynamics a couple of questions, too. We'll update this post when we hear back later today.

Posted in Human Robots

#439105 This Robot Taught Itself to Walk in a ...

Recently, in a Berkeley lab, a robot called Cassie taught itself to walk, a little like a toddler might. Through trial and error, it learned to move in a simulated world. Then its handlers sent it strolling through a minefield of real-world tests to see how it’d fare.

And, as it turns out, it fared pretty damn well. With no further fine-tuning, the robot—which is basically just a pair of legs—was able to walk in all directions, squat down while walking, right itself when pushed off balance, and adjust to different kinds of surfaces.

It’s the first time a machine learning approach known as reinforcement learning has been so successfully applied in two-legged robots.

This likely isn’t the first robot video you’ve seen, nor the most polished.

For years, the internet has been enthralled by videos of robots doing far more than walking and regaining their balance. All that is table stakes these days. Boston Dynamics, the heavyweight champ of robot videos, regularly releases mind-blowing footage of robots doing parkour, back flips, and complex dance routines. At times, it can seem the world of I, Robot is just around the corner.

This sense of awe is well-earned. Boston Dynamics is one of the world’s top makers of advanced robots.

But they still have to meticulously hand-program and choreograph the movements of the robots in their videos. This is a powerful approach, and the Boston Dynamics team has done incredible things with it.

In real-world situations, however, robots need to be robust and resilient. They need to regularly deal with the unexpected, and no amount of choreography will do. Which is how, it’s hoped, machine learning can help.

Reinforcement learning has been most famously exploited by Alphabet’s DeepMind to train algorithms that thrash humans at some of the most difficult games. Simplistically, it’s modeled on the way we learn. Touch the stove, get burned, don’t touch the damn thing again; say please, get a jelly bean, politely ask for another.
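That stove-and-jelly-bean loop is easiest to see in tabular Q-learning, the textbook form of reinforcement learning. The sketch below trains an agent to walk down a five-cell corridor; it is orders of magnitude simpler than the policy-learning setup used for Cassie, but the learning signal (act, observe reward, nudge value estimates) is the same.

```python
import random

# Tabular Q-learning on a toy 1-D corridor: states 0..4, goal at 4.
# Actions: 0 = step left, 1 = step right. Reward 1 on reaching the
# goal, 0 otherwise.

random.seed(0)
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # value estimate per (state, action)
alpha, gamma, eps = 0.5, 0.9, 0.2           # learning rate, discount, exploration

for _ in range(500):                        # episodes of trial and error
    s = 0
    while s != GOAL:
        # epsilon-greedy: usually exploit the best-known action, sometimes explore
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: Q[s][x])
        s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
        r = 1.0 if s2 == GOAL else 0.0
        # temporal-difference update: nudge Q toward reward + discounted future value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES - 1)]
print(policy)  # greedy policy after training: [1, 1, 1, 1] (always step right)
```

The only reward in the whole problem sits at the goal; the discount factor propagates it backward through the table until every state prefers stepping right.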

In Cassie’s case, the Berkeley team used reinforcement learning to train an algorithm to walk in a simulation. It’s not the first AI to learn to walk this way. But skills learned in simulation don’t always translate to the real world.

Subtle differences between the two can (literally) trip up a fledgling robot as it tries out its sim skills for the first time.

To overcome this challenge, the researchers used two simulations instead of one. The first simulation, an open source training environment called MuJoCo, was where the algorithm drew upon a large library of possible movements and, through trial and error, learned to apply them. The second simulation, called Matlab SimMechanics, served as a low-stakes testing ground that more precisely matched real-world conditions.
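The two-simulator pipeline can be caricatured in a few lines: tune a controller against a cheap model with randomized physical parameters (standing in for the MuJoCo training stage), then check the winner against a held-out, differently parameterized model (standing in for the SimMechanics validation stage). The models, parameters, and controller below are all invented, and the Berkeley team trained a neural-network policy with reinforcement learning rather than grid-searching a gain; this shows only the shape of the workflow.

```python
import random

def simulate(gain, mass, drag, steps=200, dt=0.05):
    """Drive a 1-D point mass toward position 1.0 with a P-D controller;
    return the final absolute position error."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        force = gain * (1.0 - pos) - 2.0 * vel
        vel += dt * (force - drag * vel) / mass
        pos += dt * vel
    return abs(1.0 - pos)

random.seed(1)
# Stage 1: pick the gain that works best across many randomized "worlds",
# so the controller can't overfit one exact mass/drag combination.
candidates = [1.0, 2.0, 4.0, 8.0]
worlds = [(random.uniform(0.5, 2.0), random.uniform(0.0, 1.0)) for _ in range(50)]
best = min(candidates, key=lambda g: sum(simulate(g, m, d) for m, d in worlds))

# Stage 2: validate on a held-out, more "realistic" parameter set.
err = simulate(best, mass=1.3, drag=0.4)
print(best, round(err, 3))
```

The reason for the second stage is the same as in the real pipeline: a controller that only works for one exact set of physical parameters is unlikely to survive contact with reality.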

Once the algorithm was good enough, it graduated to Cassie.

And amazingly, it didn’t need further polishing. Said another way, when it was born into the physical world, it knew how to walk just fine. It was also quite robust. The researchers write that two motors in Cassie’s knee malfunctioned during the experiment, but the robot was able to adjust and keep on trucking.

Other labs have been hard at work applying machine learning to robotics.

Last year Google used reinforcement learning to train a (simpler) four-legged robot. And OpenAI has used it with robotic arms. Boston Dynamics, too, will likely explore ways to augment their robots with machine learning. New approaches—like this one aimed at training multi-skilled robots or this one offering continuous learning beyond training—may also move the dial. It’s early yet, however, and there’s no telling when machine learning will exceed more traditional methods.

And in the meantime, Boston Dynamics bots are testing the commercial waters.

Still, robotics researchers who were not part of the Berkeley team think the approach is promising. Edward Johns, head of Imperial College London’s Robot Learning Lab, told MIT Technology Review, “This is one of the most successful examples I have seen.”

The Berkeley team hopes to build on that success by trying out “more dynamic and agile behaviors.” So, might a self-taught parkour-Cassie be headed our way? We’ll see.

Image Credit: University of California Berkeley Hybrid Robotics via YouTube

Posted in Human Robots

#437940 How Boston Dynamics Taught Its Robots to ...

A week ago, Boston Dynamics posted a video of Atlas, Spot, and Handle dancing to “Do You Love Me.” It was, according to the video description, a way “to celebrate the start of what we hope will be a happier year.” As of today the video has been viewed nearly 24 million times, and the popularity is no surprise, considering the compelling mix of technical prowess and creativity on display.

Strictly speaking, the stuff going on in the video isn’t groundbreaking, in the sense that we’re not seeing any of the robots demonstrate fundamentally new capabilities, but that shouldn’t take away from how impressive it is—you’re seeing the state of the art in humanoid robotics, quadrupedal robotics, and whatever-the-heck-Handle-is robotics.

What is unique about this video from Boston Dynamics is the artistic component. We know that Atlas can do some practical tasks, and we know it can do some gymnastics and some parkour, but dancing is certainly something new. To learn more about what it took to make these dancing robots happen (and it’s much more complicated than it might seem), we spoke with Aaron Saunders, Boston Dynamics’ VP of Engineering.

Saunders started at Boston Dynamics in 2003, meaning that he’s been a fundamental part of a huge number of Boston Dynamics’ robots, even the ones you may have forgotten about. Remember LittleDog, for example? A team of two designed and built that adorable little quadruped, and Saunders was one of them.

While he’s been part of the Atlas project since the beginning (and had a hand in just about everything else that Boston Dynamics works on), Saunders has spent the last few years leading the Atlas team specifically, and he was kind enough to answer our questions about their dancing robots.

IEEE Spectrum: What’s your sense of how the Internet has been reacting to the video?

Aaron Saunders: We have different expectations for the videos that we make; this one was definitely anchored in fun for us. The response on YouTube was record-setting for us: We received hundreds of emails and calls with people expressing their enthusiasm, and also sharing their ideas for what we should do next, what about this song, what about this dance move, so that was really fun. My favorite reaction was one that I got from my 94-year-old grandma, who watched the video on YouTube and then sent a message through the family asking if I’d taught the robot those sweet moves. I think this video connected with a broader audience, because it mixed the old-school music with new technology.

We haven’t seen Atlas move like this before—can you talk about how you made it happen?

We started by working with dancers and a choreographer to create an initial concept for the dance by composing and assembling a routine. One of the challenges, and probably the core challenge for Atlas in particular, was adjusting human dance moves so that they could be performed on the robot. To do that, we used simulation to rapidly iterate through movement concepts while soliciting feedback from the choreographer to reach behaviors that Atlas had the strength and speed to execute. It was very iterative—they would literally dance out what they wanted us to do, and the engineers would look at the screen and go “that would be easy” or “that would be hard” or “that scares me.” And then we’d have a discussion, try different things in simulation, and make adjustments to find a compatible set of moves that we could execute on Atlas.

Throughout the project, the time frame for creating those new dance moves got shorter and shorter as we built tools, and as an example, eventually we were able to use that toolchain to create one of Atlas’ ballet moves in just one day, the day before we filmed, and it worked. So it’s not hand-scripted or hand-coded, it’s about having a pipeline that lets you take a diverse set of motions, that you can describe through a variety of different inputs, and push them through and onto the robot.
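One small, concrete piece of a toolchain like the one Saunders describes might be an automated feasibility check: resample a choreographer's keyframed joint trajectory and flag frames that exceed the robot's position or velocity limits, roughly the automated version of an engineer saying "that would be easy" or "that would be hard." The function, limits, and trajectory below are invented for illustration, not part of Boston Dynamics' pipeline.

```python
import numpy as np

def check_trajectory(times, angles, pos_limit, vel_limit):
    """Return a boolean mask: True where a keyframe respects both the
    joint position limit and the joint velocity limit."""
    times, angles = np.asarray(times, float), np.asarray(angles, float)
    vels = np.gradient(angles, times)        # finite-difference joint velocity
    pos_ok = np.abs(angles) <= pos_limit
    vel_ok = np.abs(vels) <= vel_limit
    return pos_ok & vel_ok

t = [0.0, 0.5, 1.0, 1.5]
knee = [0.0, 1.2, 2.8, 0.5]                  # radians; hypothetical dance move
feasible = check_trajectory(t, knee, pos_limit=2.5, vel_limit=3.0)
print(feasible)  # the last two frames exceed the position/velocity limits
```

A real pipeline would also check torque, self-collision, and balance in simulation, but cheap limit screening like this is a natural first filter.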

Image: Boston Dynamics

Were there some things that were particularly difficult to translate from human dancers to Atlas? Or, things that Atlas could do better than humans?

Some of the spinning turns in the ballet parts took more iterations to get to work, because they were the furthest from leaping and running and some of the other things that we have more experience with, so they challenged both the machine and the software in new ways. We definitely learned not to underestimate how flexible and strong dancers are—when you take elite athletes and you try to do what they do but with a robot, it’s a hard problem. It’s humbling. Fundamentally, I don’t think that Atlas has the range of motion or power that these athletes do, although we continue developing our robots towards that, because we believe that in order to broadly deploy these kinds of robots commercially, and eventually in a home, we think they need to have this level of performance.

One thing that robots are really good at is doing something over and over again the exact same way. So once we dialed in what we wanted to do, the robots could just do it again and again as we played with different camera angles.

I can understand how you could use human dancers to help you put together a routine with Atlas, but how did that work with Spot, and particularly with Handle?

I think the people we worked with actually had a lot of talent for thinking about motion, and thinking about how to express themselves through motion. And our robots do motion really well—they’re dynamic, they’re exciting, they balance. So I think what we found was that the dancers connected with the way the robots moved, and then shaped that into a story, and it didn’t matter whether there were two legs or four legs. When you don’t necessarily have a template of animal motion or human behavior, you just have to think a little harder about how to go about doing something, and that’s true for more pragmatic commercial behaviors as well.


How does the experience that you get teaching robots to dance, or to do gymnastics or parkour, inform your approach to robotics for commercial applications?

We think that the skills inherent in dance and parkour, like agility, balance, and perception, are fundamental to a wide variety of robot applications. Maybe more importantly, finding that intersection between building a new robot capability and having fun has been Boston Dynamics’ recipe for robotics—it’s a great way to advance.

One good example is how when you push limits by asking your robots to do these dynamic motions over a period of several days, you learn a lot about the robustness of your hardware. Spot, through its productization, has become incredibly robust, and required almost no maintenance—it could just dance all day long once you taught it to. And the reason it’s so robust today is because of all those lessons we learned from previous things that may have just seemed weird and fun. You’ve got to go into uncharted territory to even know what you don’t know.

Image: Boston Dynamics

It’s often hard to tell from watching videos like these how much time it took to make things work the way you wanted them to, and how representative they are of the actual capabilities of the robots. Can you talk about that?

Let me try to answer in the context of this video, but I think the same is true for all of the videos that we post. We work hard to make something, and once it works, it works. For Atlas, most of the robot control existed from our previous work, like the work that we’ve done on parkour, which sent us down a path of using model predictive controllers that account for dynamics and balance. We used those to run on the robot a set of dance steps that we’d designed offline with the dancers and choreographer. So, a lot of time, months, we spent thinking about the dance and composing the motions and iterating in simulation.

Dancing required a lot of strength and speed, so we even upgraded some of Atlas’ hardware to give it more power. Dance might be the highest power thing we’ve done to date—even though you might think parkour looks way more explosive, the amount of motion and speed that you have in dance is incredible. That also took a lot of time over the course of months; creating the capability in the machine to go along with the capability in the algorithms.

Once we had the final sequence that you see in the video, we only filmed for two days. Much of that time was spent figuring out how to move the camera through a scene with a bunch of robots in it to capture one continuous two-minute shot, and while we ran and filmed the dance routine multiple times, we could repeat it quite reliably. There was no cutting or splicing in that opening two-minute shot.

There were definitely some failures in the hardware that required maintenance, and our robots stumbled and fell down sometimes. These behaviors are not meant to be productized or to be 100 percent reliable, but they’re definitely repeatable. We try to be honest with showing things that we can do, not a snippet of something that we did once. I think there’s an honesty required in saying that you’ve achieved something, and that’s definitely important for us.

You mentioned that Spot is now robust enough to dance all day. How about Atlas? If you kept on replacing its batteries, could it dance all day, too?

Atlas, as a machine, is still, you know… there are only a handful of them in the world, they’re complicated, and reliability was not a main focus. We would definitely break the robot from time to time. But the robustness of the hardware, in the context of what we were trying to do, was really great. And without that robustness, we wouldn’t have been able to make the video at all. I think Atlas is a little more like a helicopter, where there’s a higher ratio between the time you spend doing maintenance and the time you spend operating. Whereas with Spot, the expectation is that it’s more like a car, where you can run it for a long time before you have to touch it.

When you’re teaching Atlas to do new things, is it using any kind of machine learning? And if not, why not?

As a company, we’ve explored a lot of things, but Atlas is not using a learning controller right now. I expect that a day will come when we will. Atlas’ current dance performance uses a mixture of what we like to call reflexive control, which is a combination of reacting to forces, online and offline trajectory optimization, and model predictive control. We leverage these techniques because they’re a reliable way of unlocking really high performance stuff, and we understand how to wield these tools really well. We haven’t found the end of the road in terms of what we can do with them.

We plan on using learning to extend and build on the foundation of software and hardware that we’ve developed, but I think that we, along with the community, are still trying to figure out where the right places to apply these tools are. I think you’ll see that as part of our natural progression.
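Model predictive control, one of the techniques Saunders names, is easy to show in miniature: at every control step, search over candidate force sequences on a short horizon, apply only the first force of the best sequence, then replan from the new state. The point-mass model and brute-force search below are toys for illustration; Atlas' MPC solves a far harder optimization over whole-body dynamics.

```python
import numpy as np

def step(x, u, dt=0.1):
    """One step of 1-D point-mass dynamics: x = [position, velocity]."""
    pos, vel = x
    return np.array([pos + dt * vel, vel + dt * u])

def mpc_action(x, target, horizon=5, forces=(-1.0, 0.0, 1.0)):
    """Exhaustively score every force sequence over the horizon
    (3^5 = 243 candidates) and return the first force of the best one."""
    best_u, best_cost = 0.0, float("inf")
    for seq in np.array(np.meshgrid(*[forces] * horizon)).T.reshape(-1, horizon):
        xs, cost = x.copy(), 0.0
        for u in seq:
            xs = step(xs, u)
            cost += (xs[0] - target) ** 2 + 0.01 * u ** 2  # tracking + effort
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

x = np.array([0.0, 0.0])
for _ in range(60):                 # closed loop: replan at every step
    x = step(x, mpc_action(x, target=1.0))
print(np.round(x, 2))
```

Replanning every step is what lets MPC absorb disturbances: if the mass gets shoved, the next plan simply starts from the shoved state.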

Image: Boston Dynamics

Much of Atlas’ dynamic motion comes from its lower body at the moment, but parkour makes use of upper body strength and agility as well, and we’ve seen some recent concept images showing Atlas doing vaults and pullups. Can you tell us more?

Humans and animals do amazing things using their legs, but they do even more amazing things when they use their whole bodies. I think parkour provides a fantastic framework that allows us to progress towards whole body mobility. Walking and running was just the start of that journey. We’re progressing through more complex dynamic behaviors like jumping and spinning, that’s what we’ve been working on for the last couple of years. And the next step is to explore how using arms to push and pull on the world could extend that agility.

One of the missions that I’ve given to the Atlas team is to start working on leveraging the arms as much as we leverage the legs to enhance and extend our mobility, and I’m really excited about what we’re going to be working on over the next couple of years, because it’s going to open up a lot more opportunities for us to do exciting stuff with Atlas.

What’s your perspective on hydraulic versus electric actuators for highly dynamic robots?

Across my career at Boston Dynamics, I’ve felt passionately connected to so many different types of technology, but I’ve settled into a place where I really don’t think this is an either-or conversation anymore. I think the selection of actuator technology really depends on the size of the robot that you’re building, what you want that robot to do, where you want it to go, and many other factors. Ultimately, it’s good to have both kinds of actuators in your toolbox, and I love having access to both—and we’ve used both with great success to make really impressive dynamic machines.

I think the only delineation between hydraulic and electric actuators that appears to be distinct for me is probably in scale. It’s really challenging to make tiny hydraulic things because the industry just doesn’t do a lot of that, and the reciprocal is that the industry also doesn’t tend to make massive electrical things. So, you may find that to be a natural division between these two technologies.

Besides what you’re working on at Boston Dynamics, what recent robotics research are you most excited about?

For us as a company, we really love to follow advances in sensing, computer vision, terrain perception, these are all things where the better they get, the more we can do. For me personally, one of the things I like to follow is manipulation research, and in particular manipulation research that advances our understanding of complex, friction-based interactions like sliding and pushing, or moving compliant things like ropes.

We’re seeing a shift from just pinching things, lifting them, moving them, and dropping them, to much more meaningful interactions with the environment. Research in that type of manipulation I think is going to unlock the potential for mobile manipulators, and I think it’s really going to open up the ability for robots to interact with the world in a rich way.

Is there anything else you’d like people to take away from this video?

For me personally, and I think it’s because I spend so much of my time immersed in robotics and have a deep appreciation for what a robot is and what its capabilities and limitations are, one of my strong desires is for more people to spend more time with robots. We see a lot of opinions and ideas from people looking at our videos on YouTube, and it seems to me that if more people had opportunities to think about and learn about and spend time with robots, that new level of understanding could help them imagine new ways in which robots could be useful in our daily lives. I think the possibilities are really exciting, and I just want more people to be able to take that journey.

Posted in Human Robots