#438080 Boston Dynamics’ Spot Robot Is Now ...

Boston Dynamics has been working on an arm for its Spot quadruped for at least five years now. There have been plenty of teasers along the way, including this 45-second clip from early 2018 of Spot using its arm to open a door, which at 85 million views seems to be Boston Dynamics’ most popular video ever by a huge margin. Obviously, there’s a substantial amount of interest in turning Spot from a highly dynamic but mostly passive sensor platform into a mobile manipulator that can interact with its environment.

As anyone who’s done mobile manipulation will tell you, actually building an arm is just the first step—the really tricky part is getting that arm to do exactly what you want it to do. In particular, Spot’s arm needs to be able to interact with the world with some amount of autonomy in order to be commercially useful, because you can’t expect a human (remote or otherwise) to spend all their time positioning individual joints or whatever to pick something up. So the real question about this arm is whether Boston Dynamics has managed to get it to a point where it’s autonomous enough that users with relatively little robotics experience will be able to get it to do useful tasks without driving themselves nuts.

Today, Boston Dynamics is announcing commercial availability of the Spot arm, along with improved software called Scout and a self-charging dock that’ll give the robot even more independence. And to figure out exactly what Spot’s new arm can do, we spoke with Zachary Jackowski, Spot Chief Engineer at Boston Dynamics.

Although Boston Dynamics’ focus has been on dynamic mobility and legged robots, the company has been working on manipulation for a very long time. We first saw an arm prototype on an early iteration of Spot in 2016, where it demonstrated some impressive functionality, including loading a dishwasher and fetching a beer in a way that only resulted in a minor catastrophe. But we’re guessing that Spot’s arm can trace its history back to BigDog’s crazy powerful hydraulic face-arm, which was causing mayhem with cinder blocks back in 2013:

Spot’s arm is not quite that powerful (it has to drag cinder blocks along the ground rather than fling them into space), but you can certainly see the resemblance. Here’s the video that Boston Dynamics posted yesterday to introduce Spot’s new arm:

A couple of things jumped out from this video right away. First, Spot is doing whole body manipulation with its arm, as opposed to just acting as a four-legged base that brings the arm where it needs to go. Planning looks to be very tightly integrated, such that if you ask the robot to manipulate an object, its arm, legs, and torso all work together to optimize that manipulation. Also, when Spot flips that electrical switch, you see the robot successfully grasp the switch, and then reposition its body in a way that looks like it provides better leverage for the flip, which is a neat trick. It looks like it may be able to use the strength of its legs to augment the strength of its arm, as when it’s dragging the cinder block around, which is surely an homage to BigDog. The digging of a hole is particularly impressive. But again, the real question is how much of this is autonomous or semi-autonomous in a way that will be commercially useful?

Before we get to our interview with Spot Chief Engineer Zack Jackowski, it’s worth watching one more video that Boston Dynamics shared with us:

This is notable because Spot is opening a door that’s not ADA compliant, and the robot is doing it with a simple two-finger gripper. Most robots you see interacting with doors rely on ADA compliant hardware, meaning (among other things) a handle that can be pushed rather than a knob that has to be twisted, because it’s much more challenging for a robot to grasp and twist a smooth round door knob than it is to just kinda bash down on a handle. That capability, combined with Spot being able to pass through a spring-loaded door, potentially opens up a much wider array of human environments to the robot, and that’s where we started our conversation with Jackowski.

IEEE Spectrum: At what point did you decide that for Spot’s arm to be useful, it had to be able to handle round door knobs?

Zachary Jackowski: We're like a lot of roboticists, where someone in a meeting about manipulation would say “it's time for the round doorknob” and people would start groaning a little bit. But the reality is that, in order to make a robot useful, you have to engage with the environments that users have. Spot’s arm uses a very simple gripper—it’s a one degree of freedom gripper, but a ton of thought has gone into all of the fine geometric contours of it such that it can grab that ADA compliant lever handle, and it’ll also do an enclosing grasp around a round door knob. The major point of a robot like Spot is to engage with the environment you have, and so you can’t cut out stuff like round door knobs.

We're thrilled to be launching the arm and getting it out with users and to have them start telling us what doors it works really well on, and what they're having trouble with. And we're going to be working on rapidly improving all this stuff. We went through a few campaigns of like, “this isn’t ready until we can open every single door at Boston Dynamics!” But every single door at Boston Dynamics and at our test lab is a small fraction of all the doors in the world. So we're prepared to learn a lot this year.

When we see Spot open a door, or when it does those other manipulation behaviors in the launch video, how much of that is autonomous, how much is scripted, and to what extent is there a human in the loop?

All of the scenes where the robot does a pick, like the snow scene or the laundry scene, that is actually an almost fully integrated autonomous behavior that has a bit of a script wrapped around it. We trained a detector for an object, and the robot is identifying that object in the environment, picking it, and putting it in the bin all autonomously. The scripted part of that is telling the robot to perform a series of picks.

One of the things that we’re excited about, and that roboticists have been excited about going back probably all the way to the DRC, is semi-autonomous manipulation. And so we have modes built into the interface where if you see an object that you want the robot to grab, all you have to do is tap that object on the screen, and the robot will walk up to it, use the depth camera in its gripper to capture a depth map, and plan a grasp on its own in real time. That’s all built-in, too.
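
To picture the geometry behind that tap-to-grasp flow, here is a minimal sketch of the two steps involved: deproject the tapped pixel through the gripper’s depth camera into a 3D point, then place an approach pose near it. It is only an illustration built on the standard pinhole camera model, not Boston Dynamics’ code; the intrinsics, the extrinsics, and the simple top-down approach are all assumptions.

```python
import numpy as np

def deproject_pixel(u, v, depth_m, fx, fy, cx, cy):
    """Convert a tapped pixel plus its depth reading into a 3D point in the
    depth camera's frame, using the standard pinhole camera model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

def plan_top_down_grasp(point_cam, cam_to_body):
    """Toy grasp 'planner': transform the target into the body frame and put
    the gripper at a fixed standoff above it. Real planners fit the grasp to
    the local surface geometry captured in the depth map instead."""
    target_body = cam_to_body[:3, :3] @ point_cam + cam_to_body[:3, 3]
    approach_body = target_body + np.array([0.0, 0.0, 0.10])  # 10 cm above
    return approach_body, target_body

# Example: the user taps pixel (320, 240) and the depth camera reads 0.85 m.
cam_to_body = np.eye(4)          # placeholder extrinsics (assumed identity)
point = deproject_pixel(320, 240, 0.85, fx=380.0, fy=380.0, cx=320.0, cy=240.0)
approach, target = plan_top_down_grasp(point, cam_to_body)
print("approach pose:", approach, "grasp target:", target)
```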

The jump rope—robots don’t just go and jump rope on their own. We scripted an arm motion to move the rope, and wrote a script using our API to coordinate all three robots. Drawing “Boston Dynamics” in chalk in our parking lot was scripted also. One of our engineers wrote a really cool G-code interpreter that vectorizes graphics so that Spot can draw them.
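
The chalk-drawing pipeline Jackowski mentions, a G-code interpreter that turns vector graphics into robot motion, boils down at its core to parsing linear moves into waypoints. Here is a hedged sketch of that parsing step; the G0/G1 command subset and the millimeter units are assumptions, and the real interpreter surely handles far more.

```python
import re

def gcode_to_waypoints(gcode_text, scale=0.001):
    """Parse G0/G1 linear moves into (x, y, pen_down) waypoints.
    G0 = rapid move with the chalk lifted, G1 = drawing move.
    Coordinates are assumed to be millimeters; scale converts to meters."""
    waypoints = []
    x = y = 0.0
    for line in gcode_text.splitlines():
        line = line.split(";")[0].strip()           # strip comments
        if not line:
            continue
        cmd = line.split()[0].upper()
        if cmd not in ("G0", "G00", "G1", "G01"):
            continue                                # ignore everything else
        for axis, value in re.findall(r"([XY])(-?\d+\.?\d*)", line.upper()):
            if axis == "X":
                x = float(value)
            else:
                y = float(value)
        pen_down = cmd in ("G1", "G01")
        waypoints.append((x * scale, y * scale, pen_down))
    return waypoints

sample = """
G0 X0 Y0        ; travel to the start of a stroke
G1 X0 Y100      ; draw upward 100 mm
G1 X60 Y100
"""
print(gcode_to_waypoints(sample))
```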

So for an end user, if you wanted Spot to autonomously flip some switches for you, you’d just have to train Spot on your switches, and then Spot could autonomously perform the task?

There are a couple of ways that task could break down depending on how you’re interfacing with the robot. If you’re a tablet user, you’d probably just identify the switch yourself on the tablet’s screen, and the robot will figure out the grasp, and grasp it. Then you’ll enter a constrained manipulation mode on the tablet, and the robot will be able to actuate the switch. But the robot will take care of the complicated controls aspects, like figuring out how hard it has to pull, the center of rotation of the switch, and so on.
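
One way a constrained-manipulation controller can work out something like a switch’s center of rotation is to record where the gripper goes as the mechanism moves and fit a circle to those points. The snippet below is a generic least-squares sketch of that idea, not Boston Dynamics’ actual method; the lever geometry in the example is made up.

```python
import numpy as np

def fit_rotation_center(points):
    """Kasa circle fit: given 2D gripper positions recorded while a lever,
    valve, or switch is being moved, estimate the center and radius of the
    arc they trace. Solves x^2 + y^2 + a*x + b*y + c = 0 by least squares."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = np.array([-a / 2.0, -b / 2.0])
    radius = np.sqrt(center @ center - c)
    return center, radius

# Noisy samples from a lever rotating about (0.50, 0.20) with a 0.15 m arm.
theta = np.linspace(0.0, 1.2, 20)
samples = np.column_stack([0.50 + 0.15 * np.cos(theta),
                           0.20 + 0.15 * np.sin(theta)])
samples += np.random.normal(scale=0.002, size=samples.shape)
center, radius = fit_rotation_center(samples)
print("estimated center:", center, "radius:", radius)
```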

The video of Spot digging was pretty cool—how did that work?

That’s mostly a scripted behavior. There are some really interesting control systems topics in there, like how you’d actually do the right kinds of force control while you insert the trowel into the dirt, and how to maintain robot stability while you do it. The higher level task of how to make a good hole in the dirt—that’s scripted. But the part of the problem that’s actually digging, you need the right control system to actually do that, or you’ll dig your trowel into the ground and flip your robot over.
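
That trowel-insertion problem is a classic force-control setting. As a rough illustration of the idea (and only that; the gains, target force, and I/O hooks below are invented for the example), an admittance loop commands downward motion and scales it by the measured force error, so the tool neither stalls on hard ground nor plunges through soft ground.

```python
def admittance_insert(read_force_n, send_velocity_mps, target_force_n=40.0,
                      gain=0.002, max_speed=0.05, steps=500):
    """Single-axis admittance control for pressing a tool into soil: command
    a downward velocity proportional to the force error, clamped to a safe
    speed. read_force_n and send_velocity_mps are hypothetical hooks standing
    in for the robot's wrist force sensor and velocity interface."""
    for _ in range(steps):            # a real loop would also sleep one tick
        error = target_force_n - read_force_n()   # positive: push harder
        v = max(-max_speed, min(max_speed, gain * error))
        send_velocity_mps(v)          # positive velocity = deeper insertion
    send_velocity_mps(0.0)

# Toy stand-in for hardware: reaction force grows with insertion depth.
depth = 0.0
def fake_force():
    return 800.0 * depth              # roughly 800 N per meter of insertion

def fake_velocity(v):
    global depth
    depth = max(0.0, depth + v * 0.01)  # integrate at a 10 ms tick

admittance_insert(fake_force, fake_velocity)
print("settled depth (m):", round(depth, 3))
```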

The last time we saw Boston Dynamics robots flipping switches and turning valves I think might have been during the DRC in 2015, when they had expert robot operators with control over every degree of freedom. How are things different now with Spot, and will non-experts in the commercial space really be able to get the robot to do useful tasks?

A lot of the things, like “pick the stuff up in the room,” or “turn that switch,” can all be done by a lightly trained operator using just the tablet interface. If you want to actually command all of Spot’s arm degrees of freedom, you can do that—not through the tablet, but the API does expose all of it. That’s actually a notable difference from the base robot; we’ve never opened up the part of the API that lets you command individual leg degrees of freedom, because we don’t think it’s productive for someone to do that. The arm is a little bit different. There are a lot of smart people working on arm motion planning algorithms, and maybe you want to plan your arm trajectory in a super precise way and then do a DRC-style interface where you click to approve it. You can do all that through the API if you want, but fundamentally, it’s also user-friendly. It follows our general API design philosophy of giving you the highest-level pieces of the toolbox that will enable you to solve a complex problem that we haven’t thought of.

Looking back on it now, it’s really cool to see, after so many years, robots do the stuff that Gill Pratt was excited about kicking off with the DRC. And now it’s just a thing you can buy.

Is Spot’s arm safe?

You should follow the same safety rules that you’d follow when working with Spot normally, and that’s that you shouldn’t get within two meters of the robot when it’s powered on. Spot is not a cobot. You shouldn’t hug it. Fundamentally, the places where the robot is the most valuable are places where people don’t want to be, or shouldn’t be.

We’ve seen how people reacted to earlier videos of Spot using its arm—can you help us set some reasonable expectations for what this means for Spot?

You know, it gets right back to the normal assumptions about our robots that people make that aren’t quite reality. All of this manipulation work we’re doing— the robot’s really acting as a tool. Even if it’s an autonomous behavior, it’s a tool. The robot is digging a hole because it’s got a set of instructions that say “apply this much force over this much distance here, here, and here.”

It’s not digging a hole and planting a tree because it loves trees, as much as I’d love to build a robot that works like that.

Photo: Boston Dynamics

There isn’t too much to say about the dock, except that it’s a requirement for making Spot long-term autonomous. The uncomfortable-looking charging contacts that Spot impales itself on also include hardwired network connectivity, which is important because Spot often comes back home with a huge amount of data that all needs to be offloaded and processed. Docking and undocking are autonomous—as soon as the robot sees the fiducial markers on the dock, auto docking is enabled and it takes one click to settle the robot down.
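
The fiducial-triggered docking described above largely reduces to a frame transform: once the marker’s pose is known in the robot’s frame, compute a standoff waypoint in front of the dock and drive to it. Here is a generic geometric sketch; the frame convention and the half-meter standoff are assumptions, not Spot’s actual docking logic.

```python
import numpy as np

def dock_approach_waypoint(body_T_fiducial, standoff_m=0.5):
    """Given the dock fiducial's pose in the robot's body frame as a 4x4
    homogeneous transform, return a waypoint standoff_m in front of the
    marker (along its outward z axis) and the heading that faces it."""
    marker_pos = body_T_fiducial[:3, 3]
    marker_z = body_T_fiducial[:3, 2]      # marker's outward normal
    waypoint = marker_pos + standoff_m * marker_z
    heading = np.arctan2(marker_pos[1] - waypoint[1],
                         marker_pos[0] - waypoint[0])
    return waypoint, heading

# Example: dock marker detected 2 m ahead of the robot, facing back toward it
# (its outward normal points along the body frame's -x axis).
body_T_fiducial = np.array([[0.0, 0.0, -1.0, 2.0],
                            [0.0, 1.0,  0.0, 0.0],
                            [1.0, 0.0,  0.0, 0.3],
                            [0.0, 0.0,  0.0, 1.0]])
waypoint, heading = dock_approach_waypoint(body_T_fiducial)
print("drive to", waypoint, "then face heading", heading, "rad")
```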

During a brief remote demo, we also learned some other interesting things about Spot’s updated remote interface. It’s very latency tolerant, since you don’t have to drive the robot directly (although you can if you want to). Click a point on the camera view and Spot will move there autonomously while avoiding obstacles, meaning that even if you’re dealing with seconds of lag, the robot will continue making safe progress. This will be especially important if (when?) Spot starts exploring the Moon.

The remote interface also has an option to adjust how close Spot can get to obstacles, or to turn the obstacle avoidance off altogether. The latter functionality is useful if Spot sees something as an obstacle that really isn’t, like a curtain, while the former is useful if the robot is operating in an environment where it needs to give an especially wide berth to objects that could be dangerous to run into. “The robot’s not perfect—robots will never be perfect,” Jackowski reminds us, which is something we really (seriously) appreciate hearing from folks working on powerful, dynamic robots. “No matter how good the robot is, you should always de-risk as much as possible.”

Another part of that de-risking is having the user let Spot know when it’s about to go up or down some stairs by putting it into “Stair Mode” with a toggle switch in the remote interface. Stairs are still a challenge for Spot, and Stair Mode slows the robot down and encourages it to pitch its body more aggressively to get a better view of the stairs. You’re encouraged to use Stair Mode, and also encouraged to send Spot up and down stairs with its “head” pointing up the stairs both ways, but these are not requirements for stair navigation—if you want to, you can send Spot down stairs head first without putting it in Stair Mode. Jackowski says that eventually, Spot will detect stairways by itself even when not in Stair Mode and adjust itself accordingly, but for now, that de-risking is solidly in the hands of the user.

Spot’s sensor payload, which is what we were trying out for the demo, provided a great opportunity for us to hear Spot STOMP STOMP STOMPING all over the place, which was also an opportunity for us to ask Jackowski why they can’t make Spot a little quieter. “It’s advantageous for Spot to step a little bit hard for the same reason it’s advantageous for you to step a little bit hard if you’re walking around blindfolded—that reason is that it really lets you know where the ground is, particularly when you’re not sure what to expect.” He adds, “It’s all in the name of robustness— the robot might be a little louder, but it’s a little more sure of its footing.”

Boston Dynamics isn’t yet ready to disclose the price of an arm-equipped Spot, but if you’re a potential customer, now is the time to contact the Boston Dynamics sales team to ask them about it. As a reminder, the base model of Spot costs US $74,500, with extra sensing or compute adding a substantial premium on top of that.

There will be a livestream launch event taking place at 11am ET today, during which Boston Dynamics’ CEO Robert Playter, VP of Marketing Michael Perry, and other folks from Boston Dynamics will make presentations on this new stuff. It’ll be live at this link, or you can watch it below.

#437974 China Wants to Be the World’s AI ...

China’s star has been steadily rising for decades. Besides slashing extreme poverty rates from 88 percent to under 2 percent in just 30 years, the country has become a global powerhouse in manufacturing and technology. Its pace of growth may slow due to an aging population, but China is nonetheless one of the world’s biggest players in multiple cutting-edge tech fields.

One of these fields, and perhaps the most significant, is artificial intelligence. The Chinese government announced a plan in 2017 to become the world leader in AI by 2030, and has since poured billions of dollars into AI projects and research across academia, government, and private industry. The government’s venture capital fund is investing over $30 billion in AI; the northeastern city of Tianjin budgeted $16 billion for advancing AI; and a $2 billion AI research park is being built in Beijing.

On top of these huge investments, the government and private companies in China have access to an unprecedented quantity of data, on everything from citizens’ health to their smartphone use. WeChat, a multi-functional app where people can chat, date, send payments, hail rides, read news, and more, gives the CCP full access to user data upon request; as one BBC journalist put it, WeChat “was ahead of the game on the global stage and it has found its way into all corners of people’s existence. It could deliver to the Communist Party a life map of pretty much everybody in this country, citizens and foreigners alike.” And that’s just one (albeit big) source of data.

Many believe these factors are giving China a serious leg up in AI development, even providing enough of a boost that its progress will surpass that of the US.

But there’s more to AI than data, and there’s more to progress than investing billions of dollars. Analyzing China’s potential to become a world leader in AI—or in any technology that requires consistent innovation—from multiple angles provides a more nuanced picture of its strengths and limitations. In a June 2020 article in Foreign Affairs, Oxford fellows Carl Benedikt Frey and Michael Osborne argued that China’s big advantages may not actually be that advantageous in the long run—and its limitations may be very limiting.

Moving the AI Needle
To get an idea of who’s likely to take the lead in AI, it could help to first consider how the technology will advance beyond its current state.

To put it plainly, AI is somewhat stuck at the moment. Algorithms and neural networks continue to achieve new and impressive feats—like DeepMind’s AlphaFold accurately predicting protein structures or OpenAI’s GPT-3 writing convincing articles based on short prompts—but for the most part these systems’ capabilities are still defined as narrow intelligence: completing a specific task for which the system was painstakingly trained on loads of data.

(It’s worth noting here that some have speculated OpenAI’s GPT-3 may be an exception, the first example of machine intelligence that, while not “general,” has surpassed the definition of “narrow”; the algorithm was trained to write text, but ended up being able to translate between languages, write code, autocomplete images, do math, and perform other language-related tasks it wasn’t specifically trained for. However, all of GPT-3’s capabilities are limited to skills it learned in the language domain, whether spoken, written, or programming language.)

Both AlphaFold’s and GPT-3’s successes were due largely to the massive datasets they were trained on; no revolutionary new training methods or architectures were involved. If all it was going to take to advance AI was a continuation or scaling-up of this paradigm—more input data yields increased capability—China could well have an advantage.

But one of the biggest hurdles AI needs to clear to advance in leaps and bounds rather than baby steps is precisely this reliance on extensive, task-specific data. Other significant challenges include the technology’s fast approach to the limits of current computing power and its immense energy consumption.

Thus, while China’s trove of data may give it an advantage now, it may not be much of a long-term foothold on the climb to AI dominance. It’s useful for building products that incorporate or rely on today’s AI, but not for pushing the needle on how artificially intelligent systems learn. WeChat data on users’ spending habits, for example, would be valuable in building an AI that helps people save money or suggests items they might want to purchase. It will enable (and already has enabled) highly tailored products that will earn their creators and the companies that use them a lot of money.

But data quantity isn’t what’s going to advance AI. As Frey and Osborne put it, “Data efficiency is the holy grail of further progress in artificial intelligence.”

To that end, research teams in academia and private industry are working on ways to make AI less data-hungry. New training methods like one-shot learning and less-than-one-shot learning have begun to emerge, along with myriad efforts to make AI that learns more like the human brain.
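
To make the data-efficiency idea concrete, one common framing of one-shot classification is to compare a new example against a single stored embedding per class rather than retraining on thousands of labels. The toy sketch below uses made-up vectors and a nearest-neighbor rule; it illustrates the concept, not any particular published method.

```python
import numpy as np

def one_shot_classify(query_embedding, support):
    """Assign the query to whichever class's single support embedding is
    most similar in cosine terms. 'support' maps class name -> one embedding."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(support, key=lambda name: cosine(query_embedding, support[name]))

# One labeled embedding per class (in practice these would come from a
# pretrained encoder; here they are made-up 4-dimensional vectors).
support = {
    "valve":  np.array([0.9, 0.1, 0.0, 0.2]),
    "switch": np.array([0.1, 0.8, 0.3, 0.0]),
}
query = np.array([0.7, 0.2, 0.1, 0.1])
print(one_shot_classify(query, support))        # -> "valve"
```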

While not insignificant, these advancements still fall into the “baby steps” category. No one knows how AI is going to progress beyond these small steps—and that uncertainty, in Frey and Osborne’s opinion, is a major speed bump on China’s fast-track to AI dominance.

How Innovation Happens
A lot of great inventions have happened by accident, and some of the world’s most successful companies started in garages, dorm rooms, or similarly low-budget, nondescript circumstances (including Google, Facebook, Amazon, and Apple, to name a few). Innovation, the authors point out, often happens “through serendipity and recombination, as inventors and entrepreneurs interact and exchange ideas.”

Frey and Osborne argue that although China has great reserves of talent and a history of building on technologies conceived elsewhere, it doesn’t yet have a glowing track record in terms of innovation. They note that of the 100 most-cited patents from 2003 to present, none came from China. Giants Tencent, Alibaba, and Baidu are all wildly successful in the Chinese market, but they’re rooted in technologies or business models that came out of the US and were tweaked for the Chinese population.

“The most innovative societies have always been those that allowed people to pursue controversial ideas,” Frey and Osborne write. China’s heavy censorship of the internet and surveillance of citizens don’t quite encourage the pursuit of controversial ideas. The country’s social credit system rewards people who follow the rules and punishes those who step out of line. Frey adds that top-down execution of problem-solving is effective when the problem at hand is clearly defined—and the next big leaps in AI are not.

It’s debatable how strongly a culture of social conformism can impact technological innovation, and of course there can be exceptions. But a relevant historical example is the Soviet Union, which, despite heavy investment in science and technology that briefly rivaled the US in fields like nuclear energy and space exploration, ended up lagging far behind primarily due to political and cultural factors.

Similarly, China’s focus on computer science in its education system could give it an edge—but, as Frey told me in an email, “The best students are not necessarily the best researchers. Being a good researcher also requires coming up with new ideas.”

Winner Take All?
Beyond the question of whether China will achieve AI dominance is the issue of how it will use the powerful technology. Several of the ways China has already implemented AI could be considered morally questionable, from facial recognition systems used aggressively against ethnic minorities to smart glasses for policemen that can pull up information about whoever the wearer looks at.

This isn’t to say the US would use AI for purely ethical purposes. The military’s Project Maven, for example, used artificially intelligent algorithms to identify insurgent targets in Iraq and Syria, and American law enforcement agencies are also using (mostly unregulated) facial recognition systems.

It’s conceivable that “dominance” in AI won’t go to one country; each nation could meet milestones in different ways, or meet different milestones. Researchers from both countries, at least in the academic sphere, could (and likely will) continue to collaborate and share their work, as they’ve done on many projects to date.

If one country does take the lead, it will certainly see some major advantages as a result. Brookings Institution fellow Indermit Gill goes so far as to say that whoever leads in AI in 2030 will “rule the world” until 2100. But Gill points out that in addition to considering each country’s strengths, we should consider how willing they are to improve upon their weaknesses.

While China leads in investment and the US in innovation, both nations are grappling with huge economic inequalities that could negatively impact technological uptake. “Attitudes toward the social change that accompanies new technologies matter as much as the technologies, pointing to the need for complementary policies that shape the economy and society,” Gill writes.

Will China’s leadership be willing to relax its grip to foster innovation? Will the US business environment be enough to compete with China’s data, investment, and education advantages? And can both countries find a way to distribute technology’s economic benefits more equitably?

Time will tell, but it seems we’ve got our work cut out for us—and China does too.

Image Credit: Adam Birkett on Unsplash

#437946 Video Friday: These Robots Are Ready for ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online]
RoboSoft 2021 – April 12-16, 2021 – [Online]
Let us know if you have suggestions for next week, and enjoy today’s videos.

Is it too late to say, “Happy Holidays”? Yes! Is it too late for a post packed with holiday robot videos? Never!

The Autonomous Systems Lab at ETH Zurich wishes everyone a Merry Christmas and a Happy 2021!

Now you know the best-kept secret in robotics: the ETH Zurich Autonomous Systems Lab is a shack in the woods. With an elevator.

[ ASL ]

We have had to do things differently this year, and the holiday season is no exception. But through it all, we still found ways to be together. From all of us at NATO, Happy Holidays. After training in the snow and mountains of Iceland, an EOD team returns to base. Passing signs reminding them to ‘Keep your distance’ due to COVID-19, they return to their office a little dejected, unsure how they can safely enjoy the holidays. But the EOD robot saves the day and finds a unique way to spread the holiday cheer – socially distanced, of course.

[ EATA ]

Season's Greetings from Voliro!

[ Voliro ]

Thanks Daniel!

Even if you don't have a robot at home, you can still make Halodi Robotics's gingerbread cookies the old fashioned way.

[ Halodi Robotics ]

Thanks Jesper!

We wish you all a Merry Christmas in this very different 2020. This year has truly changed the world and our way of living. We at Energy Robotics would like to say thank you to all our customers, partners, supporters, friends, and family.

An Aibo ERS-7? Sweet!

[ Energy Robotics ]

Thanks Stefan!

The nickname for this drone should be “The Grinch.”

As it turns out, in real life, taking samples of trees to determine how healthy they are is best done from the top.

[ DeLeaves ]

Thanks Alexis!

ETH Zurich would like to wish you happy holidays and a successful 2021 full of energy and health!

[ ETH Zurich ]

The QBrobotics Team wishes you all a Merry Christmas and a Happy New Year!

[ QBrobotics ]

Extend Robotics’ avatar twin got very excited opening a Christmas gift, coordinating two arms to show off its dexterity and speed.

[ Extend Robotics ]

HEBI Robotics wishes everyone a great holiday season! Onto 2021!

[ HEBI Robotics ]

Christmas at the Mobile Robots Lab at Poznan Polytechnic.

[ Poznan ]

SWarm Holiday Wishes from the Hauert Lab!

[ Hauert Lab ]

Brubotics-VUB SMART and SHERO team wishes you a Merry Christmas and Happy 2021!

[ SMART ]

Success is all about teamwork! Thank you for supporting PAL Robotics. This festive season enjoy and stay safe!

[ PAL Robotics ]

Our robots wish you Happy Holidays! Starring the world’s first robot slackliner (Leonardo)!

[ Caltech ]

Happy Holidays and a Prosperous New Year from ZenRobotics!

[ ZenRobotics ]

Our Highly Dexterous Manipulation System (HDMS) dual-arm robot is ringing in the new year with good cheer!

[ RE2 Robotics ]

Happy Holidays 2020 from NAO!

[ SoftBank Robotics ]

Happy Holidays from DENSO Robotics!

[ DENSO ]

#437940 How Boston Dynamics Taught Its Robots to ...

A week ago, Boston Dynamics posted a video of Atlas, Spot, and Handle dancing to “Do You Love Me.” It was, according to the video description, a way “to celebrate the start of what we hope will be a happier year.” As of today the video has been viewed nearly 24 million times, and the popularity is no surprise, considering the compelling mix of technical prowess and creativity on display.

Strictly speaking, the stuff going on in the video isn’t groundbreaking, in the sense that we’re not seeing any of the robots demonstrate fundamentally new capabilities, but that shouldn’t take away from how impressive it is—you’re seeing state-of-the-art in humanoid robotics, quadrupedal robotics, and whatever-the-heck-Handle-is robotics.

What is unique about this video from Boston Dynamics is the artistic component. We know that Atlas can do some practical tasks, and we know it can do some gymnastics and some parkour, but dancing is certainly something new. To learn more about what it took to make these dancing robots happen (and it’s much more complicated than it might seem), we spoke with Aaron Saunders, Boston Dynamics’ VP of Engineering.

Saunders started at Boston Dynamics in 2003, meaning that he’s been a fundamental part of a huge number of Boston Dynamics’ robots, even the ones you may have forgotten about. Remember LittleDog, for example? A team of two designed and built that adorable little quadruped, and Saunders was one of them.

While he’s been part of the Atlas project since the beginning (and had a hand in just about everything else that Boston Dynamics works on), Saunders has spent the last few years leading the Atlas team specifically, and he was kind enough to answer our questions about their dancing robots.

IEEE Spectrum: What’s your sense of how the Internet has been reacting to the video?

Aaron Saunders: We have different expectations for the videos that we make; this one was definitely anchored in fun for us. The response on YouTube was record-setting for us: We received hundreds of emails and calls with people expressing their enthusiasm, and also sharing their ideas for what we should do next, what about this song, what about this dance move, so that was really fun. My favorite reaction was one that I got from my 94-year-old grandma, who watched the video on YouTube and then sent a message through the family asking if I’d taught the robot those sweet moves. I think this video connected with a broader audience, because it mixed the old-school music with new technology.

We haven’t seen Atlas move like this before—can you talk about how you made it happen?

We started by working with dancers and a choreographer to create an initial concept for the dance by composing and assembling a routine. One of the challenges, and probably the core challenge for Atlas in particular, was adjusting human dance moves so that they could be performed on the robot. To do that, we used simulation to rapidly iterate through movement concepts while soliciting feedback from the choreographer to reach behaviors that Atlas had the strength and speed to execute. It was very iterative—they would literally dance out what they wanted us to do, and the engineers would look at the screen and go “that would be easy” or “that would be hard” or “that scares me.” And then we’d have a discussion, try different things in simulation, and make adjustments to find a compatible set of moves that we could execute on Atlas.

Throughout the project, the time frame for creating those new dance moves got shorter and shorter as we built tools, and as an example, eventually we were able to use that toolchain to create one of Atlas’ ballet moves in just one day, the day before we filmed, and it worked. So it’s not hand-scripted or hand-coded, it’s about having a pipeline that lets you take a diverse set of motions, that you can describe through a variety of different inputs, and push them through and onto the robot.
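
A pipeline like the one Saunders describes ultimately has to turn sparse, human-authored keyframes into dense joint trajectories a controller can track. The snippet below sketches that retiming step with a cubic spline; the keyframe values and the 250 Hz sample rate are illustrative assumptions, not details of Boston Dynamics’ toolchain.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Choreographer-style keyframes: time (s) -> a single joint angle (rad).
key_times  = np.array([0.0, 0.4, 0.8, 1.2, 1.6])
key_angles = np.array([0.0, 0.6, 0.2, 0.9, 0.0])

# Fit a spline so position and velocity are continuous between keyframes,
# then sample it at the (assumed) 250 Hz rate of the joint controller.
spline = CubicSpline(key_times, key_angles)
t = np.arange(key_times[0], key_times[-1], 1.0 / 250.0)
trajectory = spline(t)              # dense joint-angle targets
velocities = spline(t, 1)           # first derivative, for feed-forward

print(trajectory.shape, "samples over", key_times[-1], "seconds")
```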

Image: Boston Dynamics

Were there some things that were particularly difficult to translate from human dancers to Atlas? Or, things that Atlas could do better than humans?

Some of the spinning turns in the ballet parts took more iterations to get to work, because they were the furthest from leaping and running and some of the other things that we have more experience with, so they challenged both the machine and the software in new ways. We definitely learned not to underestimate how flexible and strong dancers are—when you take elite athletes and you try to do what they do but with a robot, it’s a hard problem. It’s humbling. Fundamentally, I don’t think that Atlas has the range of motion or power that these athletes do, although we continue developing our robots towards that, because we believe that in order to broadly deploy these kinds of robots commercially, and eventually in a home, we think they need to have this level of performance.

One thing that robots are really good at is doing something over and over again the exact same way. So once we dialed in what we wanted to do, the robots could just do it again and again as we played with different camera angles.

I can understand how you could use human dancers to help you put together a routine with Atlas, but how did that work with Spot, and particularly with Handle?

I think the people we worked with actually had a lot of talent for thinking about motion, and thinking about how to express themselves through motion. And our robots do motion really well—they’re dynamic, they’re exciting, they balance. So I think what we found was that the dancers connected with the way the robots moved, and then shaped that into a story, and it didn’t matter whether there were two legs or four legs. When you don’t necessarily have a template of animal motion or human behavior, you just have to think a little harder about how to go about doing something, and that’s true for more pragmatic commercial behaviors as well.

“We used simulation to rapidly iterate through movement concepts while soliciting feedback from the choreographer to reach behaviors that Atlas had the strength and speed to execute. It was very iterative—they would literally dance out what they wanted us to do, and the engineers would look at the screen and go ‘that would be easy’ or ‘that would be hard’ or ‘that scares me.’”
—Aaron Saunders, Boston Dynamics

How does the experience that you get teaching robots to dance, or to do gymnastics or parkour, inform your approach to robotics for commercial applications?

We think that the skills inherent in dance and parkour, like agility, balance, and perception, are fundamental to a wide variety of robot applications. Maybe more importantly, finding that intersection between building a new robot capability and having fun has been Boston Dynamics’ recipe for robotics—it’s a great way to advance.

One good example is how when you push limits by asking your robots to do these dynamic motions over a period of several days, you learn a lot about the robustness of your hardware. Spot, through its productization, has become incredibly robust, and required almost no maintenance—it could just dance all day long once you taught it to. And the reason it’s so robust today is because of all those lessons we learned from previous things that may have just seemed weird and fun. You’ve got to go into uncharted territory to even know what you don’t know.

Image: Boston Dynamics

It’s often hard to tell from watching videos like these how much time it took to make things work the way you wanted them to, and how representative they are of the actual capabilities of the robots. Can you talk about that?

Let me try to answer in the context of this video, but I think the same is true for all of the videos that we post. We work hard to make something, and once it works, it works. For Atlas, most of the robot control existed from our previous work, like the work that we’ve done on parkour, which sent us down a path of using model predictive controllers that account for dynamics and balance. We used those to run on the robot a set of dance steps that we’d designed offline with the dancers and choreographer. So, a lot of time, months, we spent thinking about the dance and composing the motions and iterating in simulation.

Dancing required a lot of strength and speed, so we even upgraded some of Atlas’ hardware to give it more power. Dance might be the highest power thing we’ve done to date—even though you might think parkour looks way more explosive, the amount of motion and speed that you have in dance is incredible. That also took a lot of time over the course of months: creating the capability in the machine to go along with the capability in the algorithms.

Once we had the final sequence that you see in the video, we only filmed for two days. Much of that time was spent figuring out how to move the camera through a scene with a bunch of robots in it to capture one continuous two-minute shot, and while we ran and filmed the dance routine multiple times, we could repeat it quite reliably. There was no cutting or splicing in that opening two-minute shot.

There were definitely some failures in the hardware that required maintenance, and our robots stumbled and fell down sometimes. These behaviors are not meant to be productized and to be 100 percent reliable, but they’re definitely repeatable. We try to be honest with showing things that we can do, not a snippet of something that we did once. I think there’s an honesty required in saying that you’ve achieved something, and that’s definitely important for us.

You mentioned that Spot is now robust enough to dance all day. How about Atlas? If you kept on replacing its batteries, could it dance all day, too?

Atlas, as a machine, is still, you know… there are only a handful of them in the world, they’re complicated, and reliability was not a main focus. We would definitely break the robot from time to time. But the robustness of the hardware, in the context of what we were trying to do, was really great. And without that robustness, we wouldn’t have been able to make the video at all. I think Atlas is a little more like a helicopter, where there’s a higher ratio between the time you spend doing maintenance and the time you spend operating. Whereas with Spot, the expectation is that it’s more like a car, where you can run it for a long time before you have to touch it.

When you’re teaching Atlas to do new things, is it using any kind of machine learning? And if not, why not?

As a company, we’ve explored a lot of things, but Atlas is not using a learning controller right now. I expect that a day will come when we will. Atlas’ current dance performance uses a mixture of what we like to call reflexive control, which is a combination of reacting to forces, online and offline trajectory optimization, and model predictive control. We leverage these techniques because they’re a reliable way of unlocking really high performance stuff, and we understand how to wield these tools really well. We haven’t found the end of the road in terms of what we can do with them.

We plan on using learning to extend and build on the foundation of software and hardware that we’ve developed, but I think that we, along with the community, are still trying to figure out where the right places to apply these tools are. I think you’ll see that as part of our natural progression.
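
For readers unfamiliar with the model predictive control Saunders mentions, the core idea is to repeatedly solve a short-horizon optimization and apply only the first action. Below is a bare-bones sketch on a double-integrator toy model, a crude stand-in for a balancing body, solved with a finite-horizon LQR recursion; it illustrates the concept and is nothing like Atlas’ actual controllers.

```python
import numpy as np

# Double integrator: state [position, velocity], control = acceleration.
dt = 0.02
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([100.0, 1.0])           # penalize position error, then velocity
R = np.array([[0.1]])               # penalize control effort

def finite_horizon_gain(horizon=50):
    """Backward Riccati recursion for the finite-horizon LQR problem that one
    MPC step solves; returns the feedback gain for the first time step."""
    P = Q.copy()
    K = None
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

x = np.array([[0.3], [0.0]])        # start 30 cm from the balance point
for _ in range(200):                # receding horizon: re-plan, apply one step
    K = finite_horizon_gain()
    u = -K @ x
    x = A @ x + B @ u
print("final state:", x.ravel())    # should be driven close to [0, 0]
```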

Image: Boston Dynamics

Much of Atlas’ dynamic motion comes from its lower body at the moment, but parkour makes use of upper body strength and agility as well, and we’ve seen some recent concept images showing Atlas doing vaults and pullups. Can you tell us more?

Humans and animals do amazing things using their legs, but they do even more amazing things when they use their whole bodies. I think parkour provides a fantastic framework that allows us to progress towards whole body mobility. Walking and running was just the start of that journey. We’re progressing through more complex dynamic behaviors like jumping and spinning, that’s what we’ve been working on for the last couple of years. And the next step is to explore how using arms to push and pull on the world could extend that agility.

One of the missions that I’ve given to the Atlas team is to start working on leveraging the arms as much as we leverage the legs to enhance and extend our mobility, and I’m really excited about what we’re going to be working on over the next couple of years, because it’s going to open up a lot more opportunities for us to do exciting stuff with Atlas.

What’s your perspective on hydraulic versus electric actuators for highly dynamic robots?

Across my career at Boston Dynamics, I’ve felt passionately connected to so many different types of technology, but I’ve settled into a place where I really don’t think this is an either-or conversation anymore. I think the selection of actuator technology really depends on the size of the robot that you’re building, what you want that robot to do, where you want it to go, and many other factors. Ultimately, it’s good to have both kinds of actuators in your toolbox, and I love having access to both—and we’ve used both with great success to make really impressive dynamic machines.

I think the only delineation between hydraulic and electric actuators that appears to be distinct for me is probably in scale. It’s really challenging to make tiny hydraulic things because the industry just doesn’t do a lot of that, and the reciprocal is that the industry also doesn’t tend to make massive electrical things. So, you may find that to be a natural division between these two technologies.

Besides what you’re working on at Boston Dynamics, what recent robotics research are you most excited about?

For us as a company, we really love to follow advances in sensing, computer vision, terrain perception, these are all things where the better they get, the more we can do. For me personally, one of the things I like to follow is manipulation research, and in particular manipulation research that advances our understanding of complex, friction-based interactions like sliding and pushing, or moving compliant things like ropes.

We’re seeing a shift from just pinching things, lifting them, moving them, and dropping them, to much more meaningful interactions with the environment. Research in that type of manipulation I think is going to unlock the potential for mobile manipulators, and I think it’s really going to open up the ability for robots to interact with the world in a rich way.

Is there anything else you’d like people to take away from this video?

For me personally, and I think it’s because I spend so much of my time immersed in robotics and have a deep appreciation for what a robot is and what its capabilities and limitations are, one of my strong desires is for more people to spend more time with robots. We see a lot of opinions and ideas from people looking at our videos on YouTube, and it seems to me that if more people had opportunities to think about and learn about and spend time with robots, that new level of understanding could help them imagine new ways in which robots could be useful in our daily lives. I think the possibilities are really exciting, and I just want more people to be able to take that journey.
