Spot’s 3.0 Update Adds Increased Autonomy, New Door Tricks

While Boston Dynamics' Atlas humanoid spends its time learning how to dance and do parkour, the company's Spot quadruped is quietly getting much better at doing useful, valuable tasks in commercial environments. Solving tasks like dynamic path planning and door manipulation in a way that's robust enough that someone can buy your robot and not regret it is, I would argue, just as difficult as getting a robot to do a backflip, if not more so.

With a short blog post today, Boston Dynamics is announcing Spot Release 3.0, representing more than a year of software improvements over Release 2.0, which we covered back in May of 2020. The highlights of Release 3.0 include autonomous dynamic replanning, cloud integration, some clever camera tricks, and a new ability to handle push-bar doors. Earlier today, we spoke with Zachary Jackowski, Spot Chief Engineer at Boston Dynamics, to learn more about what Spot's been up to.

Here are some highlights from Spot's Release 3.0 software upgrade today, lifted from the blog post, which has the entire list:

  • Mission planning: Save time by selecting which inspection actions you want Spot to perform, and it will take the shortest path to collect your data (a rough sketch of this kind of route ordering follows this list).
  • Dynamic replanning: Don't miss inspections due to changes on site. Spot will replan around blocked paths to make sure you get the data you need.
  • Repeatable image capture: Capture the same image from the same angle every time with scene-based camera alignment for the Spot CAM+ pan-tilt-zoom (PTZ) camera.
  • Cloud-compatible: Connect Spot to AWS, Azure, IBM Maximo, and other systems with existing or easy-to-build integrations.
  • Manipulation: Remotely operate the Spot Arm with ease through rear Spot CAM integration and split-screen view. Arm improvements also include added functionality for push-bar doors, a revamped grasping UX, and an updated SDK.
  • Sounds: Keep trained bystanders aware of Spot with configurable warning sounds.
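
To make the mission-planning bullet concrete: ordering a set of selected inspection actions so the robot travels a short overall route is a classic sequencing problem. Here's a minimal sketch of one simple way it could work; the nearest-neighbor heuristic, the waypoint names, and the coordinates are illustrative assumptions on our part, not Boston Dynamics' planner.

```python
# Hedged sketch: order selected inspection actions so the robot takes a
# short path between them. Nearest-neighbor is a simple stand-in; the
# real planner is not public. All names and coordinates are invented.

from math import dist

# Hypothetical inspection points: name -> (x, y) position in meters.
INSPECTIONS = {
    "gauge_A": (0.0, 4.0),
    "panel_B": (6.0, 4.5),
    "valve_C": (6.5, 0.5),
    "meter_D": (1.0, 1.0),
}

def plan_order(start, selected):
    """Greedy nearest-neighbor ordering of the selected inspections."""
    remaining = set(selected)
    here, order = start, []
    while remaining:
        nearest = min(remaining, key=lambda name: dist(here, INSPECTIONS[name]))
        order.append(nearest)
        here = INSPECTIONS[nearest]
        remaining.remove(nearest)
    return order

# The operator picks a subset of actions; the software worries about the route.
print(plan_order((0.0, 0.0), ["gauge_A", "valve_C", "meter_D"]))
```

A greedy ordering like this isn't optimal in general, but it captures the promise of the feature: you choose what to inspect, and Spot figures out an efficient way to visit everything.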

The focus here is not just making Spot more autonomous, but making Spot more autonomous in some very specific ways that are targeted towards commercial usefulness. It's tempting to look at this stuff and say that it doesn't represent any massive new capabilities. But remember that Spot is a product, and its job is to make money, which is an enormous challenge for any robot, much less a relatively expensive quadruped.

[Image: a yellow and black four-legged robot standing in a factory]

For more details on the new release and a general update about Spot, we spoke with Zachary Jackowski, Spot Chief Engineer at Boston Dynamics.

IEEE Spectrum: So what's new with Spot 3.0, and why is this release important?

Zachary Jackowski: We've been focusing heavily on flexible autonomy that really works for our industrial customers. The thing that may not quite come through in the blog post is how iceberg-y making autonomy work on real customer sites is. Our blog post has some bullet points about "dynamic replanning" in maybe 20 words, but to deliver that, we actually reengineered almost our entire autonomy system based on the failure modes we were seeing on our customer sites.

The biggest thing that changed is that previously, our robot mission paradigm was a linear mission where you would take the robot around your site and record a path. Obviously, that was a little bit fragile on complex sites—if you're on a construction site and someone puts a pallet in your path, you can't follow that path anymore. So we ended up engineering our autonomy system to do building scale mapping, which is a big part of why we're calling it Spot 3.0. This is state-of-the-art from an academic perspective, except that it's volume shipping in a real product, which to me represents a little bit of our insanity.
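
To picture the shift Jackowski is describing, from replaying one recorded path to routing over a building-scale map, here's a minimal sketch. The waypoint graph, the edge costs, and the use of Dijkstra's algorithm are illustrative assumptions; Boston Dynamics hasn't published its implementation.

```python
# Hedged sketch: a recorded site becomes a waypoint graph rather than a
# single linear path, so a blocked edge can simply be routed around.
# The graph shape and costs are invented for illustration.

import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over {node: {neighbor: cost}}; returns a node list or None."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, step in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))
    return None  # no route left anywhere in the map

# Hypothetical map recorded during a walkthrough of the site.
site = {
    "dock":      {"hall_1": 5.0},
    "hall_1":    {"dock": 5.0, "hall_2": 4.0, "stairwell": 9.0},
    "hall_2":    {"hall_1": 4.0, "pump_room": 3.0},
    "stairwell": {"hall_1": 9.0, "pump_room": 6.0},
    "pump_room": {"hall_2": 3.0, "stairwell": 6.0},
}

print(shortest_path(site, "dock", "pump_room"))  # short way, via hall_2

# A pallet blocks hall_1 <-> hall_2: remove the edge and replan.
del site["hall_1"]["hall_2"]
del site["hall_2"]["hall_1"]
print(shortest_path(site, "dock", "pump_room"))  # detours via the stairwell
```

With a linear recorded path, the blocked hallway would simply end the mission; with a graph, the detour falls out of the same search.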

And one super cool technical nugget in this release is that we have a powerful pan/tilt/zoom camera on the robot that our customers use to take images of gauges and panels. We've added scene-based alignment and also computer vision model-based alignment so that the robot can capture the images from the same perspective, every time, perfectly framed. In pictures of the robot, you can see that there's a crash cage around the camera, but the image alignment stuff actually does inverse kinematics to command the robot's body to shift a little bit if the cage is obscuring anything important in the frame.
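
At its simplest, scene-based alignment can be pictured like this: match visual features between a stored reference image and the live view, then turn the pixel offset into pan/tilt corrections. The sketch below, built on OpenCV's ORB features and an assumed field of view, is our own illustration of the idea, not the shipping system, which as Jackowski notes can also shift the robot's body via inverse kinematics when pan and tilt alone aren't enough.

```python
# Hedged sketch: estimate how far the live PTZ view has drifted from a
# stored reference image, then nudge pan/tilt to re-center the scene.
# The FOV constant and the overall approach are assumptions.

import cv2
import numpy as np

HORIZONTAL_FOV_DEG = 60.0  # assumed field of view, reused vertically for simplicity

def pan_tilt_correction(reference_gray, live_gray):
    """Return (pan_deg, tilt_deg) to realign the live view, or None."""
    orb = cv2.ORB_create(500)
    kp_ref, des_ref = orb.detectAndCompute(reference_gray, None)
    kp_live, des_live = orb.detectAndCompute(live_gray, None)
    if des_ref is None or des_live is None:
        return None  # scene too featureless to match

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_ref, des_live)
    if len(matches) < 10:
        return None  # not enough agreement to trust an estimate

    # Median pixel displacement of matched keypoints (robust to outliers).
    shifts = np.array([
        np.subtract(kp_live[m.trainIdx].pt, kp_ref[m.queryIdx].pt)
        for m in matches
    ])
    dx, dy = np.median(shifts, axis=0)

    deg_per_px = HORIZONTAL_FOV_DEG / reference_gray.shape[1]
    # Sign convention is arbitrary here: pan/tilt opposite the drift.
    return (-dx * deg_per_px, -dy * deg_per_px)
```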

When Spot is dynamically replanning around obstacles, how much flexibility does it have in where it goes?

There are a bunch of tricks to figuring out when to give up on a blocked path, and then it's very simple, run-of-the-mill route planning within an existing map. One of the really big design points of our system, which we spent a lot of time talking about during the design phase, is that it turns out people in these high-value facilities really value predictability. So we don't want the robot wandering around trying to find its way somewhere.
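
One way to picture that "when to give up" logic is as a retry policy layered on a graph planner like the one sketched earlier (this snippet reuses that shortest_path function). The retry threshold and the traverse interface are assumptions for illustration; the property worth noticing, echoing the predictability point, is that replanning only ever happens over edges already in the map.

```python
# Hedged sketch: retry a blocked edge a few times before declaring it
# blocked, then replan strictly within the known map -- never explore.
# Reuses shortest_path() from the earlier sketch; traverse(a, b) stands
# in for the robot's (hypothetical) edge-walking primitive.

MAX_RETRIES = 3

def follow_route(graph, route, traverse):
    """Walk route edge by edge; returns the final node actually reached."""
    i = 0
    while i < len(route) - 1:
        a, b = route[i], route[i + 1]
        for _ in range(MAX_RETRIES):
            if traverse(a, b):
                break
        else:
            # Give up on this edge: drop it from the map and replan,
            # still only over edges that were recorded up front.
            graph.get(a, {}).pop(b, None)
            graph.get(b, {}).pop(a, None)
            new_route = shortest_path(graph, a, route[-1])
            if new_route is None:
                return a  # mission ends predictably; no wandering
            route, i = new_route, 0
            continue
        i += 1
    return route[-1]
```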

Do you think that over time, your customers will begin to trust the robot with more autonomy and less predictability?

I think so, but there's a lot of trust to be built there. Our customers have to see the robot do the job well for a significant amount of time, and that will come.

Can you talk a bit more about trying to do state-of-the-art work on a robot that's being deployed commercially?

I can tell you about how big the gap is. When we talk about features like this, our engineers are like, "oh yeah, I could read this paper, pull this algorithm, and code something up over a weekend and see it work." It's easy to get a feature to work once, make a really cool GIF, and post it to the engineering group chat room. But if you look at what it takes to actually ship a feature at product level, we're talking person-years for it to reach the level of quality people expect from buying an iPhone and just having it work perfectly all the time. You have to write all the code to product standards, implement all your tests, and get everything right, and then you also have to visit a lot of customers, because the thing that's different about mobile robotics as a product is that it's all about how the system responds to environments it hasn't seen before.

The blog post calls Spot 3.0 "A Sensing Solution for the Real World." What is the real world for Spot at this point, and how will that change going forward?

For Spot, the "real world" means power plants, electrical switch yards, chemical plants, breweries, automotive plants, and other living and breathing industrial facilities that have never considered the fact that a robot might one day be walking around in them. It's indoors, it's outdoors, in the dark and in direct sunlight. As for the geometric complexity of sites, we're getting pretty comfortable with that.

I think the frontiers of complexity for us are things like, how do you work in a busy place with lots of untrained humans moving through it—that's an area where we're investing a lot, but it's going to be a big hill to climb and it'll take a little while before we're really comfortable in environments like that. Functional safety, certified person detectors, all that good stuff, that's a really juicy unsolved field.

Spot can now open push-bar doors, which seems like an easier problem than doors with handles, which Spot learned to open a while ago. Why'd you start with door handles first?

Push-bar doors are an easier problem! But being engineers, we did the harder problem first, because we wanted to get it done.
