Tag Archives: level

#438755 Soft Legged Robot Uses Pneumatic ...

Soft robots are inherently safe, highly resilient, and potentially very cheap, making them promising for a wide array of applications. But development on them has been a bit slow relative to other areas of robotics, at least partially because soft robots can’t directly benefit from the massive increase in computing power and sensor and actuator availability that we’ve seen over the last few decades. Instead, roboticists have had to get creative to find ways of achieving the functionality of conventional robotics components using soft materials and compatible power sources.

In the current issue of Science Robotics, researchers from UC San Diego demonstrate a soft walking robot with four legs that moves with a turtle-like gait controlled by a pneumatic circuit system made from tubes and valves. This air-powered nervous system can actuate multiple degrees of freedom in sequence from a single source of pressurized air, offering a huge reduction in complexity and bringing a very basic form of decision making onto the robot itself.

Generally, when people talk about soft robots, the robots are only mostly soft. There are some components that are very difficult to make soft, including pressure sources and the necessary electronics to direct that pressure between different soft actuators in a way that can be used for propulsion. What’s really cool about this robot is that researchers have managed to take a pressure source (either a single tether or an onboard CO2 cartridge) and direct it to four different legs, each with three different air chambers, using an oscillating three-valve circuit made entirely of soft materials.

Photo: UCSD

The pneumatic circuit that powers and controls the soft quadruped.

The inspiration for this can be found in biology—natural organisms, including quadrupeds, use nervous system components called central pattern generators (CPGs) to prompt repetitive motions with limbs that are used for walking, flying, and swimming. This is obviously more complicated in some organisms than in others, and is typically mediated by sensory feedback, but the underlying structure of a CPG is basically just a repeating circuit that drives muscles in sequence to produce a stable, continuous gait. In this case, we’ve got pneumatic muscles being driven in opposing pairs, resulting in a diagonal couplet gait, where diagonally opposed limbs rotate forwards and backwards at the same time.
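
To make the CPG idea concrete, here is a minimal sketch (my illustration, not the authors' model): two phase oscillators, coupled so that they settle into antiphase, are enough to reproduce that diagonal couplet pattern, with each oscillator driving one diagonal pair of legs.

```python
# Minimal central-pattern-generator sketch (illustrative only, not the paper's model):
# two Kuramoto-style phase oscillators, coupled to stay in antiphase, drive the two
# diagonal leg pairs of a quadruped so that diagonally opposed legs move together.
import math

def simulate_cpg(freq_hz=1.0, coupling=2.0, dt=0.01, steps=300):
    phase = [0.0, math.pi / 2]          # start the pairs out of sync on purpose
    history = []
    for _ in range(steps):
        # each oscillator is pulled toward being pi radians away from the other
        d0 = 2 * math.pi * freq_hz + coupling * math.sin(phase[1] - phase[0] - math.pi)
        d1 = 2 * math.pi * freq_hz + coupling * math.sin(phase[0] - phase[1] - math.pi)
        phase[0] += d0 * dt
        phase[1] += d1 * dt
        # a pair is in "swing" (rotating forward) during the first half of its cycle
        pair_a_swing = math.sin(phase[0]) > 0   # front-left + hind-right
        pair_b_swing = math.sin(phase[1]) > 0   # front-right + hind-left
        history.append((pair_a_swing, pair_b_swing))
    return history

if __name__ == "__main__":
    gait = simulate_cpg()
    # once the coupling settles, only one diagonal pair is in swing at a time
    print(gait[150::30])   # a few samples from the second half of the run
```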

Diagram: Science Robotics

(J) Pneumatic logic circuit for rhythmic leg motion. A constant positive pressure source (P+) applied to three inverter components causes a high-pressure state to propagate around the circuit, with a delay at each inverter. While the input to one inverter is high, the attached actuator (i.e., A1, A2, or A3) is inflated. This sequence of high-pressure states causes each pair of legs of the robot to rotate in a direction determined by the pneumatic connections. (K) By reversing the sequence of activation of the pneumatic oscillator circuit, the attached actuators inflate in a new sequence (A1, A3, and A2), causing (L) the legs of the robot to rotate in reverse. (M) Schematic bottom view of the robot with the directions of leg motions indicated for forward walking.

Diagram: Science Robotics

Each of the valves acts as an inverter by switching the normally closed half (top) to open and the normally open half (bottom) to closed.

The circuit itself is made up of three bistable pneumatic valves connected by tubing that acts as a delay: the tubing resists the gas moving through it, and that resistance can be adjusted by altering the tube’s length and inner diameter. Within the circuit, the movement of the pressurized gas acts as both a source of energy and as a signal, since the legs move wherever the pressure currently is in the circuit. The simplest circuit uses only three valves and can keep the robot walking in a single direction, but more valves add more complex leg control options. For example, the researchers were able to use seven valves to tune the phase offset of the gait, and even just one additional valve (albeit of a slightly more complex design) could enable reversal of the system, causing the robot to walk backwards in response to input from a soft sensor. And with another complex valve, a manual (tethered) controller could be used for omnidirectional movement.
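
As a software analogy (my own, not the authors' implementation), the three-inverter ring behaves like a classic ring oscillator: each stage outputs the inverse of its input after a delay set by its tube, and because the number of inversions around the loop is odd, the circuit can never settle, so a high-pressure state travels around the ring and inflates the actuators in sequence. The sketch below computes the resulting steady-state waveforms.

```python
# Idealized steady-state of a three-inverter pneumatic ring oscillator (an analogy I'm
# assuming, not the authors' code). Each tube adds a transport delay d_i and each valve
# inverts, so the loop has a period of 2*(d1+d2+d3) and the three actuator inputs are
# phase-shifted copies of the same square wave, which is what sequences the legs.
def actuator_waveforms(delays=(1.0, 1.0, 1.0), dt=0.25, cycles=2):
    period = 2.0 * sum(delays)

    def a1(t):                      # pick A1's input as the reference square wave
        return (t % period) < period / 2.0

    def delayed_not(signal, d):     # effect of passing through one tube + one inverter
        return lambda t: not signal(t - d)

    a2 = delayed_not(a1, delays[0])
    a3 = delayed_not(a2, delays[1])
    t, rows = 0.0, []
    while t < cycles * period:
        rows.append((round(t, 2), a1(t), a2(t), a3(t)))
        t += dt
    return rows

if __name__ == "__main__":
    for t, s1, s2, s3 in actuator_waveforms():
        print(f"t={t:5.2f}  A1={'#' if s1 else '.'}  A2={'#' if s2 else '.'}  A3={'#' if s3 else '.'}")
```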

This work has some similarities to the rover that JPL is developing to explore Venus—that rover isn’t a soft robot, of course, but it operates under similar constraints in that it can’t rely on conventional electronic systems for autonomous navigation or control. It turns out that there are plenty of clever ways to use mechanical (or in this case, pneumatic) intelligence to make robots with relatively complex autonomous behaviors, meaning that in the future, soft (or soft-ish) robots could find valuable roles in situations where using a non-compliant system is not a good option.

For more on why we should be so excited about soft robots and just how soft a soft robot needs to be, we spoke with Michael Tolley, who runs the Bioinspired Robotics and Design Lab at UCSD, and Dylan Drotman, the paper’s first author.

IEEE Spectrum: What can soft robots do for us that more rigid robotic designs can’t?

Michael Tolley: At the very highest level, one of the fundamental assumptions of robotics is that you have rigid bodies connected at joints, and all your motion happens at these joints. That's a really nice approach because it makes the math easy, frankly, and it simplifies control. But when you look around us in nature, even though animals do have bones and joints, the way we interact with the world is much more complicated than that simple story. I’m interested in where we can take advantage of material properties in robotics. If you look at robots that have to operate in very unknown environments, I think you can build in some of the intelligence for how to deal with those environments into the body of the robot itself. And that’s the category this work really falls under—it's about navigating the world.

Dylan Drotman: Walking through confined spaces is a good example. With a rigid-legged robot, you would have to completely change the way that the legs move to walk through a confined space, while if you have flexible legs, like the robot in our paper, you can use relatively simple control strategies to squeeze through an area you wouldn’t be able to get through with a rigid system.

How smart can a soft robot get?

Drotman: Right now we have a sensor on the front that's connected through a fluidic transmission to a bistable valve that causes the robot to reverse. We could add other sensors around the robot to allow it to change direction whenever it runs into an obstacle to effectively make an electronics-free version of a Roomba.

Tolley: Stepping back a little bit from that, one could make an argument that we’re using basic memory elements to generate very basic signals. There’s nothing in principle that would stop someone from making a pneumatic computer—it’s just very complicated to make something that complex. I think you could build on this and do more intelligent decision making, but using this specific design and the components we’re using, it’s likely to be things that are more direct responses to the environment.

How well would robots like these scale down?

Drotman: At the moment we’re manufacturing these components by hand, so the idea would be to make something more like a printed circuit board instead, and to look at how the channel sizes and the valve design would affect the actuation properties. We’ll also be coming up with new circuits, and different designs for the circuits themselves.

Tolley: Down to centimeter or millimeter scale, I don’t think you’d have fundamental fluid flow problems. I think you’re going to be limited more by system design constraints. You’ll have to be able to locomote while carrying around your pressure source, and possibly some other components that are also still rigid. When you start to talk about really small scales, though, it's not as clear to me that you really need an intrinsically soft robot. If you think about insects, their structural geometry can make them behave like they’re soft, but they’re not intrinsically soft.

Should we be thinking about soft robots and compliant robots in the same way, or are they fundamentally different?

Tolley: There’s certainly a connection between the two. You could have a compliant robot that behaves in a very similar way to an intrinsically soft robot, or a robot made of intrinsically soft materials. At that point, it comes down to design and manufacturing and practical limitations on what you can make. I think when you get down to small scales, the two sort of get connected.

There was some interesting work several years ago on using explosions to power soft robots. Is that still a thing?

Tolley: One of the opportunities with soft robots is that with material compliance, you have the potential to store energy. I think there’s exciting potential there for rapid motion with a soft body. Combustion is one way of doing that with power coming from a chemical source all at once, but you could also use a relatively weak muscle that over time stores up energy in a soft body and then releases it.

Is it realistic to expect complete softness from soft robots, or will they likely always have rigid components because they have to store or generate and move pressurized gas somehow?

Tolley: If you look in nature, you do have soft pumps like the heart, but although it’s soft, it’s still relatively stiff. Like, if you grab a heart, it’s not totally squishy. I haven’t done it, but I’d imagine. If you have a container that you’re pressurizing, it has to be stiff enough to not just blow up like a balloon. Certainly pneumatics or hydraulics are not the only way to go for soft actuators; there has been some really nice work on smart muscles and smart materials like hydraulically amplified self-healing electrostatic (HASEL) actuators. They seem promising, but all of these actuators have challenges. We’ve chosen to stick with pressurized pneumatics in the near term; longer term, I think you’ll start to see more of these smart material actuators become more practical.

Personally, I don’t have any problem with soft robots having some rigid components. Most animals on land have some rigid components, but they can still take advantage of being soft, so it’s probably going to be a combination. But I do also like the vision of making an entirely soft, squishy thing.

#438731 Video Friday: Perseverance Lands on Mars

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
Let us know if you have suggestions for next week, and enjoy today's videos.

Hmm, did anything interesting happen in robotics yesterday?

Obviously, we're going to have tons more on the Mars Rover and Mars Helicopter over the next days, weeks, months, years, and (if JPL's track record has anything to say about it) decades. Meantime, here's what's going to happen over the next day or two:

[ Mars 2020 ]

PLEN hopes you had a happy Valentine's Day!

[ PLEN ]

Unitree dressed up a whole bunch of Laikago quadrupeds to take part in the 2021 Spring Festival Gala in China.

[ Unitree ]

Thanks Xingxing!

Marine iguanas compete for the best nesting sites on the Galapagos Islands. Meanwhile RoboSpy Iguana gets involved in a snot sneezing competition after the marine iguanas return from the sea.

[ Spy in the Wild ]

Tails, it turns out, are useful for almost everything.

[ DART Lab ]

In partnership with MD-TEC, this video demonstrates the use of teleoperated robotic arms and a virtual reality interface to perform closed suction for self-ventilating tracheostomy patients during the COVID-19 outbreak. Use of closed suction is recommended to minimise aerosol generated during this procedure. This robotic method avoids staff exposure to the virus to further protect the NHS.

[ Extend Robotics ]

Fotokite is a safe, practical way to do local surveillance with a drone.

I just wish they still had a consumer version 🙁

[ Fotokite ]

How to confuse fish.

[ Harvard ]

Army researchers recently expanded their research area for robotics to a site just north of Baltimore. Earlier this year, Army researchers performed the first fully-autonomous tests onsite using an unmanned ground vehicle test bed platform, which serves as the standard baseline configuration for multiple programmatic efforts within the laboratory. As a means to transition from simulation-based testing, the primary purpose of this test event was to capture relevant data in a live, operationally-relevant environment.

[ Army ]

Flexiv's new RIZON 10 robot hopes you had a happy Valentine's Day!

[ Flexiv ]

Thanks Yunfan!

An inchworm-inspired crawling robot (iCrawl) is a 5-DOF robot with two legs, each with an electromagnetic foot to crawl on metal pipe surfaces. The robot uses a passive foot-cap underneath each electromagnetic foot, enabling it to be a versatile pipe-crawler. The robot has the ability to crawl on metal pipes of various curvatures in horizontal and vertical directions. The robot can be used as a new robotic solution to assist close inspection outside pipelines, thus minimizing downtime in the oil and gas industry.
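
The gait itself is easy to picture as a small state machine, sketched below; the step sequence and names are my assumptions for illustration, not taken from the paper.

```python
# Illustrative state machine for an inchworm-style pipe crawler with electromagnetic feet
# (the step sequence and names are my assumptions, not taken from the iCrawl paper).
from dataclasses import dataclass

@dataclass
class CrawlerState:
    front_magnet_on: bool = True
    rear_magnet_on: bool = True
    body_extended: bool = False

def inchworm_cycle(state: CrawlerState, log: list):
    # 1. keep the rear foot clamped to the pipe, free the front foot, stretch the body forward
    state.front_magnet_on = False
    state.body_extended = True
    log.append("extend: rear clamped, front free")
    # 2. clamp the front foot at its new position
    state.front_magnet_on = True
    log.append("clamp front foot")
    # 3. free the rear foot and contract the body, dragging the rear foot forward
    state.rear_magnet_on = False
    state.body_extended = False
    log.append("contract: front clamped, rear free")
    # 4. clamp the rear foot again, ready for the next stride
    state.rear_magnet_on = True
    log.append("clamp rear foot")
    return state

if __name__ == "__main__":
    log, state = [], CrawlerState()
    for _ in range(2):          # two full strides
        inchworm_cycle(state, log)
    print("\n".join(log))
```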

[ Paper ]

Thanks Poramate!

A short film about Robot Wars from Blender Magazine in 1995.

[ YouTube ]

While modern cameras provide machines with a very well-developed sense of vision, robots still lack such a comprehensive solution for their sense of touch. The talk will present examples of why the sense of touch can prove crucial for a wide range of robotic applications, and a tech demo will introduce a novel sensing technology targeting the next generation of soft robotic skins. The prototype of the tactile sensor developed at ETH Zurich exploits the advances in camera technology to reconstruct the forces applied to a soft membrane. This technology has the potential to revolutionize robotic manipulation, human-robot interaction, and prosthetics.
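
For a flavor of how camera-based tactile sensing works in general (my own sketch, not the ETH Zurich prototype), one common recipe is to compare the current image of the soft membrane's interior against a reference image and map the measured deformation to force through a calibration constant.

```python
# Sketch of camera-based tactile sensing (an illustration of the general approach, not the
# ETH Zurich implementation): estimate how the inside of a soft membrane deforms between a
# reference image and the current image, then map the deformation to a normal force with a
# scalar gain from calibration (the gain value below is a placeholder).
import cv2
import numpy as np

CALIBRATION_GAIN_N_PER_PIXEL = 0.02   # placeholder; would come from pressing known weights

def estimate_normal_force(reference_bgr, current_bgr):
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    cur = cv2.cvtColor(current_bgr, cv2.COLOR_BGR2GRAY)
    # dense optical flow approximates the membrane's in-plane displacement field;
    # positional args: flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(ref, cur, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    displacement = np.linalg.norm(flow, axis=2)          # per-pixel displacement magnitude
    # crude model: total membrane displacement scales with the applied normal force
    return CALIBRATION_GAIN_N_PER_PIXEL * float(displacement.sum())
```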

[ ETHZ ]

Thanks Markus!

Quadrupedal robotics has reached a level of performance and maturity that enables some of the most advanced real-world applications with autonomous mobile robots. Driven by excellent research in academia and industry all around the world, a growing number of platforms with different skills target different applications and markets. We have invited a selection of experts with long-standing experience in this vibrant research area.

[ IFRR ]

Thanks Fan!

Since January 2020, more than 300 different robots in over 40 countries have been used to cope with some aspect of the impact of the coronavirus pandemic on society. The majority of these robots have been used to support clinical care and public safety, allowing responders to work safely and to handle the surge in infections. This panel will discuss how robots have been successfully used and what is needed, both in terms of fundamental research and policy, for robotics to be prepared for future emergencies.

[ IFRR ]

At Skydio, we ship autonomous robots that are flown at scale in complex, unknown environments every day. We’ve invested six years of R&D into handling extreme visual scenarios not typically considered by academia or encountered by cars, ground robots, or AR applications. Drones are commonly in scenes with few or no semantic priors on the environment and must deftly navigate thin objects, extreme lighting, camera artifacts, motion blur, textureless surfaces, vibrations, dirt, smudges, and fog. These challenges are daunting for classical vision, because photometric signals are simply inconsistent. And yet, there is no ground truth for direct supervision of deep networks. We’ll take a detailed look at these issues and how we’ve tackled them to push the state of the art in visual inertial navigation, obstacle avoidance, and rapid trajectory planning. We will also cover the new capabilities on top of our core navigation engine to autonomously map complex scenes and capture all surfaces, by performing real-time 3D reconstruction across multiple flights.

[ UPenn ]

#438080 Boston Dynamics’ Spot Robot Is Now ...

Boston Dynamics has been working on an arm for its Spot quadruped for at least five years now. There have been plenty of teasers along the way, including this 45-second clip from early 2018 of Spot using its arm to open a door, which at 85 million views seems to be Boston Dynamics’ most popular video ever by a huge margin. Obviously, there’s a substantial amount of interest in turning Spot from a highly dynamic but mostly passive sensor platform into a mobile manipulator that can interact with its environment.

As anyone who’s done mobile manipulation will tell you, actually building an arm is just the first step—the really tricky part is getting that arm to do exactly what you want it to do. In particular, Spot’s arm needs to be able to interact with the world with some amount of autonomy in order to be commercially useful, because you can’t expect a human (remote or otherwise) to spend all their time positioning individual joints or whatever to pick something up. So the real question about this arm is whether Boston Dynamics has managed to get it to a point where it’s autonomous enough that users with relatively little robotics experience will be able to get it to do useful tasks without driving themselves nuts.

Today, Boston Dynamics is announcing commercial availability of the Spot arm, along with some improved software called Scout plus a self-charging dock that’ll give the robot even more independence. And to figure out exactly what Spot’s new arm can do, we spoke with Zachary Jackowski, Spot Chief Engineer at Boston Dynamics.

Although Boston Dynamics’ focus has been on dynamic mobility and legged robots, the company has been working on manipulation for a very long time. We first saw an arm prototype on an early iteration of Spot in 2016, where it demonstrated some impressive functionality, including loading a dishwasher and fetching a beer in a way that only resulted in a minor catastrophe. But we’re guessing that Spot’s arm can trace its history back to BigDog’s crazy powerful hydraulic face-arm, which was causing mayhem with cinder blocks back in 2013:

Spot’s arm is not quite that powerful (it has to drag cinder blocks along the ground rather than fling them into space), but you can certainly see the resemblance. Here’s the video that Boston Dynamics posted yesterday to introduce Spot’s new arm:

A couple of things jumped out from this video right away. First, Spot is doing whole body manipulation with its arm, as opposed to just acting as a four-legged base that brings the arm where it needs to go. Planning looks to be very tightly integrated, such that if you ask the robot to manipulate an object, its arm, legs, and torso all work together to optimize that manipulation. Also, when Spot flips that electrical switch, you see the robot successfully grasp the switch, and then reposition its body in a way that looks like it provides better leverage for the flip, which is a neat trick. It looks like it may be able to use the strength of its legs to augment the strength of its arm, as when it’s dragging the cinder block around, which is surely an homage to BigDog. The digging of a hole is particularly impressive. But again, the real question is how much of this is autonomous or semi-autonomous in a way that will be commercially useful?

Before we get to our interview with Spot Chief Engineer Zack Jackowski, it’s worth watching one more video that Boston Dynamics shared with us:

This is notable because Spot is opening a door that’s not ADA compliant, and the robot is doing it with a simple two-finger gripper. Most robots you see interacting with doors rely on ADA compliant hardware, meaning (among other things) a handle that can be pushed rather than a knob that has to be twisted, because it’s much more challenging for a robot to grasp and twist a smooth round door knob than it is to just kinda bash down on a handle. That capability, combined with Spot being able to pass through a spring-loaded door, potentially opens up a much wider array of human environments to the robot, and that’s where we started our conversation with Jackowski.

IEEE Spectrum: At what point did you decide that for Spot’s arm to be useful, it had to be able to handle round door knobs?

Zachary Jackowski: We're like a lot of roboticists, where someone in a meeting about manipulation would say “it's time for the round doorknob” and people would start groaning a little bit. But the reality is that, in order to make a robot useful, you have to engage with the environments that users have. Spot’s arm uses a very simple gripper—it’s a one degree of freedom gripper, but a ton of thought has gone into all of the fine geometric contours of it such that it can grab that ADA compliant lever handle, and it’ll also do an enclosing grasp around a round door knob. The major point of a robot like Spot is to engage with the environment you have, and so you can’t cut out stuff like round door knobs.

We're thrilled to be launching the arm and getting it out with users and to have them start telling us what doors it works really well on, and what they're having trouble with. And we're going to be working on rapidly improving all this stuff. We went through a few campaigns of like, “this isn’t ready until we can open every single door at Boston Dynamics!” But every single door at Boston Dynamics and at our test lab is a small fraction of all the doors in the world. So we're prepared to learn a lot this year.

When we see Spot open a door, or when it does those other manipulation behaviors in the launch video, how much of that is autonomous, how much is scripted, and to what extent is there a human in the loop?

All of the scenes where the robot does a pick, like the snow scene or the laundry scene, that is actually an almost fully integrated autonomous behavior that has a bit of a script wrapped around it. We trained a detector for an object, and the robot is identifying that object in the environment, picking it, and putting it in the bin all autonomously. The scripted part of that is telling the robot to perform a series of picks.

One of the things that we’re excited about, and that roboticists have been excited about going back probably all the way to the DRC, is semi-autonomous manipulation. And so we have modes built into the interface where if you see an object that you want the robot to grab, all you have to do is tap that object on the screen, and the robot will walk up to it, use the depth camera in its gripper to capture a depth map, and plan a grasp on its own in real time. That’s all built-in, too.
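
That tap-to-grasp flow can be sketched generically: look up the tapped pixel in the gripper's depth image, back-project it into 3D with the pinhole camera model, and hand the point to a grasp planner. The interface names below are hypothetical stand-ins, not the Spot SDK.

```python
# Generic sketch of the "tap an object on the screen" flow (hypothetical interface names;
# this is not the Spot SDK). The tapped pixel is looked up in the gripper's depth image,
# back-projected to a 3D point with the pinhole model, and handed to a grasp planner.
from dataclasses import dataclass

@dataclass
class Intrinsics:
    fx: float
    fy: float
    cx: float
    cy: float

def deproject(u: int, v: int, depth_m: float, k: Intrinsics):
    """Back-project pixel (u, v) with measured depth into the camera frame."""
    x = (u - k.cx) * depth_m / k.fx
    y = (v - k.cy) * depth_m / k.fy
    return (x, y, depth_m)

def tap_to_grasp(tap_uv, depth_image, k: Intrinsics, robot):
    u, v = tap_uv
    depth_m = depth_image[v, u]                          # depth map from the gripper camera
    target_cam = deproject(u, v, depth_m, k)             # 3D point in the camera frame
    target_world = robot.camera_to_world(target_cam)     # hypothetical transform helper
    robot.walk_to(target_world, standoff_m=0.5)          # hypothetical: approach the object
    robot.plan_and_execute_grasp(target_world)           # hypothetical: grasp planned on-robot
```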

The jump rope—robots don’t just go and jump rope on their own. We scripted an arm motion to move the rope, and wrote a script using our API to coordinate all three robots. Drawing “Boston Dynamics” in chalk in our parking lot was scripted also. One of our engineers wrote a really cool G-code interpreter that vectorizes graphics so that Spot can draw them.
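
As a rough illustration of the vectorize-and-draw idea (not Boston Dynamics' interpreter, which parses G-code rather than emitting it), here is a tiny sketch that turns polyline strokes into G-code-style moves a drawing robot could follow.

```python
# Tiny sketch of turning vector strokes into G-code-style commands for a drawing robot
# (illustrative only). A stroke is a list of (x, y) points in millimetres; each stroke
# becomes a travel move, a pen-down, and a series of G1 feed moves.
def strokes_to_gcode(strokes, feed_mm_per_min=1500, travel_height_mm=20.0):
    lines = ["G21 ; millimetres", "G90 ; absolute coordinates"]
    for stroke in strokes:
        (x0, y0), rest = stroke[0], stroke[1:]
        lines.append(f"G0 Z{travel_height_mm} ; lift chalk")
        lines.append(f"G0 X{x0:.1f} Y{y0:.1f} ; travel to stroke start")
        lines.append("G0 Z0 ; chalk down")
        for x, y in rest:
            lines.append(f"G1 X{x:.1f} Y{y:.1f} F{feed_mm_per_min}")
    lines.append(f"G0 Z{travel_height_mm} ; lift chalk at the end")
    return "\n".join(lines)

if __name__ == "__main__":
    letter_l = [[(0.0, 100.0), (0.0, 0.0), (60.0, 0.0)]]   # a single "L"-shaped stroke
    print(strokes_to_gcode(letter_l))
```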

So for an end user, if you wanted Spot to autonomously flip some switches for you, you’d just have to train Spot on your switches, and then Spot could autonomously perform the task?

There are a couple of ways that task could break down depending on how you’re interfacing with the robot. If you’re a tablet user, you’d probably just identify the switch yourself on the tablet’s screen, and the robot will figure out the grasp, and grasp it. Then you’ll enter a constrained manipulation mode on the tablet, and the robot will be able to actuate the switch. But the robot will take care of the complicated controls aspects, like figuring out how hard it has to pull, the center of rotation of the switch, and so on.
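
One way a controller can estimate the center of rotation of a switch or valve is to record the gripper's position while the arm complies with the mechanism and fit a circle to that trajectory; the least-squares (Kasa) fit below is a minimal sketch of that idea, not Boston Dynamics' constrained-manipulation code.

```python
# Least-squares (Kasa) circle fit: given gripper contact positions recorded while the arm
# complies with a rotary mechanism, estimate the hinge's center of rotation and radius.
# This is a sketch of the general idea, not Boston Dynamics' constrained-manipulation code.
import numpy as np

def fit_center_of_rotation(points_xy):
    pts = np.asarray(points_xy, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # solve x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c) in the least-squares sense
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = (-a / 2.0, -b / 2.0)
    radius = float(np.sqrt(center[0]**2 + center[1]**2 - c))
    return center, radius

if __name__ == "__main__":
    # points on a circle of radius 0.05 m centred at (0.30, 0.10), plus a little noise
    theta = np.linspace(0.0, 1.2, 25)
    arc = np.column_stack([0.30 + 0.05 * np.cos(theta),
                           0.10 + 0.05 * np.sin(theta)])
    arc += np.random.default_rng(0).normal(scale=1e-4, size=arc.shape)
    print(fit_center_of_rotation(arc))
```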

The video of Spot digging was pretty cool—how did that work?

That’s mostly a scripted behavior. There are some really interesting control systems topics in there, like how you’d actually do the right kinds of force control while you insert the trowel into the dirt, and how to maintain robot stability while you do it. The higher level task of how to make a good hole in the dirt—that’s scripted. But the part of the problem that’s actually digging, you need the right control system to actually do that, or you’ll dig your trowel into the ground and flip your robot over.
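
The force control Jackowski describes is often some flavor of admittance control: regulate a desired contact force by letting the force error drive the tool's velocity. Here is a deliberately simplified sketch (my own, with made-up gains, not the shipped controller).

```python
# Minimal admittance-control sketch for inserting a tool at a regulated downward force
# (a simplification of the idea discussed above, not Boston Dynamics' controller).
def insertion_step(z_position, measured_force, desired_force=-30.0,
                   admittance=0.002, dt=0.02, max_speed=0.05):
    """One control tick: move so the measured contact force approaches the desired force.
    Forces in newtons (negative = pushing down), positions in metres."""
    force_error = desired_force - measured_force
    velocity = admittance * force_error                  # bigger error -> move faster
    velocity = max(-max_speed, min(max_speed, velocity))
    return z_position + velocity * dt

if __name__ == "__main__":
    # toy ground model: reaction force grows with insertion depth (stiffness 2000 N/m)
    z, ground_level = 0.0, 0.0
    for _ in range(200):
        depth = max(0.0, ground_level - z)
        measured = -2000.0 * depth                       # reaction force from the soil
        z = insertion_step(z, measured)
    print(f"settled depth: {ground_level - z:.3f} m")    # ~0.015 m for 30 N at 2000 N/m
```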

The last time we saw Boston Dynamics robots flipping switches and turning valves I think might have been during the DRC in 2015, when they had expert robot operators with control over every degree of freedom. How are things different now with Spot, and will non-experts in the commercial space really be able to get the robot to do useful tasks?

A lot of the things, like “pick the stuff up in the room” or “turn that switch,” can all be done by a lightly trained operator using just the tablet interface. If you want to actually command all of Spot’s arm degrees of freedom, you can do that—not through the tablet, but the API does expose all of it. That’s actually a notable difference from the base robot; we’ve never opened up the part of the API that lets you command individual leg degrees of freedom, because we don’t think it’s productive for someone to do that. The arm is a little bit different. There are a lot of smart people working on arm motion planning algorithms, and maybe you want to plan your arm trajectory in a super precise way and then do a DRC-style interface where you click to approve it. You can do all that through the API if you want, but fundamentally, it’s also user friendly. It follows our general API design philosophy of giving you the highest level pieces of the toolbox that will enable you to solve a complex problem that we haven't thought of.

Looking back on it now, it’s really cool to see, after so many years, robots do the stuff that Gill Pratt was excited about kicking off with the DRC. And now it’s just a thing you can buy.

Is Spot’s arm safe?

You should follow the same safety rules that you’d follow when working with Spot normally, and that’s that you shouldn’t get within two meters of the robot when it’s powered on. Spot is not a cobot. You shouldn’t hug it. Fundamentally, the places where the robot is the most valuable are places where people don’t want to be, or shouldn’t be.

We’ve seen how people reacted to earlier videos of Spot using its arm—can you help us set some reasonable expectations for what this means for Spot?

You know, it gets right back to the normal assumptions about our robots that people make that aren’t quite reality. All of this manipulation work we’re doing— the robot’s really acting as a tool. Even if it’s an autonomous behavior, it’s a tool. The robot is digging a hole because it’s got a set of instructions that say “apply this much force over this much distance here, here, and here.”

It’s not digging a hole and planting a tree because it loves trees, as much as I’d love to build a robot that works like that.

Photo: Boston Dynamics

There isn’t too much to say about the dock, except that it’s a requirement for making Spot long-term autonomous. The uncomfortable looking charging contacts that Spot impales itself on also include hardwired network connectivity, which is important because Spot often comes back home with a huge amount of data that all needs to be offloaded and processed. Docking and undocking are autonomous— as soon as the robot sees the fiducial markers on the dock, auto docking is enabled and it takes one click to settle the robot down.
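
The fiducial-triggered behavior is simple enough to sketch; the version below uses OpenCV's legacy ArUco module (opencv-contrib-python 4.6 or earlier) as a stand-in for fiducial detection, and the marker ID and robot calls are placeholders rather than anything from the Spot SDK.

```python
# Bare-bones "enable docking when the dock fiducial is visible" loop, sketched with the
# legacy cv2.aruco interface (opencv-contrib-python <= 4.6); Spot's own fiducials and SDK
# calls are different, so treat every name below as illustrative.
import cv2

DOCK_MARKER_ID = 520                      # placeholder ID for the dock's fiducial
ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)

def dock_fiducial_visible(frame_bgr) -> bool:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = cv2.aruco.detectMarkers(gray, ARUCO_DICT)
    return ids is not None and DOCK_MARKER_ID in ids.flatten()

def docking_loop(camera, robot):
    while True:
        ok, frame = camera.read()          # e.g. a cv2.VideoCapture
        if ok and dock_fiducial_visible(frame):
            robot.begin_auto_dock()        # hypothetical one-click docking call
            break
```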

During a brief remote demo, we also learned some other interesting things about Spot’s updated remote interface. It’s very latency tolerant, since you don’t have to drive the robot directly (although you can if you want to). Click a point on the camera view and Spot will move there autonomously while avoiding obstacles, meaning that even if you’re dealing with seconds of lag, the robot will continue making safe progress. This will be especially important if (when?) Spot starts exploring the Moon.

The remote interface also has an option to adjust how close Spot can get to obstacles, or to turn the obstacle avoidance off altogether. The latter functionality is useful if Spot sees something as an obstacle that really isn’t, like a curtain, while the former is useful if the robot is operating in an environment where it needs to give an especially wide berth to objects that could be dangerous to run into. “The robot’s not perfect—robots will never be perfect,” Jackowski reminds us, which is something we really (seriously) appreciate hearing from folks working on powerful, dynamic robots. “No matter how good the robot is, you should always de-risk as much as possible.”

Another part of that de-risking is having the user let Spot know when it’s about to go up or down some stairs by putting it into “Stair Mode” with a toggle switch in the remote interface. Stairs are still a challenge for Spot, and Stair Mode slows the robot down and encourages it to pitch its body more aggressively to get a better view of the stairs. You’re encouraged to use stair mode, and also encouraged to send Spot up and down stairs with its “head” pointing up the stairs both ways, but these are not requirements for stair navigation—if you want to, you can send Spot down stairs head first without putting it in stair mode. Jackowski says that eventually, Spot will detect stairways by itself even when not in stair mode and adjust itself accordingly, but for now, that de-risking is solidly in the hands of the user.

Spot’s sensor payload, which is what we were trying out for the demo, provided a great opportunity for us to hear Spot STOMP STOMP STOMPING all over the place, which was also an opportunity for us to ask Jackowski why they can’t make Spot a little quieter. “It’s advantageous for Spot to step a little bit hard for the same reason it’s advantageous for you to step a little bit hard if you’re walking around blindfolded—that reason is that it really lets you know where the ground is, particularly when you’re not sure what to expect.” He adds, “It’s all in the name of robustness— the robot might be a little louder, but it’s a little more sure of its footing.”

Boston Dynamics isn’t yet ready to disclose the price of an arm-equipped Spot, but if you’re a potential customer, now is the time to contact the Boston Dynamics sales team to ask them about it. As a reminder, the base model of Spot costs US $74,500, with extra sensing or compute adding a substantial premium on top of that.

There will be a livestream launch event taking place at 11am ET today, during which Boston Dynamics’ CEO Robert Playter, VP of Marketing Michael Perry, and other folks from Boston Dynamics will make presentations on this new stuff. It’ll be live at this link, or you can watch it below.

#438012 Video Friday: These Robots Have Made 1 ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
Let us know if you have suggestions for next week, and enjoy today's videos.

We're proud to announce Starship Delivery Robots have now completed 1,000,000 autonomous deliveries around the world. We were unsure where the one millionth delivery was going to take place, as there are around 15-20 service areas open globally, all with robots doing deliveries every minute. In the end it took place in Bowling Green, Ohio, to a student called Annika Keeton, who is a freshman studying pre-health Biology at BGSU. Annika is now part of Starship’s history!

[ Starship ]

I adore this little DIY walking robot: with modular feet and little dials that let you easily adjust the walking parameters, it's an affordable kit that's way more nuanced than most.

It's called Bakiwi, and it costs €95. A squee cover made from feathers or fur is an extra €17. Here's a more serious look at what it can do:

[ Bakiwi ]

Thanks Oswald!

Savva Morozov, an AeroAstro junior, works on autonomous navigation for the MIT mini cheetah robot and reflects on the value of a crowded Infinite Corridor.

[ MIT ]

The world's most advanced haptic feedback gloves just got a huge upgrade! HaptX Gloves DK2 achieves a level of realism that other haptic devices can't match. Whether you’re training your workforce, designing a new product, or controlling robots from a distance, HaptX Gloves make it feel real.

They're the only gloves with true-contact haptics: patented technology displaces your skin the same way a real object would, with 133 points of tactile feedback per hand for full palm and fingertip coverage. HaptX Gloves DK2 feature the industry's most powerful force feedback, ~2X the strength of other force feedback gloves. They're also the most accurate motion tracking gloves, with 30 tracked degrees of freedom, sub-millimeter precision, no perceivable latency, and no occlusion.

[ HaptX ]

Yardroid is an outdoor robot “guided by computer vision and artificial intelligence” that seems like it can do almost everything.

These are a lot of autonomous capabilities, but so far, we've only seen the video. So, best not to get too excited until we know more about how it works.

[ Yardroid ]

Thanks Dan!

Since, as far as we know, Pepper can't spread COVID, it had a busy year.

I somehow missed seeing that chimpanzee magic show, but here it is:

[ Simon Pierro ] via [ SoftBank Robotics ]

In spite of the pandemic, Professor Hod Lipson’s Robotics Studio persevered and even thrived— learning to work on global teams, to develop protocols for sharing blueprints and code, and to test, evaluate, and refine their designs remotely. Equipped with a 3D printer and a kit of electronics prototyping equipment, our students engineered bipedal robots that were conceptualized, fabricated, programmed, and endlessly iterated around the globe in bedrooms, kitchens, backyards, and any other makeshift laboratory you can imagine.

[ Hod Lipson ]

Thanks Fan!

We all know how much quadrupeds love ice!

[ Ghost Robotics ]

We took the opportunity of the last storm to put the Warthog in the snow of Université Laval. Enjoy!

[ Norlab ]

They've got a long way to go, but autonomous indoor firefighting drones seem like a fantastic idea.

[ CTU ]

Individual manipulators are limited by their vertical total load capacity. This places a fundamental limit on the weight of loads that a single manipulator can move. Cooperative manipulation with two arms has the potential to increase the net weight capacity of the overall system. However, it is critical that proper load sharing takes place between the two arms. In this work, we outline a method that utilizes mechanical intelligence in the form of a whiffletree.

And your word of the day is whiffletree, which is “a mechanism to distribute force evenly through linkages.”
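
That force distribution is just lever arithmetic: a load applied at the bar's hitch point splits between the two ends in inverse proportion to their distances from the hitch, so the hitch position sets how the two manipulators share the load. A quick sanity check with illustrative numbers:

```python
# Lever arithmetic behind a whiffletree: a load applied at the hitch point of a rigid bar
# splits between the two ends in inverse proportion to their distances from the hitch,
# so shifting the hitch tunes how much each manipulator carries. (Illustrative numbers.)
def whiffletree_split(load_n, dist_to_arm_a_m, dist_to_arm_b_m):
    total = dist_to_arm_a_m + dist_to_arm_b_m
    # moment balance about the hitch: F_a * d_a = F_b * d_b, with F_a + F_b = load
    force_a = load_n * dist_to_arm_b_m / total
    force_b = load_n * dist_to_arm_a_m / total
    return force_a, force_b

if __name__ == "__main__":
    # a 60 N payload with the hitch placed 0.2 m from arm A and 0.3 m from arm B
    print(whiffletree_split(60.0, 0.2, 0.3))   # -> (36.0, 24.0): arm A takes the larger share
```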

[ DART Lab ]

Thanks Raymond!

Some highlights of robotic projects at FZI in 2020, all using ROS.

[ FZI ]

Thanks Fan!

iRobot CEO Colin Angle threatens my job by sharing some cool robots.

[ iRobot ]

A fascinating new talk from Henry Evans on robotic caregivers.

[ HRL ]

The ANA Avatar XPRIZE semifinals selection submission for Team AVATRINA. The setting is a mock clinic, with the patient sitting on a wheelchair and nurse having completed an initial intake. Avatar enters the room controlled by operator (Doctor). A rolling tray table with medical supplies (stethoscope, pulse oximeter, digital thermometer, oxygen mask, oxygen tube) is by the patient’s side. Demonstrates head tracking, stereo vision, fine manipulation, bimanual manipulation, safe impedance control, and navigation.

[ Team AVATRINA ]

This five-year-old talk from Mikell Taylor, who wrote for us a while back and is now at Amazon Robotics, is entitled “Nobody Cares About Your Robot.” For better or worse, it really doesn't sound like it was written five years ago.

Robotics for the consumer market – Mikell Taylor from Scott Handsaker on Vimeo.

[ Mikell Taylor ]

Fall River Community Media presents this wonderful guy talking about his love of antique robot toys.

If you enjoy this kind of slow media, Fall River also has weekly Hot Dogs Cool Cats adoption profiles that are super relaxing to watch.

[ YouTube ]
