Tag Archives: help

#438294 Video Friday: New Entertainment Robot ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30–June 5, 2021 – Xi'an, China
Let us know if you have suggestions for next week, and enjoy today's videos.

Engineered Arts' latest Mesmer entertainment robot is Cleo. It sings, gesticulates, and even does impressions.

[ Engineered Arts ]

I do not know what this thing is or what it's saying but Panasonic is going to be selling them and I will pay WHATEVER. IT. COSTS.

Slightly worrisome is that Google Translate persistently thinks that part of the description involves “sleeping and flatulence.”

[ Panasonic ] via [ RobotStart ]

Spot Enterprise is here to help you safely ignore every alarm that goes off at work while you're snug at home in your jammies drinking cocoa.

That Spot needs a bath.

If you missed the launch event (with more on the arm), check it out here:

[ Boston Dynamics ]

PHASA-35, a 35-meter-wingspan solar-electric aircraft, successfully completed its maiden flight in Australia in February 2020. Designed to operate unmanned in the stratosphere, above the weather and conventional air traffic, PHASA-35 offers a persistent and affordable alternative to satellites combined with the flexibility of an aircraft, which could be used for a range of valuable applications including forest fire detection and maritime surveillance.

[ BAE Systems ]

As part of the Army Research Lab’s (ARL) Robotics Collaborative Technology Alliance (RCTA), we are developing new planning and control algorithms for quadrupedal robots. The goal of our project is to equip the robot LLAMA, developed by NASA JPL, with the skills it needs to move at operational tempo over difficult terrain to keep up with a human squad. This requires innovative perception, planning, and control techniques to make the robot both precise in execution for navigating technical obstacles and robust enough to reject disturbances and recover from unknown errors.

[ IHMC ]

Watch what happens to this drone when it tries to install a bird diverter on a high voltage power line:

[ GRVC ]

Soldiers navigate a wide variety of terrains to successfully complete their missions. As human/agent teaming and artificial intelligence advance, the same flexibility will be required of robots to maneuver across diverse terrain and become effective combat teammates.

[ Army ]

The goal of the GRIFFIN project is to create something like a robotic bird, which almost certainly won't look like this concept rendering.

While I think this research is great, at what point is it in fact easier to just, you know, train an actual bird?

[ GRIFFIN ]

Paul Newman narrates this video from two decades ago, which is a pretty neat trick.

[ Oxford Robotics Institute ]

The first step towards a LEGO-based robotic McMuffin creator is cracking and separating eggs.

[ Astonishing Studios ] via [ BB ]

Some interesting soft robotics projects at the University of Southern Denmark.

[ SDU ]

Chong Liu introduces Creature_02, his final presentation for Hod Lipson's Robotics Studio course at Columbia.

[ Chong Liu ]

The world needs more robot blimps.

[ Lab INIT Robots ]

Finishing its duty early, the KR CYBERTECH nano uses this time to play basketball.

[ Kuka ]

senseFly has a new aerial surveying drone that they call “affordable,” although they don't say what the price is.

[ senseFly ]

In summer 2020, several science teams from ETH Zurich took part in “Art Safiental” in the mountains of Graubünden. After the scientists packed their hiking gear and their robots, their only mission was “over hill and dale to the summit.” How difficult is it to reach the summit with a legged robot and an exoskeleton? What is the relationship between synesthetic dance and robotics? And how do hikers react to these projects?

[ Rienerschnitzel Films ]

Thanks Robert!

Karen Liu: How robots perceive the physical world. A specialist in computer animation expounds upon her rapidly evolving specialty, known as physics-based simulation, and how it is helping robots become more physically aware of the world around them.

[ Stanford ]

This week's UPenn GRASP On Robotics seminar is by Maria Chiara Carrozza from Scuola Superiore Sant’Anna, on “Biorobotics for Personal Assistance – Translational Research and Opportunities for Human-Centered Developments.”

The seminar will focus on the opportunities and challenges offered by the digital transformation of healthcare, which was accelerated by the COVID-19 pandemic. In this framework, rehabilitation and social robotics can play a fundamental role as enabling technologies for providing innovative therapies and services to patients, even at home or in remote environments.

[ UPenn ]

Posted in Human Robots

#438080 Boston Dynamics’ Spot Robot Is Now ...

Boston Dynamics has been working on an arm for its Spot quadruped for at least five years now. There have been plenty of teasers along the way, including this 45-second clip from early 2018 of Spot using its arm to open a door, which at 85 million views seems to be Boston Dynamics’ most popular video ever by a huge margin. Obviously, there’s a substantial amount of interest in turning Spot from a highly dynamic but mostly passive sensor platform into a mobile manipulator that can interact with its environment.

As anyone who’s done mobile manipulation will tell you, actually building an arm is just the first step—the really tricky part is getting that arm to do exactly what you want it to do. In particular, Spot’s arm needs to be able to interact with the world with some amount of autonomy in order to be commercially useful, because you can’t expect a human (remote or otherwise) to spend all their time positioning individual joints or whatever to pick something up. So the real question about this arm is whether Boston Dynamics has managed to get it to a point where it’s autonomous enough that users with relatively little robotics experience will be able to get it to do useful tasks without driving themselves nuts.

Today, Boston Dynamics is announcing commercial availability of the Spot arm, along with some improved software called Scout plus a self-charging dock that’ll give the robot even more independence. And to figure out exactly what Spot’s new arm can do, we spoke with Zachary Jackowski, Spot Chief Engineer at Boston Dynamics.

Although Boston Dynamics’ focus has been on dynamic mobility and legged robots, the company has been working on manipulation for a very long time. We first saw an arm prototype on an early iteration of Spot in 2016, where it demonstrated some impressive functionality, including loading a dishwasher and fetching a beer in a way that only resulted in a minor catastrophe. But we’re guessing that Spot’s arm can trace its history back to BigDog’s crazy powerful hydraulic face-arm, which was causing mayhem with cinder blocks back in 2013:

Spot’s arm is not quite that powerful (it has to drag cinder blocks along the ground rather than fling them into space), but you can certainly see the resemblance. Here’s the video that Boston Dynamics posted yesterday to introduce Spot’s new arm:

A couple of things jumped out from this video right away. First, Spot is doing whole body manipulation with its arm, as opposed to just acting as a four-legged base that brings the arm where it needs to go. Planning looks to be very tightly integrated, such that if you ask the robot to manipulate an object, its arm, legs, and torso all work together to optimize that manipulation. Also, when Spot flips that electrical switch, you see the robot successfully grasp the switch, and then reposition its body in a way that looks like it provides better leverage for the flip, which is a neat trick. It looks like it may be able to use the strength of its legs to augment the strength of its arm, as when it’s dragging the cinder block around, which is surely an homage to BigDog. The digging of a hole is particularly impressive. But again, the real question is how much of this is autonomous or semi-autonomous in a way that will be commercially useful?

Before we get to our interview with Spot Chief Engineer Zack Jackowski, it’s worth watching one more video that Boston Dynamics shared with us:

This is notable because Spot is opening a door that’s not ADA compliant, and the robot is doing it with a simple two-finger gripper. Most robots you see interacting with doors rely on ADA compliant hardware, meaning (among other things) a handle that can be pushed rather than a knob that has to be twisted, because it’s much more challenging for a robot to grasp and twist a smooth round door knob than it is to just kinda bash down on a handle. That capability, combined with Spot being able to pass through a spring-loaded door, potentially opens up a much wider array of human environments to the robot, and that’s where we started our conversation with Jackowski.

IEEE Spectrum: At what point did you decide that for Spot’s arm to be useful, it had to be able to handle round door knobs?

Zachary Jackowski: We're like a lot of roboticists, where someone in a meeting about manipulation would say “it's time for the round doorknob” and people would start groaning a little bit. But the reality is that, in order to make a robot useful, you have to engage with the environments that users have. Spot’s arm uses a very simple gripper—it’s a one degree of freedom gripper, but a ton of thought has gone into all of the fine geometric contours of it such that it can grab that ADA compliant lever handle, and it’ll also do an enclosing grasp around a round door knob. The major point of a robot like Spot is to engage with the environment you have, and so you can’t cut out stuff like round door knobs.

We're thrilled to be launching the arm and getting it out with users and to have them start telling us what doors it works really well on, and what they're having trouble with. And we're going to be working on rapidly improving all this stuff. We went through a few campaigns of like, “this isn’t ready until we can open every single door at Boston Dynamics!” But every single door at Boston Dynamics and at our test lab is a small fraction of all the doors in the world. So we're prepared to learn a lot this year.

When we see Spot open a door, or when it does those other manipulation behaviors in the launch video, how much of that is autonomous, how much is scripted, and to what extent is there a human in the loop?

All of the scenes where the robot does a pick, like the snow scene or the laundry scene, that is actually an almost fully integrated autonomous behavior that has a bit of a script wrapped around it. We trained a detector for an object, and the robot is identifying that object in the environment, picking it, and putting it in the bin all autonomously. The scripted part of that is telling the robot to perform a series of picks.

One of the things that we’re excited about, and that roboticists have been excited about going back probably all the way to the DRC, is semi-autonomous manipulation. And so we have modes built into the interface where if you see an object that you want the robot to grab, all you have to do is tap that object on the screen, and the robot will walk up to it, use the depth camera in its gripper to capture a depth map, and plan a grasp on its own in real time. That’s all built-in, too.
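
If you’re wondering what that tap-to-grasp flow looks like in code, here’s a minimal sketch of the general idea: back-project the tapped pixel through the gripper’s depth camera, then hand the result to the robot’s own planners. This is our illustration, not Boston Dynamics’ software, and every helper function in it is a hypothetical placeholder.

```python
import numpy as np

def pixel_to_point(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a tapped pixel (u, v) and its depth into the camera frame."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

def tap_to_grasp(tap_uv, depth_image, intrinsics, walk_to, plan_grasp, execute):
    """Hypothetical tap-to-grasp loop: the operator only supplies a pixel.

    walk_to, plan_grasp, and execute stand in for the robot's own autonomy;
    they are placeholders, not real Spot SDK calls.
    """
    u, v = tap_uv
    depth_m = float(depth_image[v, u])       # depth map from the gripper camera
    target = pixel_to_point(u, v, depth_m, *intrinsics)
    walk_to(target)                          # robot approaches the object on its own
    grasp = plan_grasp(depth_image, target)  # grasp planned in real time from depth
    execute(grasp)                           # legs, torso, and arm coordinate the pick
```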

The jump rope—robots don’t just go and jump rope on their own. We scripted an arm motion to move the rope, and wrote a script using our API to coordinate all three robots. Drawing “Boston Dynamics” in chalk in our parking lot was scripted also. One of our engineers wrote a really cool G-code interpreter that vectorizes graphics so that Spot can draw them.
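
As a side note, here’s a toy example of the kind of scripting that robot drawing involves: turning vector paths into G-code-style moves. It is purely illustrative and is not the interpreter Jackowski mentions.

```python
def polylines_to_gcode(polylines, feed_mm_min=1200):
    """Emit simple G-code from vector paths (lists of (x, y) points, in mm).

    A toy emitter to illustrate scripting drawing motions; not the tool
    described in the interview.
    """
    lines = []
    for path in polylines:
        (x0, y0), rest = path[0], path[1:]
        lines.append(f"G0 X{x0:.2f} Y{y0:.2f}")  # rapid move with the chalk lifted
        for x, y in rest:
            lines.append(f"G1 X{x:.2f} Y{y:.2f} F{feed_mm_min}")  # draw a segment
    return "\n".join(lines)

print(polylines_to_gcode([[(0, 0), (100, 0), (100, 50)]]))
```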

So for an end user, if you wanted Spot to autonomously flip some switches for you, you’d just have to train Spot on your switches, and then Spot could autonomously perform the task?

There are a couple of ways that task could break down depending on how you’re interfacing with the robot. If you’re a tablet user, you’d probably just identify the switch yourself on the tablet’s screen, and the robot will figure out the grasp, and grasp it. Then you’ll enter a constrained manipulation mode on the tablet, and the robot will be able to actuate the switch. But the robot will take care of the complicated controls aspects, like figuring out how hard it has to pull, the center of rotation of the switch, and so on.
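
Jackowski doesn’t say how Spot estimates those quantities, but one common way to recover the center of rotation of a lever or valve is to fit a circle to the path the gripper traces while it complies with the fixture. Here is a minimal sketch of that idea using an algebraic least-squares circle fit; it is our illustration, not Boston Dynamics’ implementation.

```python
import numpy as np

def fit_center_of_rotation(points):
    """Estimate the center and radius of the arc traced by the gripper.

    points: (N, 2) array of gripper positions in the plane of the switch,
    recorded while the robot gently actuates it. Uses the algebraic (Kasa)
    circle fit: x^2 + y^2 + D*x + E*y + F = 0.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    center = np.array([-D / 2.0, -E / 2.0])
    radius = float(np.sqrt(center @ center - F))
    return center, radius

# Example: noisy samples along a 90-degree arc of a lever
theta = np.linspace(0.0, np.pi / 2, 20)
true_center, true_radius = np.array([0.30, 0.10]), 0.05
arc = true_center + true_radius * np.column_stack([np.cos(theta), np.sin(theta)])
arc += np.random.normal(scale=1e-3, size=arc.shape)
print(fit_center_of_rotation(arc))  # approximately ([0.30, 0.10], 0.05)
```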

The video of Spot digging was pretty cool—how did that work?

That’s mostly a scripted behavior. There are some really interesting control systems topics in there, like how you’d actually do the right kinds of force control while you insert the trowel into the dirt, and how to maintain robot stability while you do it. The higher level task of how to make a good hole in the dirt—that’s scripted. But the part of the problem that’s actually digging, you need the right control system to actually do that, or you’ll dig your trowel into the ground and flip your robot over.
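
For a sense of what “the right kinds of force control” can look like in its simplest form, here is a one-axis, admittance-style sketch for pressing a tool into soil: command a velocity proportional to the force error, so the arm backs off when the ground resists and advances when it gives way. The gains are illustrative assumptions, not Spot’s controller.

```python
def insertion_step(measured_force_n, target_force_n=30.0,
                   admittance_m_per_n_s=2e-4, dt_s=0.02):
    """One control tick of a 1-D admittance law along the insertion axis.

    Returns the incremental displacement command for this cycle; all numbers
    here are illustrative, not taken from Spot.
    """
    force_error = target_force_n - measured_force_n
    velocity = admittance_m_per_n_s * force_error  # m/s along the trowel axis
    return velocity * dt_s                         # meters to advance this cycle
```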

The last time we saw Boston Dynamics robots flipping switches and turning valves I think might have been during the DRC in 2015, when they had expert robot operators with control over every degree of freedom. How are things different now with Spot, and will non-experts in the commercial space really be able to get the robot to do useful tasks?

A lot of the things, like “pick the stuff up in the room,” or “turn that switch,” can all be done by a lightly trained operator using just the tablet interface. If you want to actually command all of Spot’s arm degrees of freedom, you can do that—not through the tablet, but the API does expose all of it. That’s actually a notable difference from the base robot; we’ve never opened up the part of the API that lets you command individual leg degrees of freedom, because we don’t think it’s productive for someone to do that. The arm is a little bit different. There are a lot of smart people working on arm motion planning algorithms, and maybe you want to plan your arm trajectory in a super precise way and then do a DRC-style interface where you click to approve it. You can do all that through the API if you want, but fundamentally, it’s also user friendly. It follows our general API design philosophy of giving you the highest level pieces of the toolbox that will enable you to solve a complex problem that we haven't thought of.

Looking back on it now, it’s really cool to see, after so many years, robots do the stuff that Gill Pratt was excited about kicking off with the DRC. And now it’s just a thing you can buy.

Is Spot’s arm safe?

You should follow the same safety rules that you’d follow when working with Spot normally, and that’s that you shouldn’t get within two meters of the robot when it’s powered on. Spot is not a cobot. You shouldn’t hug it. Fundamentally, the places where the robot is the most valuable are places where people don’t want to be, or shouldn’t be.

We’ve seen how people reacted to earlier videos of Spot using its arm—can you help us set some reasonable expectations for what this means for Spot?

You know, it gets right back to the normal assumptions about our robots that people make that aren’t quite reality. All of this manipulation work we’re doing— the robot’s really acting as a tool. Even if it’s an autonomous behavior, it’s a tool. The robot is digging a hole because it’s got a set of instructions that say “apply this much force over this much distance here, here, and here.”

It’s not digging a hole and planting a tree because it loves trees, as much as I’d love to build a robot that works like that.

Photo: Boston Dynamics

There isn’t too much to say about the dock, except that it’s a requirement for making Spot long-term autonomous. The uncomfortable looking charging contacts that Spot impales itself on also include hardwired network connectivity, which is important because Spot often comes back home with a huge amount of data that all needs to be offloaded and processed. Docking and undocking are autonomous— as soon as the robot sees the fiducial markers on the dock, auto docking is enabled and it takes one click to settle the robot down.

During a brief remote demo, we also learned some other interesting things about Spot’s updated remote interface. It’s very latency tolerant, since you don’t have to drive the robot directly (although you can if you want to). Click a point on the camera view and Spot will move there autonomously while avoiding obstacles, meaning that even if you’re dealing with seconds of lag, the robot will continue making safe progress. This will be especially important if (when?) Spot starts exploring the Moon.

The remote interface also has an option to adjust how close Spot can get to obstacles, or to turn the obstacle avoidance off altogether. The latter functionality is useful if Spot sees something as an obstacle that really isn’t, like a curtain, while the former is useful if the robot is operating in an environment where it needs to give an especially wide berth to objects that could be dangerous to run into. “The robot’s not perfect—robots will never be perfect,” Jackowski reminds us, which is something we really (seriously) appreciate hearing from folks working on powerful, dynamic robots. “No matter how good the robot is, you should always de-risk as much as possible.”

Another part of that de-risking is having the user let Spot know when it’s about to go up or down some stairs by putting it into “Stair Mode” with a toggle switch in the remote interface. Stairs are still a challenge for Spot, and Stair Mode slows the robot down and encourages it to pitch its body more aggressively to get a better view of the stairs. You’re encouraged to use stair mode, and also encouraged to send Spot up and down stairs with its “head” pointing up the stairs both ways, but these are not requirements for stair navigation—if you want to, you can send Spot down stairs head first without putting it in stair mode. Jackowski says that eventually, Spot will detect stairways by itself even when not in stair mode and adjust itself accordingly, but for now, that de-risking is solidly in the hands of the user.

Spot’s sensor payload, which is what we were trying out for the demo, provided a great opportunity for us to hear Spot STOMP STOMP STOMPING all over the place, which was also an opportunity for us to ask Jackowski why they can’t make Spot a little quieter. “It’s advantageous for Spot to step a little bit hard for the same reason it’s advantageous for you to step a little bit hard if you’re walking around blindfolded—that reason is that it really lets you know where the ground is, particularly when you’re not sure what to expect.” He adds, “It’s all in the name of robustness— the robot might be a little louder, but it’s a little more sure of its footing.”

Boston Dynamics isn’t yet ready to disclose the price of an arm-equipped Spot, but if you’re a potential customer, now is the time to contact the Boston Dynamics sales team to ask them about it. As a reminder, the base model of Spot costs US $74,500, with extra sensing or compute adding a substantial premium on top of that.

There will be a livestream launch event taking place at 11am ET today, during which Boston Dynamics’ CEO Robert Playter, VP of Marketing Michael Perry, and other folks from Boston Dynamics will make presentations on this new stuff. It’ll be live at this link, or you can watch it below.

Posted in Human Robots

#438014 Meet Blueswarm, a Smart School of ...

Anyone who’s seen an undersea nature documentary has marveled at the complex choreography that schooling fish display, a darting, synchronized ballet with a cast of thousands.

Those instinctive movements have inspired researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), and the Wyss Institute for Biologically Inspired Engineering. The results could improve the performance and dependability of not just underwater robots, but other vehicles that require decentralized locomotion and organization, such as self-driving cars and robotic space exploration.

The fish collective called Blueswarm was created by a team led by Radhika Nagpal, whose lab is a pioneer in self-organizing systems. The oddly adorable robots can sync their movements like biological fish, taking cues from their plastic-bodied neighbors with no external controls required. Nagpal told IEEE Spectrum that this marks a milestone, demonstrating complex 3D behaviors with implicit coordination in underwater robots.

“Insights from this research will help us develop future miniature underwater swarms that can perform environmental monitoring and search in visually-rich but fragile environments like coral reefs,” Nagpal said. “This research also paves a way to better understand fish schools, by synthetically recreating their behavior.”

The research is published in Science Robotics, with Florian Berlinger as first author. Berlinger said the “Bluebot” robots integrate a trio of blue LED lights, a lithium-polymer battery, a pair of cameras, a Raspberry Pi computer and four controllable fins within a 3D-printed hull. The fish-lens cameras detect the LEDs of their fellow swimmers, and apply a custom algorithm to calculate distance, direction and heading.

Based on that simple production and detection of LED light, the team proved that Blueswarm could self-organize behaviors, including aggregation, dispersal and circle formation—basically, swimming in a clockwise synchronization. Researchers also simulated a successful search mission, an autonomous Finding Nemo. Using their dispersion algorithm, the robot school spread out until one could detect a red light in the tank. Its blue LEDs then flashed, triggering the aggregation algorithm to gather the school around it. Such a robot swarm might prove valuable in search-and-rescue missions at sea, covering miles of open water and reporting back to its mates.
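
The published controllers are more sophisticated, but the flavor of this implicit coordination can be sketched with simple vector rules driven only by where each robot sees its neighbors’ LEDs. The code below is our own illustrative Python, not the Blueswarm software.

```python
import numpy as np

def neighbor_offsets(detections):
    """detections: list of (distance, bearing) estimates to neighbors' LEDs,
    computed onboard from the fish-lens cameras."""
    return np.array([[d * np.cos(b), d * np.sin(b)] for d, b in detections])

def dispersal_step(detections, gain=0.5):
    """Swim away from the average neighbor position to spread the school out."""
    offsets = neighbor_offsets(detections)
    if offsets.size == 0:
        return np.zeros(2)
    return -gain * offsets.mean(axis=0)

def aggregation_step(detections, gain=0.5):
    """Swim toward the average neighbor position to gather the school."""
    return -dispersal_step(detections, gain)

def search_step(detections, sees_red_target, sees_flashing_neighbor, flash_leds):
    """One control tick of the search mission: disperse until someone finds the
    red light, then converge on the robot that is flashing its LEDs."""
    if sees_red_target:
        flash_leds()            # recruit the rest of the school
        return np.zeros(2)      # hold station at the target
    if sees_flashing_neighbor:
        return aggregation_step(detections)
    return dispersal_step(detections)
```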

“Each Bluebot implicitly reacts to its neighbors’ positions,” Berlinger said. The fish—RoboCod, perhaps?—also integrate a Wifi module to allow uploading new behaviors remotely. The lab’s previous efforts include a 1,000-strong army of “Kilobots,” and a robotic construction crew inspired by termites. Both projects operated in two-dimensional space. But a 3D environment like air or water posed a tougher challenge for sensing and movement.

In nature, Berlinger notes, there’s no scaly CEO to direct the school’s movements. Nor do fish communicate their intentions. Instead, so-called “implicit coordination” guides the school’s collective behavior, with individual members executing high-speed moves based on what they see their neighbors doing. That decentralized, autonomous organization has long fascinated scientists, including in robotics.

“In these situations, it really benefits you to have a highly autonomous robot swarm that is self-sufficient. By using implicit rules and 3D visual perception, we were able to create a system with a high degree of autonomy and flexibility underwater where things like GPS and WiFi are not accessible.”

Berlinger adds the research could one day translate to anything that requires decentralized robots, from self-driving cars and Amazon warehouse vehicles to exploration of faraway planets, where poor latency makes it impossible to transmit commands quickly. Today’s semi-autonomous cars face their own technical hurdles in reliably sensing and responding to their complex environments, including when foul weather obscures onboard sensors or road markers, or when they can’t fix position via GPS. An entire subset of autonomous-car research involves vehicle-to-vehicle (V2V) communications that could give cars a hive mind to guide individual or collective decisions— avoiding snarled traffic, driving safely in tight convoys, or taking group evasive action during a crash that’s beyond their sensory range.

“Once we have millions of cars on the road, there can’t be one computer orchestrating all the traffic, making decisions that work for all the cars,” Berlinger said.

The miniature robots could also work long hours in places that are inaccessible to humans and divers, or even large tethered robots. Nagpal said the synthetic swimmers could monitor and collect data on reefs or underwater infrastructure 24/7, and work into tiny places without disturbing fragile equipment or ecosystems.

“If we could be as good as fish in that environment, we could collect information and be non-invasive, in cluttered environments where everything is an obstacle,” Nagpal said.

Posted in Human Robots

#438006 Smellicopter Drone Uses Live Moth ...

Research into robotic sensing has, understandably I guess, been very human-centric. Most of us navigate and experience the world visually and in 3D, so robots tend to get covered with things like cameras and lidar. Touch is important to us, as is sound, so robots are getting pretty good with understanding tactile and auditory information, too. Smell, though? In most cases, smell doesn’t convey nearly as much information for us, so while it hasn’t exactly been ignored in robotics, it certainly isn’t the sensing modality of choice in most cases.

Part of the problem with smell sensing is that we just don’t have a good way of doing it, from a technical perspective. This has been a challenge for a long time, and it’s why we either bribe or trick animals like dogs, rats, and vultures into being our sensing systems for airborne chemicals. If only they’d do exactly what we wanted them to do all the time, this would be fine, but they don’t, so it’s not.

Until we get better at making chemical sensors, leveraging biology is the best we can do, and what would be ideal would be some sort of robot-animal hybrid cyborg thing. We’ve seen some attempts at remote controlled insects, but as it turns out, you can simplify things if you don’t use the entire insect, but instead just find a way to use its sensing system. Enter the Smellicopter.

There’s honestly not too much to say about the drone itself. It’s an open-source drone project called Crazyflie 2.0, with some additional off-the-shelf sensors for obstacle avoidance and stabilization. The interesting bits are a couple of passive fins that keep the drone pointed into the wind, and then the sensor, called an electroantennogram.

Image: UW

The drone’s sensor, called an electroantennogram, consists of a “single excised antenna” from a Manduca sexta hawkmoth and a custom signal processing circuit.

To make one of these sensors, you just, uh, “harvest” an antenna from a live hawkmoth. Obligingly, the moth antenna is hollow, meaning that you can stick electrodes up it. Whenever the olfactory neurons in the antenna (which is still technically alive even though it’s not attached to the moth anymore) encounter an odor that they’re looking for, they produce an electrical signal that the electrodes pick up. Plug the other ends of the electrodes into a voltage amplifier and filter, run it through an analog-to-digital converter, and you’ve got a chemical sensor that weighs just 1.5 grams and consumes only 2.7 mW of power. It’s significantly more sensitive than a conventional metal-oxide odor sensor, in a much smaller and more efficient form factor, making it ideal for drones.
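
To make that signal chain concrete, here is a toy detector for digitized electroantennogram samples: subtract a slow moving-average baseline and threshold the deflection. The window length and threshold are our own assumptions for illustration, not values from the paper.

```python
import numpy as np

def detect_odor(eag_samples_uv, fs_hz=1000.0, threshold_uv=50.0):
    """Toy odor detector for amplified, digitized antenna voltage (microvolts).

    A real pipeline would band-pass filter; here we just remove the slow
    baseline with a moving average and look for a large deflection.
    """
    x = np.asarray(eag_samples_uv, dtype=float)
    window = max(1, int(0.2 * fs_hz))  # roughly 200 ms baseline window (assumed)
    baseline = np.convolve(x, np.ones(window) / window, mode="same")
    return bool(np.any(np.abs(x - baseline) > threshold_uv))
```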

To localize an odor, the Smellicopter uses a simple bioinspired approach called crosswind casting, which involves moving laterally left and right and then forward when an odor is detected. Here’s how it works:

The vehicle takes off to a height of 40 cm and then hovers for ten seconds to allow it time to orient upwind. The smellicopter starts casting left and right crosswind. When a volatile chemical is detected, the smellicopter will surge 25 cm upwind, and then resume casting. As long as the wind direction is fairly consistent, this strategy will bring the insect or robot increasingly closer to a singular source with each surge.
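
Written as a control loop, that strategy is a small state machine. The sketch below is our paraphrase of the behavior described above; the motion primitives and the casting width are placeholders, not the Smellicopter’s actual code.

```python
def cast_and_surge(odor_detected, move_crosswind, move_upwind, hover,
                   cast_width_m=0.5, surge_m=0.25, hover_s=10.0):
    """Minimal cast-and-surge loop.

    odor_detected, move_crosswind, move_upwind, and hover stand in for the
    drone's sensing and flight primitives; cast_width_m is an assumed value.
    """
    hover(hover_s)                  # take off and let the passive fins face upwind
    direction = 1                   # +1 = cast right, -1 = cast left
    while True:
        if odor_detected():
            move_upwind(surge_m)    # surge 25 cm toward the source
        else:
            move_crosswind(direction * cast_width_m)
            direction *= -1         # no plume here: cast back the other way
```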

Since odors are airborne, they need a bit of a breeze to spread very far, and the Smellicopter won’t be able to detect them unless it’s downwind of the source. But, that’s just how odors work— even if you’re right next to the source, if the wind is blowing from you towards the source rather than the other way around, you might not catch a whiff of it.

There are a few other constraints to keep in mind with this sensor as well. First, rather than detecting something useful (like explosives), it’s going to detect the smells of pretty flowers, because moths like pretty flowers. Second, the antenna will literally go dead on you within a couple hours, since it only functions while its tissues are alive and metaphorically kicking. Interestingly, it may be possible to use CRISPR-based genetic modification to breed moths with antennae that do respond to useful smells, which would be a neat trick, and we asked the researchers—Melanie Anderson, a doctoral student of mechanical engineering at the University of Washington, in Seattle; Thomas Daniel, a UW professor of biology; and Sawyer Fuller, a UW assistant professor of mechanical engineering—about this, along with some other burning questions, via email.

IEEE Spectrum, asking the important questions first: So who came up with “Smellicopter”?

Melanie Anderson: Tom Daniel coined the term “Smellicopter.” Another runner-up was “OdorRotor”!

In general, how much better are moths at odor localization than robots?

Melanie Anderson: Moths are excellent at odor detection and odor localization and need to be in order to find mates and food. Their antennae are much more sensitive and specialized than any portable man-made odor sensor. We can't ask the moths how exactly they search for odors so well, but being able to have the odor sensitivity of a moth on a flying platform is a big step in that direction.

Tom Daniel: Our best estimate is that they outperform robotic sensing by at least three orders of magnitude.

How does the localization behavior of the Smellicopter compare to that of a real moth?

Anderson: The cast-and-surge odor search strategy is a simplified version of what we believe the moth (and many other odor searching animals) are doing. It is a reactive strategy that relies on the knowledge that if you detect odor, you can assume that the source is somewhere up-wind of you. When you detect odor, you simply move upwind, and when you lose the odor signal you cast in a cross-wind direction until you regain the signal.

Can you elaborate on the potential for CRISPR to be able to engineer moths for the detection of specific chemicals?

Anderson: CRISPR is already currently being used to modify the odor detection pathways in moth species. It is one of our future efforts to specifically use this to make the antennae sensitive to other chemicals of interest, such as the chemical scent of explosives.

Sawyer Fuller: We think that one of the strengths of using a moth's antenna, in addition to its speed, is that it may provide a path to both high chemical specificity as well as high sensitivity. By expressing a preponderance of only one or a few chemosensors, we are anticipating that a moth antenna will give a strong response only to that chemical. There are several efforts underway in other research groups to make such specific, sensitive chemical detectors. Chemical sensing is an area where biology exceeds man-made systems in terms of efficiency, small size, and sensitivity. So that's why we think that the approach of trying to leverage biological machinery that already exists has some merit.

You mention that the antennae's lifespan can be extended for a few days with ice. How feasible do you think this technology is outside of a research context?

Anderson: The antennae can be stored in tiny vials in a standard refrigerator or just with an ice pack to extend their life to about a week. Additionally, the process for attaching the antenna to the electrical circuit is a teachable skill. It is definitely feasible outside of a research context.

Considering the trajectory that sensor development is on, how long do you think that this biological sensor system will outperform conventional alternatives?

Anderson: It's hard to speak toward what will happen in the future, but currently, the moth antenna still stands out among any commercially-available portable sensors.

There have been some experiments with cybernetic insects; what are the advantages and disadvantages of your approach, as opposed to (say) putting some sort of tracking system on a live moth?

Daniel: I was part of a cyber insect team a number of years ago. The challenge of such research is that the animal has natural reactions to attempts to steer or control it.

Anderson: While moths are better at odor tracking than robots currently, the advantage of the drone platform is that we have control over it. We can tell it to constrain the search to a certain area, and return after it finishes searching.

What can you tell us about the health, happiness, and overall welfare of the moths in your experiments?

Anderson: The moths are cold anesthetized before the antennae are removed. They are then frozen so that they can be used for teaching purposes or in other research efforts.

What are you working on next?

Daniel: The four big efforts are (1) CRISPR modification, (2) experiments aimed at improving the longevity of the antennal preparation, (3) improved measurements of antennal electrical responses to odors combined with machine learning to see if we can classify different odors, and (4) flight in outdoor environments.

Fuller: The moth's antenna sensor gives us a new ability to sense with a much shorter latency than was previously possible with similarly-sized sensors (e.g. semiconductor sensors). What exactly a robot agent should do to best take advantage of this is an open question. In particular, I think the speed may help it to zero in on plume sources in complex environments much more quickly. Think of places like indoor settings with flow down hallways that splits out at doorways, and in industrial settings festooned with pipes and equipment. We know that it is possible to search out and find odors in such scenarios, as anybody who has had to contend with an outbreak of fruit flies can attest. It is also known that these animals respond very quickly to sudden changes in odor that is present in such turbulent, patchy plumes. Since it is hard to reduce such plumes to a simple model, we think that machine learning may provide insights into how to best take advantage of the improved temporal plume information we now have available.

Tom Daniel also points out that the relative simplicity of this project (now that the UW researchers have it all figured out, that is) means that even high school students could potentially get involved in it, even if it’s on a ground robot rather than a drone. All the details are in the paper that was just published in Bioinspiration & Biomimetics.

Posted in Human Robots