Tag Archives: Handle

#439055 Stretch Is Boston Dynamics’ Take on a ...

Today, Boston Dynamics is announcing Stretch, a mobile robot designed to autonomously move boxes around warehouses. At first glance, you might be wondering why the heck this is a Boston Dynamics robot at all, since the dynamic mobility that we associate with most of their platforms is notably absent. But the combination of strength and speed in Stretch’s arm is something we haven’t seen before in a mobile robot, and it’s what makes this a unique and potentially exciting entry into the warehouse robotics space.

Useful mobile manipulation in any environment that’s not almost entirely structured is still a significant challenge in robotics, and it requires a very difficult combination of sensing, intelligence, and dynamic motion, all of which are classic Boston Dynamics. But also classic Boston Dynamics is building really cool platforms, and only later trying to figure out a way of making them commercially viable. So why Stretch, why boxes, why now, and (the real question) why not Handle? We talk with Boston Dynamics’ Vice President of Product Engineering Kevin Blankespoor to find out.

Stretch is very explicitly a box-handling mobile robot for relatively well structured warehouses. It’s in no way designed to be the kind of generalist that many of Boston Dynamics’ other robots are. And to be fair, this is absolutely how to make a robot that’s practical and cost effective right out of the crate: Identify a task that is dull or dirty or dangerous for humans, design a robot to do that task safely and efficiently, and deploy it with the expectation that it’ll be really good at that task but not necessarily much else. This is a very different approach than a robot like Spot, where the platform came first and the practical applications came later—with Stretch, it’s all about that specific task in a specific environment.

There are already robotic solutions for truck unloading, palletizing, and depalletizing, but Stretch seems to be uniquely capable. For truck unloading, the highest performance systems that I’m aware of are monstrous things (here’s one example from Honeywell) that use a ton of custom hardware to just sort of ingest the cargo within a trailer all at once. In a highly structured and predictable warehouse, this sort of thing may pay off over the long term, but it’s going to be extremely expensive and not very versatile at all.

Palletizing and depalletizing robots are much more common in warehouses today. They’re almost always large industrial arms surrounded by a network of custom conveyor belts and whatnot, suffering from the same sorts of constraints as a truck unloader— very capable in some situations, but generally high cost and low flexibility.

Photo: Boston Dynamics

Stretch is probably not going to be able to compete with either of these types of dedicated systems when it comes to sheer speed, but it offers lots of other critical advantages: It’s fast and easy to deploy, easy to use, and adaptable to a variety of different tasks without costly infrastructure changes. It’s also very much not Handle, which was Boston Dynamics’ earlier (although not that much earlier) attempt at a box-handling robot for warehouses, and (let’s be honest here) a much more Boston Dynamics-y thing than Stretch seems to be. To learn more about why the answer is Stretch rather than Handle, and how Stretch will fit into the warehouse of the very near future, we spoke with Kevin Blankespoor, Boston Dynamics’ VP of Product Engineering and chief engineer for both Handle and Stretch.

IEEE Spectrum: Tell me about Stretch!

Kevin Blankespoor: Stretch is the first mobile robot that we’ve designed specifically for the warehouse. It’s all about moving boxes. Stretch is a flexible robot that can move throughout the warehouse and do different tasks. During a typical day in the life of Stretch in the future, it might spend the morning on the inbound side of the warehouse unloading boxes from trucks. It might spend the afternoon in the aisles of the warehouse building up pallets to go to retailers and e-commerce facilities, and it might spend the evening on the outbound side of the warehouse loading boxes into the trucks. So, it really goes to where the work is.

There are already other robots out there: truck unloading robots, palletizing and depalletizing robots, and mobile bases with arms on them. What makes Boston Dynamics the right company to introduce a new robot in this space?

We definitely thought through this, because there are already autonomous mobile robots [AMRs] out there. Most of them, though, are more like pallet movers or tote movers—they don't have an arm, and most of them are really just about moving something from point A to point B without manipulation capability. We've seen some experiments where people put arms on AMRs, but nothing that's made it very far in the market. And so when we started looking at Stretch, we realized we really needed to make a custom robot, and that it was something we could do quickly.

“We got a lot of interest from people who wanted to put Atlas to work in the warehouse, but we knew that we could build a simpler robot to do some of those same tasks.”

Stretch is built with pieces from Spot and Atlas and that gave us a big head start. For example, if you look at Stretch’s vision system, it's 2D cameras, depth sensors, and software that allows it to do obstacle detection, box detection, and localization. Those are all the same sensors and software that we've been using for years on our legged robots. And if you look closely at Stretch’s wrist joints, they're actually the same as Spot’s hips. They use the same electric motors, the same gearboxes, the same sensors, and they even have the same closed-loop controller controlling the joints.

If you were to buy an existing industrial robot arm with this kind of performance, it would be about four times heavier than the arm we built, and it's really hard to make that into a mobile robot. A lot of this came from our leg technology because it’s so important for our leg designs to be lightweight for the robots to balance. We took that same strength-to-weight advantage that we have, and built it into this arm. We're able to rapidly piece together things from our other robots to get us out of the gate quickly, so even though this looks like a totally different robot, we think we have a good head start going into this market.

At what point did you decide to go with an arm on a statically stable base on Stretch, rather than something more, you know, dynamic-y?

Stretch looks really different than the robots that Boston Dynamics has done in the past. But you'd be surprised how much similarity there is between our legged robots and Stretch under the hood. Looking back, we actually got our start on moving boxes with Atlas, and at that point it was just research and development. We were really trying to do force control for box grasping. We were picking up heavy boxes and maintaining balance and working on those fundamentals. We released a video of that as our first next-gen Atlas video, and it was interesting. We got a lot of interest from people who wanted to put Atlas to work in the warehouse, but we knew that we could build a simpler robot to do some of those same tasks.

So at this point we actually came up with Handle. The intent of Handle was to do a couple things—one was, we thought we could build a simpler robot that had Atlas’ attributes. Handle has a small footprint so it can fit in tight spaces, but it can pick up heavy boxes. And in addition to that, we had always really wanted to combine wheels and legs. We’d been talking about doing that for a decade and so Handle was a chance for us to try it.

We built a couple versions of Handle, and the first one was really just a prototype to kind of explore the morphology. But the second one was more purpose-built for warehouse tasks, and we started building pallets with that one and it looked pretty good. And then we started doing truck unloading with Handle, which was the pivotal moment. Handle could do it, but it took too long. Every time Handle grasped a box, it would have to roll back and then get to a place where it could spin itself to face forward and place the box, and trucks are very tight for a robot this size, so there's not a lot of room to maneuver. We knew the whole time that there was a robot like Stretch that was another alternative, but that's really when it became clear that Stretch would have a lot of advantages, and we started working on it about a year ago.

Stretch is certainly impressive in a practical way, but I’ll admit to really hoping that something like Handle could have turned out to be a viable warehouse robot.

I love the Handle project as well, and I’m very passionate about that robot. And there was a stage before we built Stretch where we thought, “this would be pretty standard looking compared to Handle, is it going to capture enough of the Boston Dynamics secret sauce?” But when you actually dissect all the problems within Stretch that you have to tackle, there are a lot of cool robotics problems left in there—the vision system, the planning, the manipulation, the grasping of the boxes—it's a lot harder to solve than it looks, and we're excited that we're actually getting fairly far down that road now.

What happens to Handle now?

Stretch has really taken over our team as far as warehouse products go. Handle we still use occasionally as a research robot, but it’s not actively under development. Stretch is really Handle’s descendant. Handle’s not retired, exactly, but we’re just using it for things like the dance video.

There’s still potential to do cool stuff with Handle. I do think that combining wheels with legs is very cool, and largely unexplored compared to its potential. So I still think that you're gonna see versions of robots combining wheels and legs like Handle, and maybe a version of Handle in the future that does more of that. But because we're switching this thread from research into product, Stretch is really the main focus now.

How autonomous is Stretch?

Stretch is semi-autonomous, and that means it really needs to work with people to tap into its full potential. With truck unloading, for example, a person will drive Stretch into the back of the truck and then basically point Stretch in the right direction and say go. And from that point on, everything’s autonomous. Stretch has its vision system and its mobility and it can detect all the boxes, grasp all boxes, and move them onto a conveyor all autonomously. This is something that takes people hours to do manually, and Stretch can go all the way until it gets to the last box, and the truck is empty. There are some parts of the truck unloading task that do require people, like verifying that the truck is in the right place and opening the doors. But this takes a person just a few minutes, and then the robot can spend hours or as long as it takes to do its job autonomously.

There are also other tasks in the warehouse where the autonomy will increase in the future. After truck unloading, the second thing we’ll take on is order building, which will be more in the aisles of a warehouse. For that, Stretch will be navigating around the warehouse, finding the right pallet it needs to take a box from, and loading it onto a new pallet. This will be a different model with more autonomy; you’ll still have people involved to some degree, but the robot will have a higher percentage of the time where it can work independently.

What kinds of constraints is Stretch operating under? Do the boxes all have to be stacked neatly in the back of the truck, do they have to be the same size, the same color, etc?

“This will be a different model with more autonomy. You’ll still have people involved to some degree, but the robot will have a higher percentage of the time where it can work independently.”

If you think about manufacturing, where there's been automation for decades, you can go into a modern manufacturing facility and there are robot arms and conveyors and other machines. But if you look at the actual warehouse space, 90+ percent is manually operated, and that's because of what you just asked about— things that are less structured, where there’s more variety, and it's more challenging for a robot. But this is starting to change. This is really, really early days, and you’re going to be seeing a lot more robots in the warehouse space.

The warehouse robotics industry is going to grow a lot over the next decade, and a lot of that boils down to vision—the ability for robots to navigate and to understand what they’re seeing. Actually seeing boxes in real world scenarios is challenging, especially when there's a lot of variety. We've been testing our machine learning-based box detection system on Pick for a few years now, and it's gotten far enough that we know it’s one of the technical hurdles you need to overcome to succeed in the warehouse.

Can you compare the performance of Stretch to the performance of a human in a box-unloading task?

Stretch can move cases of up to 50 pounds, which is the OSHA limit for how much a single person's allowed to move. The peak case rate for Stretch is 800 cases per hour. You really need to keep up with the flow of goods throughout the warehouse, and 800 cases per hour should be enough for most applications. This is similar to a really good human; most humans are probably slower, and it’s hard for a human to sustain that rate, and one of the big issues with people doing these jobs is injury rates. Imagine moving really heavy boxes all day, and having to reach up high or bend down to get them—injuries are really common in this area. Truck unloading is one of the hardest jobs in a warehouse, and that’s one of the reasons we’re starting there with Stretch.

Is Stretch safe for humans to be around?

We looked at using collaborative robot arms for Stretch, but they don’t have the combination of strength and speed and reach to do this task. That’s partially just due to the laws of physics—if you want to move a 50-pound box really fast, that’s a lot of energy there. So, Stretch does need to maintain separation from humans, but it’s pretty safe when it’s operating in the back of a truck.

In the middle of a warehouse, Stretch will have a couple of different modes. When it's traveling around, it'll be kind of like an AMR, using a safety-rated lidar to make sure that it slows down or stops as people get closer. If it's parked and the arm is moving, it'll do the same thing, monitoring anyone getting close and either slowing down or stopping.

How do you see Stretch interacting with other warehouse robots?

For building pallet orders, we can do that in a couple of different ways, and we’re experimenting with partners in the AMR space. So you might have an AMR that moves the pallet around and then rendezvous with Stretch, and Stretch does the manipulation part and moves boxes onto the pallet, and then the AMR scuttles off to the next rendezvous point where maybe a different Stretch meets it. We’re developing prototypes of that behavior now with a few partners. Another way to do it is Stretch can actually pull the pallet around itself and do both tasks. There are two fundamental things that happen in the warehouse: there's movement of goods, and there's manipulation of goods, and Stretch can do both.

You’re aware that Hello Robot has a mobile manipulator called Stretch, right?

Great minds think alike! We know Aaron [Edsinger] from the Google days; we all used to be in the same company, and he’s a great guy. We’re in very different applications and spaces, though— Aaron’s robot is going into research and maybe a little bit into the consumer space, while this robot is on a much bigger scale aimed at industrial applications, so I think there’s actually a lot of space between our robots, in terms of how they’ll be used.

Editor’s Note: We did check in with Aaron Edsinger at Hello Robot, and he sees things a little bit differently. “We're disappointed they chose our name for their robot,” Edsinger told us. “We're seriously concerned about it and considering our options.” We sincerely hope that Boston Dynamics and Hello Robot can come to an amicable solution on this.

What’s the timeline for commercial deployment of Stretch?

This is a prototype of the Stretch robot, and anytime we design a new robot, we always like to build a prototype as quickly as possible so we can figure out what works and what doesn't work. We did that with our bipeds and quadrupeds as well. So, we get an early look at what we need to iterate, because any time you build the first thing, it's not the right thing, and you always need to make changes to get to the final version. We've got about six of those Stretch prototypes operating now. In parallel, our hardware team is finishing up the design of the productized version of Stretch. That version of Stretch looks a lot like the prototype, but every component has been redesigned from the ground up to be manufacturable, to be reliable, and to be higher performance.

For the productized version of Stretch, we’ll build up the first units this summer, and then it’ll go on sale next year. So this is kind of a sneak peek into what the final product will be.

How much does it cost, and will you be selling Stretch, or offering it as a service?

We’re not quite ready to talk about cost yet, but it’ll be cost effective, and similar in cost to existing systems if you were to combine an industrial robot arm, custom gripper, and mobile base. We’re considering both selling and leasing as a service, but we’re not quite ready to narrow it down yet.

Photo: Boston Dynamics

As with all mobile manipulators, what Stretch can do long-term is constrained far more by software than by hardware. With a fast and powerful arm, a mobile base, a solid perception system, and 16 hours of battery life, you can imagine how different grippers could enable all kinds of different capabilities. But we’re getting ahead of ourselves, because it’s a long, long way from getting a prototype to work pretty well to getting robots into warehouses in a way that’s commercially viable long-term, even when the use case is as clear as it seems to be for Stretch.

Stretch also could signal a significant shift in focus for Boston Dynamics. While Blankespoor’s comments about Stretch leveraging Boston Dynamics’ expertise with robots like Spot and Atlas are well taken, Stretch is arguably the most traditional robot that the company has designed, and they’ve done so specifically to be able to sell robots into industry. This is what you do if you’re a robotics company that wants to make money by selling robots commercially, which (historically) has not been what Boston Dynamics is all about. Despite its bonkers valuation, Boston Dynamics ultimately needs to make money, and robots like Stretch are a good way to do it. With that in mind, I wouldn’t be surprised to see more robots like this from Boston Dynamics—robots that leverage the company’s unique technology, but that are designed to do commercially useful tasks in a somewhat less flashy way. And if this strategy keeps Boston Dynamics around (while funding some occasional creative craziness), then I’m all for it.

#439042 How Scientists Used Ultrasound to Read ...

Thanks to neural implants, mind reading is no longer science fiction.

As I’m writing this sentence, a tiny chip with arrays of electrodes could sit on my brain, listening in on the crackling of my neurons firing as my hands dance across the keyboard. Sophisticated algorithms could then decode these electrical signals in real time. My brain’s inner language to plan and move my fingers could then be used to guide a robotic hand to do the same. Mind-to-machine control, voilà!

Yet as the name implies, even the most advanced neural implant has a problem: it’s an implant. For electrodes to reliably read the brain’s electrical chatter, they need to pierce through its protective membrane and into brain tissue. Danger of infection aside, over time, damage accumulates around the electrodes, distorting their signals or even rendering them unusable.

Now, researchers from Caltech have paved a way to read the brain without any physical contact. Key to their device is a relatively new superstar in neuroscience: functional ultrasound, which uses sound waves to capture activity in the brain.

In monkeys, the technology could reliably predict their eye movement and hand gestures after just a single trial—without the usual lengthy training process needed to decode a movement. If adopted by humans, the new mind-reading tech represents a triple triumph: it requires minimal surgery and minimal learning, but yields maximal resolution for brain decoding. For people who are paralyzed, it could be a paradigm shift in how they control their prosthetics.

“We pushed the limits of ultrasound neuroimaging and were thrilled that it could predict movement,” said study author Dr. Sumner Norman.

To Dr. Krishna Shenoy at Stanford, who was not involved, the study will finally put ultrasound “on the map as a brain-machine interface technique. Adding to this toolkit is spectacular,” he said.

Breaking the Sound Barrier
Using sound to decode brain activity might seem preposterous, but ultrasound has had quite the run in medicine. You’ve probably heard of its most common use: taking photos of a fetus in pregnancy. The technique uses a transducer, which emits ultrasound pulses into the body and finds boundaries in tissue structure by analyzing the sound waves that bounce back.

Roughly a decade ago, neuroscientists realized they could adapt the tech for brain scanning. Rather than directly measuring the brain’s electrical chatter, it looks at a proxy—blood flow. When certain brain regions or circuits are active, the brain requires much more energy, which is provided by increased blood flow. In this way, functional ultrasound works similarly to functional MRI, but at a far higher resolution—roughly ten times, the authors said. Plus, people don’t have to lie very still in an expensive, claustrophobic magnet.

“A key question in this work was: If we have a technique like functional ultrasound that gives us high-resolution images of the brain’s blood flow dynamics in space and over time, is there enough information from that imaging to decode something useful about behavior?” said study author Dr. Mikhail Shapiro.

There are plenty of reasons for doubt. As the new kid on the block, functional ultrasound has some known drawbacks. A major one: it gives a far less direct signal than electrodes. Previous studies show that, with multiple measurements, it can provide a rough picture of brain activity. But is that enough detail to guide a robotic prosthesis?

One-Trial Wonder
The new study put functional ultrasound to the ultimate test: could it reliably detect movement intention in monkeys? Because their brains are the most similar to ours, rhesus macaque monkeys are often the critical step before a brain-machine interface technology is adapted for humans.

The team first inserted small ultrasound transducers into the skulls of two rhesus monkeys. While it sounds intense, the surgery doesn’t penetrate the brain or its protective membrane; it’s only on the skull. Compared to electrodes, this means the brain itself isn’t physically harmed.

The device is linked to a computer, which controls the direction of sound waves and captures signals from the brain. For this study, the team aimed the pulses at the posterior parietal cortex, a part of the “motor” aspect of the brain, which plans movement. If right now you’re thinking about scrolling down this page, that’s the brain region already activated, before your fingers actually perform the movement.

Then came the tests. The first looked at eye movements—something pretty necessary before planning actual body movements without tripping all over the place. Here, the monkeys learned to focus on a central dot on a computer screen. A second dot, either left or right, then flashed. The monkeys’ task was to flicker their eyes to the most recent dot. It’s something that seems easy for us, but requires sophisticated brain computation.

The second task was more straightforward. Rather than just moving their eyes to the second target dot, the monkeys learned to grab and manipulate a joystick to move a cursor to that target.

Using brain imaging to decode the mind and control movement. Image Credit: S. Norman, Caltech
As the monkeys learned, so did the device. Ultrasound data capturing brain activity was fed into a sophisticated machine learning algorithm to guess the monkeys’ intentions. Here’s the kicker: once trained, using data from just a single trial, the algorithm was able to correctly predict the monkeys’ actual eye movement—whether left or right—with roughly 78 percent accuracy. The accuracy for correctly maneuvering the joystick was even higher, at nearly 90 percent.

That’s crazy accurate, and very much needed for a mind-controlled prosthetic. If you’re using a mind-controlled cursor or limb, the last thing you’d want is to have to imagine the movement multiple times before you actually click the web button, grab the door handle, or move your robotic leg.
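For readers who want a concrete picture of what “feeding ultrasound data into a machine learning algorithm” can look like, here is a minimal sketch of a single-trial left-versus-right decoder. The data shapes, preprocessing steps, and scikit-learn pipeline are illustrative assumptions rather than the Caltech team’s actual method, and the random data used here should score near chance rather than the accuracies reported in the study.

```python
# Minimal sketch of a single-trial movement decoder: flatten each functional-
# ultrasound "frame" (a map of blood-flow changes) into a vector and train a
# simple linear classifier to predict movement direction. All shapes, names,
# and preprocessing choices below are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical dataset: 200 trials, each a 64x64 image of blood-flow change
# over the posterior parietal cortex, labeled 0 (left) or 1 (right).
n_trials, height, width = 200, 64, 64
fus_frames = rng.normal(size=(n_trials, height, width))
labels = rng.integers(0, 2, size=n_trials)

X = fus_frames.reshape(n_trials, -1)           # one feature vector per trial

decoder = make_pipeline(
    StandardScaler(),                          # normalize each voxel across trials
    PCA(n_components=20),                      # compress ~4,000 voxels to 20 components
    LinearDiscriminantAnalysis(),              # linear left-vs-right classifier
)

# Chance level is ~50 percent; on real recordings the study reports roughly
# 78 percent (eye) and 90 percent (hand) single-trial accuracy.
scores = cross_val_score(decoder, X, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```

The point of the sketch is the structure: each trial’s image of blood-flow change becomes one feature vector, and the decoder’s only job is to separate the two intended movement directions from a single trial.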

Even more impressive is the resolution. Sound waves seem omnipresent, but with functional ultrasound, it’s possible to measure brain activity at a resolution of 100 microns—roughly 10 neurons in the brain.

A Cyborg Future?
Before you start worrying about scientists blasting your brain with sound waves to hack your mind, don’t worry. The new tech still requires skull surgery, meaning that a small chunk of skull needs to be removed. However, the brain itself is spared. This means that compared to electrodes, ultrasound could cause less damage and potentially allow far longer-lasting mind reading than anything currently possible.

There are downsides. Functional ultrasound is far younger than any electrode-based neural implant, and can’t yet reliably decode 360-degree movement or fine finger movements. For now, the tech requires a wire to link the device to a computer, which is off-putting to many people and will prevent widespread adoption. Add to that the inherent downside of functional ultrasound: because it measures blood flow rather than electrical activity, it lags behind electrical recordings by roughly two seconds.

All that aside, however, the tech is just tiptoeing into a future where minds and machines seamlessly connect. Ultrasound can penetrate the skull, though not yet at the resolution needed for imaging and decoding brain activity. The team is already working with human volunteers with traumatic brain injuries, who had to have a piece of their skulls removed, to see how well ultrasound works for reading their minds.

“What’s most exciting is that functional ultrasound is a young technique with huge potential. This is just our first step in bringing high performance, less invasive brain-machine interface to more people,” said Norman.

Image Credit: Free-Photos / Pixabay

#439006 Low-Cost Drones Learn Precise Control ...

I’ll admit to having been somewhat skeptical about the strategy of dangling payloads on long tethers for drone delivery. I mean, I get why Wing does it— it keeps the drone and all of its spinny bits well away from untrained users while preserving the capability of making deliveries to very specific areas that may have nearby obstacles. But it also seems like you’re adding some risk as well, because once your payload is out on that long tether, it’s more or less out of your control in at least two axes. And you can forget about your drone doing anything while this is going on, because who the heck knows what’s going to happen to your payload if the drone starts moving around?

NYU roboticists, that’s who.

This research is by Guanrui Li, Alex Tunchez, and Giuseppe Loianno at the Agile Robotics and Perception Lab (ARPL) at NYU. As you can see from the video, the drone makes keeping rock-solid control over that suspended payload look easy, but it’s very much not, especially considering that everything you see is running onboard the drone itself at 500 Hz; all it takes is an IMU and a downward-facing monocular camera, along with the drone’s Snapdragon processor.

To get this to work, the drone has to be thinking about two things. First, there’s state estimation: working out where the drone itself is and what its payload is doing at the end of the tether. The drone figures this out by watching how the payload moves using its camera and tracking its own movement with its IMU. Second, there’s predicting what the payload is going to do next, and how that jibes (or not) with what the drone wants to do next. The researchers developed a model predictive control (MPC) system for this, with some added perception constraints to make sure that the behavior of the drone keeps the payload in view of the camera.
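To make the idea of “MPC with perception constraints” concrete, here is a toy, planar sketch: a drone must reach a goal position while keeping its cable-suspended payload swinging gently enough to stay inside an assumed camera field of view. The pendulum model, cost weights, field-of-view limit, and use of SciPy’s optimizer are illustrative assumptions; this is not the NYU group’s actual PCMPC formulation, which runs onboard real hardware at 500 Hz.

```python
# Toy planar illustration of perception-constrained MPC for a drone carrying a
# cable-suspended payload. All model parameters and weights are assumptions.
import numpy as np
from scipy.optimize import minimize

g, L = 9.81, 1.0          # gravity [m/s^2], cable length [m]
dt, N = 0.05, 30          # MPC time step [s] and horizon length
fov_half_angle = 0.35     # assumed camera half field of view [rad]
x_goal = 2.0              # desired drone position [m]

def rollout(state, accels):
    """Euler-integrate drone position/velocity and payload swing angle/rate."""
    x, v, phi, phidot = state
    traj = []
    for a in accels:
        phiddot = -(g / L) * np.sin(phi) - (a / L) * np.cos(phi)
        x, v = x + v * dt, v + a * dt
        phi, phidot = phi + phidot * dt, phidot + phiddot * dt
        traj.append((x, v, phi, phidot, a))
    return traj

def cost(accels, state):
    total = 0.0
    for x, v, phi, phidot, a in rollout(state, accels):
        total += (x - x_goal) ** 2 + 0.1 * v ** 2 + 0.01 * a ** 2
        # "Perception constraint": heavily penalize swing angles that would
        # push the payload outside the camera's field of view.
        total += 100.0 * max(0.0, abs(phi) - fov_half_angle) ** 2
    return total

state = np.array([0.0, 0.0, 0.0, 0.0])   # at rest, payload hanging straight down
plan = np.zeros(N)
for step in range(40):                    # receding-horizon loop
    res = minimize(cost, plan, args=(state,), method="SLSQP",
                   bounds=[(-4.0, 4.0)] * N)
    plan = res.x
    state = np.array(rollout(state, plan[:1])[0][:4])  # apply first action only
    plan = np.append(plan[1:], 0.0)                    # warm-start next solve
    print(f"t={step*dt:4.2f}s  x={state[0]:5.2f} m  swing={np.degrees(state[2]):6.2f} deg")
```

The structure is the important part: at every control step the optimizer plans a short sequence of accelerations, the perception term penalizes any plan that would swing the payload out of the (assumed) camera view, and only the first planned action is executed before the whole problem is solved again.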

At the moment, the top speed of the system is 4 m/s, but it sounds like rather than increasing the speed of a single payload-swinging drone, the next steps will be to make the overall system more complicated by somehow using multiple drones to cooperatively manage tethered payloads that are too big or heavy for one drone to handle alone.

For more on this, we spoke with Giuseppe Loianno, head of the ARPL.

IEEE Spectrum: We've seen some examples of delivery drones delivering suspended loads. How will this work improve their capabilities?

Giuseppe Loianno: For the first time, we jointly design perception-constrained model predictive control and state estimation approaches to enable the autonomy of a quadrotor with a cable-suspended payload using onboard sensing and computation. The proposed control method guarantees the visibility of the payload in the robot’s camera as well as respect for the system dynamics and actuator constraints. These are critical design aspects to guarantee safety and resilience for such a complex and delicate task involving transportation of objects.

The additional challenge involves the fact that we aim to solve the aforementioned problem using a minimal sensor suite for autonomous navigation made up of a single camera and IMU. This is an ambitious goal since it concurrently involves estimating the load and the vehicle states. Previous approaches leverage GPS or motion capture systems for state estimation and do not consider the perception and physical constraints when solving the problem. We are confident that our solution will contribute to making autonomous delivery a reality in warehouses or in dense urban areas where the GPS signal is currently absent or shadowed.

Will it make a difference to delivery systems that use an actuated cable and only leave the load suspended for the delivery itself?

This is certainly an interesting question. We believe that adding an actuated cable will introduce more disadvantages than benefits. Certainly, an actuated cable can be leveraged to compensate for the cable’s swinging motions in windy conditions and/or increase the delivery precision. However, the introduction of additional actuated mechanisms and components comes at the price of increased system mass and inertia. This will reduce the overall flight time and the vehicle’s agility as well as the system’s resilience with respect to the transportation task. Finally, active mechanisms are also more difficult to design compared to passive ones.

What's challenging about doing all of this on-vehicle?

There are several challenges in solving this problem onboard. First, it is very difficult to concurrently run perception and action in real time on such computationally constrained platforms. Second, the first aspect becomes even more challenging if we consider, as in our case, a perception-constrained receding horizon control problem that aims to guarantee the visibility of the payload during the motion, while concurrently respecting all the system’s physical and sensing limitations. Finally, it has been challenging to run the entire system at a high rate to fully unleash the system’s agility. We are currently able to reach rates of 500 Hz.

Can your method adapt to loads of varying shapes, sizes, and masses? What about aerodynamics or flying in wind?

Technically, our approach can easily be adapted to varying object sizes and masses. Our previous contributions have already shown the ability to estimate online changes in the vehicle/load configuration and can potentially be used to operate the proposed system in dynamic conditions, where the load’s characteristics are unknown and/or may vary across consecutive flights. This can be useful for both package delivery and warehouse operations, where different types of objects need to be transported or manipulated.

The aerodynamics problem is a great point. Overall, our past work has investigated the aerodynamics of wind disturbances for a single robot without a load. Formulating these problems for the proposed system is challenging and is still an open research question. We have some ideas to approach this problem combining Bayesian estimation techniques with more recent machine learning approaches and we will tackle it in the near future.

What are the limitations on the performance of the system? How fast and agile can it be with a suspended payload?

The limits of performance are set by the actuation and sensing systems. Our approach intrinsically considers both the physical and sensing limitations of our system. From a sensing and computation perspective, we believe we are close to the limits, with speeds of up to 4 m/s. Faster speeds can potentially introduce motion blur while decreasing the load tracking precision. Moreover, faster motions will also increase the aerodynamic disturbances we have just mentioned. In the future, modeling these phenomena and incorporating them into the proposed solution can further push the agility.

Your paper talks about extending this approach to multiple vehicles cooperatively transporting a payload, can you tell us more about that?

We are currently working on a distributed perception and control approach for cooperative transportation. We already have some very exciting results that we will share with you very soon! Overall, we can employ a team of aerial robots to cooperatively transport a payload to increase the payload capacity and endow the system with additional resilience in case of vehicle failures. A cooperative cable-suspended transportation system also allows the load’s position and orientation to be controlled concurrently and independently. This is not possible just using rigid connections. We believe that our approach will have a strong impact in real-world settings for delivery and construction in warehouses and GPS-denied environments such as dense urban areas. Moreover, in post-disaster scenarios, a team of physically interconnected aerial robots can deliver supplies and establish communication in areas where the GPS signal is intermittent or unavailable.

PCMPC: Perception-Constrained Model Predictive Control for Quadrotors with Suspended Loads using a Single Camera and IMU, by Guanrui Li, Alex Tunchez, and Giuseppe Loianno from NYU, will be presented (virtually) at ICRA 2021.

#438731 Video Friday: Perseverance Lands on Mars

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
Let us know if you have suggestions for next week, and enjoy today's videos.

Hmm, did anything interesting happen in robotics yesterday?

Obviously, we're going to have tons more on the Mars Rover and Mars Helicopter over the next days, weeks, months, years, and (if JPL's track record has anything to say about it) decades. Meantime, here's what's going to happen over the next day or two:

[ Mars 2020 ]

PLEN hopes you had a happy Valentine's Day!

[ PLEN ]

Unitree dressed up a whole bunch of Laikago quadrupeds to take part in the 2021 Spring Festival Gala in China.

[ Unitree ]

Thanks Xingxing!

Marine iguanas compete for the best nesting sites on the Galapagos Islands. Meanwhile RoboSpy Iguana gets involved in a snot sneezing competition after the marine iguanas return from the sea.

[ Spy in the Wild ]

Tails, it turns out, are useful for almost everything.

[ DART Lab ]

Partnered with MD-TEC, this video demonstrates the use of teleoperated robotic arms and a virtual reality interface to perform closed suction for self-ventilating tracheostomy patients during the COVID-19 outbreak. Use of closed suction is recommended to minimise aerosol generated during this procedure. This robotic method avoids staff exposure to the virus to further protect the NHS.

[ Extend Robotics ]

Fotokite is a safe, practical way to do local surveillance with a drone.

I just wish they still had a consumer version 🙁

[ Fotokite ]

How to confuse fish.

[ Harvard ]

Army researchers recently expanded their research area for robotics to a site just north of Baltimore. Earlier this year, Army researchers performed the first fully-autonomous tests onsite using an unmanned ground vehicle test bed platform, which serves as the standard baseline configuration for multiple programmatic efforts within the laboratory. As a means to transition from simulation-based testing, the primary purpose of this test event was to capture relevant data in a live, operationally-relevant environment.

[ Army ]

Flexiv's new RIZON 10 robot hopes you had a happy Valentine's Day!

[ Flexiv ]

Thanks Yunfan!

An inchworm-inspired crawling robot (iCrawl) is a 5 DOF robot with two legs; each with an electromagnetic foot to crawl on the metal pipe surfaces. The robot uses a passive foot-cap underneath an electromagnetic foot, enabling it to be a versatile pipe-crawler. The robot has the ability to crawl on the metal pipes of various curvatures in horizontal and vertical directions. The robot can be used as a new robotic solution to assist close inspection outside the pipelines, thus minimizing downtime in the oil and gas industry.

[ Paper ]

Thanks Poramate!

A short film about Robot Wars from Blender Magazine in 1995.

[ YouTube ]

While modern cameras provide machines with a very well-developed sense of vision, robots still lack such a comprehensive solution for their sense of touch. The talk will present examples of why the sense of touch can prove crucial for a wide range of robotic applications, and a tech demo will introduce a novel sensing technology targeting the next generation of soft robotic skins. The prototype of the tactile sensor developed at ETH Zurich exploits the advances in camera technology to reconstruct the forces applied to a soft membrane. This technology has the potential to revolutionize robotic manipulation, human-robot interaction, and prosthetics.

[ ETHZ ]

Thanks Markus!

Quadrupedal robotics has reached a level of performance and maturity that enables some of the most advanced real-world applications with autonomous mobile robots. Driven by excellent research in academia and industry all around the world, a growing number of platforms with different skills target different applications and markets. We have invited a selection of experts with long-standing experience in this vibrant research area.

[ IFRR ]

Thanks Fan!

Since January 2020, more than 300 different robots in over 40 countries have been used to cope with some aspect of the impact of the coronavirus pandemic on society. The majority of these robots have been used to support clinical care and public safety, allowing responders to work safely and to handle the surge in infections. This panel will discuss how robots have been successfully used and what is needed, both in terms of fundamental research and policy, for robotics to be prepared for future emergencies.

[ IFRR ]

At Skydio, we ship autonomous robots that are flown at scale in complex, unknown environments every day. We’ve invested six years of R&D into handling extreme visual scenarios not typically considered by academia nor encountered by cars, ground robots, or AR applications. Drones are commonly in scenes with few or no semantic priors on the environment and must deftly navigate thin objects, extreme lighting, camera artifacts, motion blur, textureless surfaces, vibrations, dirt, smudges, and fog. These challenges are daunting for classical vision, because photometric signals are simply inconsistent. And yet, there is no ground truth for direct supervision of deep networks. We’ll take a detailed look at these issues and how we’ve tackled them to push the state of the art in visual inertial navigation, obstacle avoidance, rapid trajectory planning. We will also cover the new capabilities on top of our core navigation engine to autonomously map complex scenes and capture all surfaces, by performing real-time 3D reconstruction across multiple flights.

[ UPenn ]

#438553 New Drone Software Handles Motor ...

Good as some drones are becoming at obstacle avoidance, accidents do still happen. And as far as robots go, drones are very much on the fragile side of things. Any sort of significant contact between a drone and almost anything else usually results in a catastrophic, out-of-control spin followed by a death plunge to the ground. Bad times. Bad, expensive times.

A few years ago, we saw some interesting research into software that can keep the most common drone form factor, the quadrotor, aloft and controllable even after the failure of one motor. The big caveat to that software was that it relied on GPS for state estimation, meaning that without a GPS signal, the drone is unable to get the information it needs to keep itself under control. In a paper recently accepted to RA-L, researchers at the University of Zurich report that they have developed a vision-based system that brings state estimation completely on-board. The upshot: potentially any drone with some software and a camera can keep itself safe even under the most challenging conditions.

A few years ago, we wrote about first author Sihao Sun’s work on high speed controlled flight of a quadrotor with a non-functional motor. But that innovation relied on an external motion capture system. Since then, Sun has moved from TU Delft to Davide Scaramuzza’s lab at UZH, and it looks like he’s been able to combine his work on controlled spinning flight with the Robotics and Perception Group’s expertise in vision. Now, a downward-facing camera is all it takes for a spinning drone to remain stable and controllable:

Remember, this software isn’t just about guarding against motor failure. Drone motors themselves don’t just up and fail all that often, either with respect to their software or hardware. But they do represent the most likely point of failure for any drone, usually because when you run into something, what ultimately causes your drone to crash is damage to a motor or a propeller that causes loss of control.

The reason that earlier solutions relied on GPS is that the spinning drone needs a method of state estimation—that is, in order to be closed-loop controllable, the drone needs to have a reasonable understanding of what its position is and how that position is changing over time. GPS is an easy way to take care of this, but GPS is also an external system that doesn’t work everywhere. Having a state estimation system that’s completely internal to the drone itself is much more fail-safe, and Sun got his onboard system to work through visual feature tracking with a downward-facing camera, even as the drone is spinning at over 20 rad/s.
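As a rough illustration of what visual feature tracking can recover from a downward-facing camera, the sketch below detects corners in one synthetic “ground” frame, tracks them into a rotated and shifted second frame with pyramidal Lucas-Kanade optical flow, and fits a rotation-plus-translation model to the tracked points. The synthetic images, parameter choices, and OpenCV calls are illustrative assumptions; the actual system fuses camera and IMU measurements in a full onboard state estimator, which is considerably more involved.

```python
# Illustrative sketch: recover camera rotation and translation between two
# downward-facing frames using corner tracking. All parameters are assumptions.
import cv2
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "ground texture" frame, plus a second frame taken after the drone
# has yawed a few degrees and drifted a few pixels.
frame1 = cv2.GaussianBlur(rng.integers(0, 255, size=(480, 640), dtype=np.uint8), (7, 7), 0)
true_angle = 3.0                                 # yaw between frames [deg]
M_true = cv2.getRotationMatrix2D((320, 240), true_angle, 1.0)
M_true[:, 2] += (4.0, -2.0)                      # extra translation [px]
frame2 = cv2.warpAffine(frame1, M_true, (640, 480))

# Detect corners in the first frame and track them into the second frame with
# pyramidal Lucas-Kanade optical flow.
pts1 = cv2.goodFeaturesToTrack(frame1, maxCorners=300, qualityLevel=0.01, minDistance=10)
pts2, status, _ = cv2.calcOpticalFlowPyrLK(frame1, frame2, pts1, None,
                                           winSize=(31, 31), maxLevel=4)
good1, good2 = pts1[status.ravel() == 1], pts2[status.ravel() == 1]

# Fit a rotation + translation (+ uniform scale) model to the tracked points;
# RANSAC inside estimateAffinePartial2D rejects bad tracks.
A, inliers = cv2.estimateAffinePartial2D(good1, good2)
est_angle = np.degrees(np.arctan2(A[0, 1], A[0, 0]))  # same convention as getRotationMatrix2D
print(f"true rotation: {true_angle:.2f} deg, estimated: {est_angle:.2f} deg")
print(f"max error in recovered transform: {np.abs(A - M_true).max():.3f}")
```

On a real spinning drone, this same kind of frame-to-frame geometry, fused with the IMU at a much higher rate, is what stands in for the missing GPS.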

While the system works well enough with a regular downward-facing camera—something that many consumer drones are equipped with for stabilization purposes—replacing it with an event camera (you remember event cameras, right?) makes the performance even better, especially in low light.

For more details on this, including what you’re supposed to do with a rapidly spinning partially disabled quadrotor (as well as what it’ll take to make this a standard feature on consumer hardware), we spoke with Sihao Sun via email.

IEEE Spectrum: What usually happens when a drone spinning this fast lands? Is there any way to do it safely?

Sihao Sun: Our experience shows that we can safely land the drone while it is spinning. When the range sensor measurements are lower than a threshold (around 10 cm, indicating that the drone is close to the ground), we switch off the rotors. During the landing procedure, despite the fast spinning motion, the thrust direction oscillates around the gravity vector, thus the drone touches the ground with its legs without damaging other components.

Can your system handle more than one motor failure?

Yes, the system can also handle the failure of two opposing rotors. However, if two adjacent rotors or more than two rotors fail, our method cannot save the quadrotor. Some research has shown that it is possible to control a quadrotor with only one remaining rotor. But the drone requires a very special inertial property, which is hard to satisfy in real applications.

How different is your system's performance from a similar system that relies on GPS, in a favorable environment?

In a favorable environment, our system outperforms those relying on GPS signals because it obtains better position estimates. Since a damaged quadrotor spins fast, the accelerometer readings are largely affected by centrifugal forces. When the GPS signal is lost or degraded, a drone relying on GPS needs to integrate these biased accelerometer measurements for position estimation, leading to large position estimation errors. Feeding these erroneous estimates to the flight controller can easily crash the drone.
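To put rough numbers on that effect (with an assumed geometry, since the exact sensor placement isn’t given here): an accelerometer mounted just 5 centimeters from the spin axis of a drone rotating at 20 rad/s feels a constant centripetal acceleration of about 20 m/s², or roughly 2 g, on top of gravity. If even a 1 m/s² residue of that bias is integrated twice as though it were linear motion, the position estimate drifts by two meters after only two seconds, which is more than enough to crash a drone flying without GPS corrections.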

When you say that your solution requires “only onboard sensors and computation,” are those requirements specialized, or would they be generally compatible with the current generation of recreational and commercial quadrotors?

We use an NVIDIA Jetson TX2 to run our solution, which includes two parts: the control algorithm and the vision-based state estimation algorithm. The control algorithm is lightweight; thus, we believe that it is compatible with the current generation of quadrotors. On the other hand, the vision-based state estimation requires relatively more computational resources, which may not be affordable for cheap recreational platforms. But this is not an issue for commercial quadrotors because many of them have more powerful processors than a TX2.

What else can event cameras be used for, in recreational or commercial applications?

Many drone applications can benefit from event cameras, especially those in high-speed or low-light conditions, such as autonomous drone racing, cave exploration, drone delivery during night time, etc. Event cameras also consume very little power, which is a significant advantage for energy-critical missions, such as planetary aerial vehicles for Mars explorations. Regarding space applications, we are currently collaborating with JPL to explore the use of event cameras to address the key limitations of standard cameras for the next Mars helicopter.

[ UZH RPG ]
