#439095 DARPA Prepares for the Subterranean ...

The DARPA Subterranean Challenge Final Event is scheduled to take place at the Louisville Mega Cavern in Louisville, Kentucky, from September 21 to 23. We’ve followed SubT teams as they’ve explored their way through abandoned mines, unfinished nuclear reactors, and a variety of caves, and now everything comes together in one final course where the winner of the Systems Track will take home the $2 million first prize.

It’s a fitting reward for teams that have been solving some of the hardest problems in robotics, but winning isn’t going to be easy, and we’ll talk with SubT Program Manager Tim Chung about what we have to look forward to.

Since we haven’t talked about SubT in a little while (what with the unfortunate covid-related cancellation of the Systems Track Cave Circuit), here’s a quick refresher of where we are: the teams have made it through the Tunnel Circuit, the Urban Circuit, and a virtual version of the Cave Circuit, and some of them have been testing in caves of their own. The Final Event will include all of these environments, and the teams of robots will have 60 minutes to autonomously map the course, locating artifacts to score points. Since I’m not sure where on Earth there’s an underground location that combines tunnels and caves with urban structures, DARPA is going to have to get creative, and the location in which they’ve chosen to do that is Louisville, Kentucky.

The Louisville Mega Cavern is a former limestone mine, most of which is under the Louisville Zoo. It’s not all that deep, mostly less than 30 meters under the surface, but it’s enormous: with 370,000 square meters of rooms and passages, the cavern currently hosts (among other things) a business park, a zipline course, and mountain bike trails, because why not. While DARPA is keeping pretty quiet on the details, I’m guessing that they’ll be taking over a chunk of the cavern and filling it with features representing as many of the environmental challenges as they can.

To learn more about how the SubT Final Event is going to go, we spoke with SubT Program Manager Tim Chung. But first, we talked about Tim’s perspective on the success of the Urban Circuit, and how teams have been managing without an in-person Cave Circuit.

IEEE Spectrum: How did the SubT Urban Circuit go?

Tim Chung: On a couple fronts, Urban Circuit was really exciting. We were in this unfinished nuclear power plant—I’d be surprised if any of the competitors had prior experience in such a facility, or anything like it. I think that was illuminating both from an experiential point of view for the competitors, but also from a technology point of view, too.

One thing that I thought was really interesting was that we, DARPA, didn't need to make the venue more challenging. The real world is really that hard. There are places that were just really heinous for these robots to have to navigate through in order to look in every nook and cranny for artifacts. There were corners and doorways and small corridors and all these kind of things that really forced the teams to have to work hard, and the feedback was, why did DARPA have to make it so hard? But we didn’t, and in fact there were places that for the safety of the robots and personnel, we had to ensure the robots couldn’t go.

It sounds like some teams thought this course was on the more difficult side—do you think you tuned it to just the right amount of DARPA-hard?

Our calibration worked quite well. We were able to tease out and help refine and better understand what technologies are both useful and critical and also those technologies that might not necessarily get you the leap ahead capability. So as an example, the Urban Circuit really emphasized verticality, where you have to be able to sense, understand, and maneuver in three dimensions. Being able to capitalize on their robot technologies to address that verticality really stratified the teams, and showed how critical those capabilities are.

We saw teams that brought a lot of those capabilities do very well, and teams that brought baseline capabilities do what they could on the single floor that they were able to operate on. And so I think we got the Goldilocks solution for Urban Circuit that combined both difficulty and ambition.

Photos: Evan Ackerman/IEEE Spectrum

Two SubT Teams embedded networking equipment in balls that they could throw onto the course.

One of the things that I found interesting was that two teams independently came up with throwable network nodes. What was DARPA’s reaction to this? Is any solution a good solution, or was it more like the teams were trying to game the system?

You mean, do we want teams to game the rules in any way so as to get a competitive advantage? I don't think that's what the teams were doing. I think they were operating not only within the bounds of the rules, which permitted such a thing as throwable sensors where you could stand at the line and see how far you could chuck these things—not only was that acceptable by the rules, but anticipated. Behind the scenes, we tried to do exactly what these teams are doing and think through different approaches, so we explicitly didn't forbid such things in our rules because we thought it's important to have as wide an aperture as possible.

With these comms nodes specifically, I think they’re pretty clever. They were in some cases hacked together with a variety of different sports paraphernalia to see what would provide the best cushioning. You know, a lot of that happens in the field, and what it captured was that sometimes you just need to be up at two in the morning and thinking about things in a slightly different way, and that's when some nuggets of innovation can arise, and we see this all the time with operators in the field as well. They might only have duct tape or Styrofoam or whatever the case may be and that's when they come up with different ways to solve these problems. I think from DARPA’s perspective, and certainly from my perspective, wherever innovation can strike, we want to try to encourage and inspire those opportunities. I thought it was great, and it’s all part of the challenge.

Is there anything you can tell us about what your original plan had been for the Cave Circuit?

I can say that we’ve had the opportunity to go through a number of these caves scattered all throughout the country, and engage with caving communities—cavers' clubs, speleologists that conduct research, and then of course the cave rescue community. The single biggest takeaway is that every cave (and there are tens of thousands of them in the US alone) has its own personality, and a lot of that personality is quite hidden from humans, because we can’t explore or access all of the cave. This led us to a number of different caves that were intriguing from a DARPA perspective but also inspirational for our Cave Circuit Virtual Competition.

How do you feel like the tuning was for the Virtual Cave Circuit?

The Virtual Competition, as you well know, was exciting in the sense that we could basically combine eight worlds into one competition, whereas the Systems Track competition really didn’t give us that opportunity. Even if we had been able to hold the Cave Circuit Systems Competition in person, it would have been at one site, and it would have been challenging to represent the level of diversity that we could with the Virtual Competition. So I think from that perspective, it’s clearly an advantage in terms of calibration—diversity gets you the ability to aggregate results to capture those that excel across all worlds as well as those that do well in one world or some worlds and not the others. I think the calibration was great in the sense that we were able to see the gamut of performance. Those that did well, did quite well, and those that have room to grow showed where those opportunities are for them as well.

We had to find ways to capture that diversity and that representativeness, and I think one of the fun ways we did that was with the different cave world tiles that we were able to combine in a variety of different ways. We also made use of a real world data set that we were able to take from a laser scan. Across the board, we had a really great chance to illustrate why virtual testing and simulation still plays such a dominant role in robotics technology development, and why I think it will continue to play an increasing role for developing these types of autonomy solutions.
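
As a rough illustration of that tile idea (this is not DARPA's actual world generator; the tile names and the simple chained assembly below are invented), a procedural cave course can be sketched as a random combination of reusable sections:

```python
# Toy sketch of assembling a virtual cave world from reusable "tiles."
# Tile names and the linear-chain assembly are illustrative only.
import random

TILES = ["straight_passage", "tight_squeeze", "vertical_shaft",
         "large_chamber", "laser_scanned_section"]

def build_cave_world(num_tiles=8, seed=None):
    rng = random.Random(seed)           # seeded so a course can be reproduced
    return [rng.choice(TILES) for _ in range(num_tiles)]

print(build_cave_world(seed=42))        # one possible eight-tile course
```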

Photo: Team CSIRO Data61

Considering the diversity of caves out there, how can Systems Track teams learn from testing in whatever cave is local to them and effectively apply that to whatever cave environment ends up being part of the final?

I think that hits the nail on the head for what we as technologists are trying to discover—what are the transferable generalizable insights and how does that inform our technology development? As roboticists we want to optimize our systems to perform well at the tasks that they were designed to do, and oftentimes that means specialization because we get increased performance at the expense of being a generalist robot. I think in the case of SubT, we want to have our cake and eat it too—we want robots that perform well and reliably, but we want them to do so not just in one environment, which is how we tend to think about robot performance, but we want them to operate well in many environments, many of which have yet to be faced.

And I think that's kind of the nuance here, that we want robot systems to be generalists for the sake of being able to handle the unknown, namely the real world, but still achieve a high level of performance and perhaps they do that to their combined use of different technologies or advances in autonomy or perception approaches or novel mechanisms or mobility, but somehow they're still able, at least in aggregate, to achieve high performance.

We know these teams eagerly await any type of clue that DARPA can provide about the SubT environments. With each environment preview for Tunnel, Urban, and even Cave, the teams were pivoting and thinking a little bit differently. The takeaway, however, was that they didn't go to a clean-sheet design—their systems were flexible enough that they could incorporate some of those specialist trends while still maintaining the notion of a generalist framework.

Looking ahead to the SubT Final, what can you tell us about the Louisville Mega Cavern?

As always, I’ll keep you in suspense until we get you there, but I can say that from the beginning of the SubT Challenge we had always envisioned teams of robots that are able to address not only the uncertainty of what's right in front of them, but also the uncertainty of what comes next. So I think the teams will be advantaged by thinking through subdomain awareness, or domain awareness if you want to generalize it, whether that means tuning multi-purpose robots, or deploying different robots, or employing your team of robots differently. Knowing which subdomain you are in is likely to be helpful, because then you can take advantage of those unique lessons learned through all those previous experiences and capitalize on them.

As far as specifics, I think the Mega Cavern offers many of the features important to what it means to be underground, while giving DARPA a pretty blank canvas to realize our vision of the SubT Challenge.

The SubT Final will be different from the earlier circuits in that there’s just one 60-minute run, rather than two. This is going to make things a lot more stressful for teams who have experienced bad robot days—why do it this way?

The preliminary round has two 30-minute runs, and those two runs are very similar to how we did it during the circuits: a single run per configuration per course. Teams will have the opportunity to show that their systems can face the obstacles in the final course, and it's the sum of those scores, much like during the circuits, that helps mitigate some of the concerns you mentioned about one robot somehow ruining a team's chances at a prize.

The prize round does give DARPA as well as the community a chance to focus on the top six teams from the preliminary round, and allows us to understand how they came to be at the top of the pack while emphasizing their technological contributions. The prize round will be one and done, but all of these teams we anticipate will be putting their best robot forward and will show the world why they deserve to win the SubT Challenge.

We’ve always thought that when called upon these robots need to operate in really challenging environments, and in the context of real world operations, there is no second chance. I don't think it's actually that much of a departure from our interests and insistence on bringing reliable technologies to the field, and those teams that might have something break here and there, that's all part of the challenge, of being resilient. Many teams struggled with robots that were debilitated on the course, and they still found ways to succeed and overcome that in the field, so maybe the rules emphasize that desire for showing up and working on game day which is consistent, I think, with how we've always envisioned it. This isn’t to say that these systems have to work perfectly, they just have to work in a way such that the team is resilient enough to tackle anything that they face.

It’s not too late for teams to enter for both the Virtual Track and the Systems Track to compete in the SubT Final, right?

Yes, that's absolutely right. Qualifications are still open, we are eager to welcome new teams to join in along with our existing competitors. I think any dark horse competitors coming into the Finals may be able to bring something that we haven't seen before, and that would be really exciting. I think it'll really make for an incredibly vibrant and illuminating final event.

The final event qualification deadline for the Systems Competition is April 21, and the qualification deadline for the Virtual Competition is June 29. More details here.

#439089 Ingenuity’s Chief Pilot Explains How ...

On April 11, the Mars helicopter Ingenuity will take to the skies of Mars for the first time. It will do so fully autonomously, out of necessity—the time delay between Ingenuity’s pilots at the Jet Propulsion Laboratory and Jezero Crater on Mars makes manual or even supervisory control impossible. So the best that the folks at JPL can do is practice as much as they can in simulation, and then hope that the helicopter can handle everything on its own.

Here on Earth, simulation is a critical tool for many robotics applications, because it doesn’t rely on access to expensive hardware, is non-destructive, and can be run in parallel and at faster-than-real-time speeds to focus on solving specific problems. Once you think you’ve gotten everything figured out in simulation, you can always give it a try on the real robot and see how close you came. If it works in real life, great! And if not, well, you can tweak some stuff in the simulation and try again.

For the Mars helicopter, simulation is much more important, and much higher stakes. Testing the Mars helicopter under conditions matching what it’ll find on Mars is not physically possible on Earth. JPL has flown engineering models in Martian atmospheric conditions, and they’ve used an actuated tether to mimic Mars gravity, but there’s just no way to know what it’ll be like flying on Mars until they’ve actually flown on Mars. With that in mind, the Ingenuity team has been relying heavily on simulation, since that’s one of the best tools they have to prepare for their Martian flights. We talk with Ingenuity’s Chief Pilot, Håvard Grip, to learn how it all works.

Ingenuity Facts:
Body Size: about that of a tissue box

Brains: Qualcomm Snapdragon 801

Weight: 1.8 kilograms

Propulsion: Two 1.2m carbon fiber rotors

Navigation sensors: VGA camera, laser altimeter, inclinometer

Ingenuity is scheduled to make its first flight no earlier than April 11. Before liftoff, the Ingenuity team will conduct a variety of pre-flight checks, including verifying the responsiveness of the control system and spinning the blades up to full speed (2,537 rpm) without lifting off. If everything looks good, the first flight will consist of a 1 meter per second climb to 3 meters, 30 seconds of hover at 3 meters while rotating in place a bit, and then a descent to landing. If Ingenuity pulls this off, its entire mission will have been a success. There will be more flights over the next few weeks, but all it takes is one to prove that autonomous helicopter flight on Mars is possible.
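
As a back-of-the-envelope check on that flight profile, here is a minimal sketch that adds up the planned airborne time. The FlightSegment structure and segment names are ours for illustration, not JPL's.

```python
# Rough timeline of the published first-flight profile:
# climb at ~1 m/s to 3 m, hover ~30 s while yawing, then descend.
from dataclasses import dataclass

@dataclass
class FlightSegment:
    name: str
    duration_s: float       # time spent in this segment
    end_altitude_m: float

CLIMB_RATE_M_S = 1.0
HOVER_ALT_M = 3.0

profile = [
    FlightSegment("climb", HOVER_ALT_M / CLIMB_RATE_M_S, HOVER_ALT_M),
    FlightSegment("hover_and_yaw", 30.0, HOVER_ALT_M),
    FlightSegment("descend", HOVER_ALT_M / CLIMB_RATE_M_S, 0.0),
]

total_s = sum(seg.duration_s for seg in profile)
print(f"Planned airborne time: ~{total_s:.0f} s")   # roughly 36 seconds aloft
```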

Last month, we spoke with Mars Helicopter Operations Lead Tim Canham about Ingenuity’s hardware, software, and autonomy, but we wanted to know more about how the Ingenuity team has been using simulation for everything from vehicle design to flight planning. To answer our questions, we talked with JPL’s Håvard Grip, who led the development of Ingenuity’s navigation and flight control systems. Grip also has the title of Ingenuity Chief Pilot, which is pretty awesome. He summarizes this role as “operating the flight control system to make the helicopter do what we want it to do.”

IEEE Spectrum: Can you tell me about the simulation environment that JPL uses for Ingenuity’s flight planning?

Håvard Grip: We developed a Mars helicopter simulation ourselves at JPL, based on a multi-body simulation framework that’s also developed at JPL, called DARTS/DSHELL. That's a system that has been in development at JPL for about 30 years now, and it's been used in a number of missions. And so we took that multibody simulation framework, and based on it we built our own Mars helicopter simulation, put together our own rotor model, our own aerodynamics models, and everything else that's needed in order to simulate a helicopter. We also had a lot of help from the rotorcraft experts at NASA Ames and NASA Langley.

Image: NASA/JPL

Ingenuity in JPL’s flight simulator.

Without being able to test on Mars, how much validation are you able to do of what you’re seeing in simulation?

We can do a fair amount, but it requires a lot of planning. When we made our first real prototype (with a full-size rotor that looked like what we were thinking of putting on Mars), we first spent a lot of time designing it and using simulation tools to guide that design, and when we were sufficiently confident that we were close enough, and that we understood enough about it, then we actually built the thing and designed a whole suite of tests in a vacuum chamber where we could replicate Mars atmospheric conditions. And those tests were before we tried to fly the helicopter—they were specifically targeted at what we call system identification, which has to do with figuring out what the true properties, the true dynamics of a system are, compared to what we assumed in our models. So then we got to see how well our models did, and in the places where they needed adjustment, we could go back and do that.
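
System identification, in this sense, means fitting the parameters of an assumed model until its simulated response matches measured test data. The sketch below is a generic illustration of that idea, not JPL's tooling: it assumes a simple second-order model and a brute-force grid search, with made-up "measured" data standing in for chamber tests.

```python
# Generic system-identification sketch: fit natural frequency and damping of an
# assumed second-order model so its step response matches measured data.
import numpy as np

def step_response(wn, zeta, t):
    """Unit step response of x'' + 2*zeta*wn*x' + wn^2*x = wn^2*u (underdamped)."""
    wd = wn * np.sqrt(1.0 - zeta**2)
    return 1.0 - np.exp(-zeta * wn * t) * (np.cos(wd * t) + (zeta * wn / wd) * np.sin(wd * t))

t = np.linspace(0.0, 2.0, 200)
# Stand-in for real test data: the "true" system plus measurement noise.
measured = step_response(12.0, 0.15, t) + np.random.normal(0.0, 0.01, t.shape)

# Grid search for the parameters that best explain the measurement.
best = min(
    ((wn, zeta) for wn in np.linspace(5, 20, 61) for zeta in np.linspace(0.05, 0.9, 86)),
    key=lambda p: np.sum((step_response(*p, t) - measured) ** 2),
)
print("identified (wn, zeta):", best)
```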

The simulation work that we really started after that very first initial lift test, that’s what allowed us to unlock all of the secrets to building a helicopter that can fly on Mars.
—Håvard Grip, Ingenuity Chief Pilot

We did a lot of this kind of testing. It was a big campaign, in several stages. But there are of course things that you can't fully replicate, and you do depend on simulation to tie things together. For example, we can't truly replicate Martian gravity on Earth. We can replicate the atmosphere, but not the gravity, and so we have to do various things when we fly—either make the helicopter very light, or we have to help it a little bit by pulling up on it with a string to offload some of the weight. These things don't fully replicate what it will be like on Mars. We also can't simultaneously replicate the Mars aerodynamic environment and the physical and visual surroundings that the helicopter will be flying in. These are places where simulation tools definitely come in handy, with the ability to do full flight tests from A to B, with the helicopter taking off from the ground, running the flight software that it will be running on board, simulating the images that the navigation camera takes of the ground below as it flies, feeding that back into the flight software, and then controlling it.
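
The gravity-offload tether Grip mentions is easy to quantify: to make a 1.8-kilogram helicopter (the mass from the fact box above) feel Mars gravity on Earth, the string has to carry the difference between its Earth weight and its Mars weight.

```python
# Quick arithmetic behind "pulling up on it with a string to offload some of the weight."
EARTH_G = 9.81    # m/s^2
MARS_G = 3.71     # m/s^2
mass_kg = 1.8     # Ingenuity's mass

offload_force_n = mass_kg * (EARTH_G - MARS_G)      # tension the tether must provide
offload_fraction = 1.0 - MARS_G / EARTH_G           # share of Earth weight removed
print(f"tether must offload ~{offload_force_n:.1f} N (~{offload_fraction:.0%} of Earth weight)")
```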

To what extent can simulation really compensate for the kinds of physical testing that you can’t do on Earth?

It gives you a few different possibilities. We can take certain tests on Earth where we replicate key elements of the environment, like the atmosphere or the visual surroundings for example, and you can validate your simulation on those parameters that you can test on Earth. Then, you can combine those things in simulation, which gives you the ability to set up arbitrary scenarios and do lots and lots of tests. We can Monte Carlo things, we can do a flight a thousand times in a row, with small perturbations of various parameters and tease out what our sensitivities are to those things. And those are the kinds of things that you can't do with physical tests, both because you can't fully replicate the environment and also because of the resources that would be required to do the same thing a thousand times in a row.
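
Here is a minimal sketch of that Monte Carlo idea. The "flight model," parameter names, and spreads below are invented stand-ins rather than anything from JPL; the point is only the pattern of perturbing uncertain inputs many times and examining the spread of outcomes.

```python
# Toy Monte Carlo sensitivity study: run many simulated flights with small
# random perturbations of uncertain parameters and summarize the outcomes.
import random

def simulated_landing_error(wind_gust_m_s, density_error_pct, gyro_bias_deg_s):
    # Stand-in flight model: a made-up linear mix of error sources.
    return abs(0.4 * wind_gust_m_s + 0.05 * density_error_pct + 0.8 * gyro_bias_deg_s)

N = 1000
errors = sorted(
    simulated_landing_error(
        wind_gust_m_s=random.gauss(0.0, 2.0),
        density_error_pct=random.gauss(0.0, 5.0),
        gyro_bias_deg_s=random.gauss(0.0, 0.1),
    )
    for _ in range(N)
)
print(f"median landing error: {errors[N // 2]:.2f} m, 95th percentile: {errors[int(0.95 * N)]:.2f} m")
```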

Because there are limits to the physical testing we can do on Earth, there are elements where we know there's more uncertainty. On those aspects where the uncertainty is high, we tried to build in enough margin that we can handle a range of things. And simulation gives you the ability to then maybe play with those parameters, and put them at their outer limits, and test them beyond where the real parameters are going to be to make sure that you have robustness even in those extreme cases.

How do you make sure you’re not relying on simulation too much, especially since in some ways it’s your only option?

It’s about anchoring it in real data, and we’ve done a lot of that with our physical testing. I think what you’re referring to is making your simulation too perfect, and we’re careful to model the things that matter. For example, the simulated sensors that we use have realistic levels of simulated noise and bias in them, the navigation camera images have realistic levels of degradation, we have realistic disturbances from wind gusts. If you don’t properly account for those things, then you’re missing important details. So, we try to be as accurate as we can, and to capture that by overbounding in areas where we have a high degree of uncertainty.
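
A minimal sketch of that sensor-corruption point, assuming a hypothetical altimeter model with invented bias and noise values: the flight software never sees the "true" measurement, only a deliberately degraded one.

```python
# Wrap a perfect simulated measurement with a constant bias and Gaussian noise
# before handing it to the (simulated) flight software. Values are invented.
import random

class NoisyAltimeter:
    def __init__(self, bias_m=0.05, noise_std_m=0.02):
        self.bias_m = bias_m
        self.noise_std_m = noise_std_m

    def read(self, true_altitude_m):
        return true_altitude_m + self.bias_m + random.gauss(0.0, self.noise_std_m)

altimeter = NoisyAltimeter()
# What the navigation filter would ingest while hovering at a true 3.0 m:
print([round(altimeter.read(3.0), 3) for _ in range(5)])
```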

What kinds of simulated challenges have you put the Mars helicopter through, and how do you decide how far to push those challenges?

One example is that we can simulate going over rougher terrain. We can push that, and see how far we can go and still have the helicopter behave the way that we want it to. Or we can inject levels of noise that maybe the real sensors don't see, but you want to just see how far you can push things and make sure that it's still robust.

Where we put the limits on this and what we consider to be realistic is often a challenge. We consider this on a case by case basis—if you have a sensor that you're dealing with, you try to do testing with it to characterize it and understand its performance as much as possible, and you build a level of confidence in it that allows you to find the proper balance.

When it comes to things like terrain roughness, it's a little bit of a different thing, because we're actually picking where we're flying the helicopter. We have made that choice, and we know what the terrain looks like around us, so we don’t have to wonder about that anymore.

Image: NASA/JPL-Caltech/University of Arizona

Satellite image of the Ingenuity flight area.

The way that we’re trying to approach this operationally is that we should be done with the engineering at this point. We’re not depending on going back and resimulating things, other than a few checks here and there.

Are there any examples of things you learned as part of the simulation process that resulted in changes to the hardware or mission?

You know, it’s been a journey. One of the early things that we discovered as part of modeling the helicopter was that the rotor dynamics were quite different for a helicopter on Mars, in particular with respect to how the rotor responds to the up and down bending of the blades because they’re not perfectly rigid. That motion is a very important influence on the overall flight dynamics of the helicopter, and what we discovered as we started modeling was that this motion is damped much less on Mars. Under-damped oscillatory things like that, you kind of figure might pose a control issue, and that is the case here: if you just naively design it as you might a helicopter on Earth, without taking this into account, you could have a system where the response to control inputs becomes very sluggish. So that required changes to the vehicle design from some of the very early concepts, and it led us to make a rotor that’s extremely light and rigid.
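
To make the damping point concrete, consider a generic second-order mode: as the damping ratio drops, overshoot and settling time grow rapidly, which is exactly the sluggish, oscillatory response problem Grip describes. The numbers below are invented for illustration and are not Ingenuity's actual rotor parameters.

```python
# Standard second-order step-response metrics as a function of damping ratio.
import math

def overshoot_pct(zeta):
    return 100.0 * math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta**2))

def settling_time_s(zeta, wn):
    return 4.0 / (zeta * wn)            # common 2% settling-time approximation

wn = 15.0                               # rad/s, made-up mode frequency
for zeta in (0.4, 0.1, 0.02):           # progressively less damping ("more Mars-like")
    print(f"zeta={zeta:.2f}: overshoot ~{overshoot_pct(zeta):5.1f}%, "
          f"settling ~{settling_time_s(zeta, wn):.1f} s")
```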

The design cycle for the Mars helicopter—it’s not like we could just build something and take it out to the back yard and try it and then come back and tweak it if it doesn’t work. It’s a much bigger effort to build something and develop a test program where you have to use a vacuum chamber to test it. So you really want to get as close as possible up front, on your first iteration, and not have to go back to the drawing board on the basic things.

So how close were you able to get on your first iteration of the helicopter design?

[This video shows] a very early demo which was done more or less just assuming that things were going to behave as they would on Earth, and that we’d be able to fly in a Martian atmosphere just spinning the rotor faster and having a very light helicopter. We were basically just trying to demonstrate that we could produce enough lift. You can see the helicopter hopping around, with someone trying to joystick it, but it turned out to be very hard to control. This was prior to doing any of the modeling that I talked about earlier. But once we started seriously focusing on the modeling and simulation, we then went on to build a prototype vehicle which had a full-size rotor that’s very close to the rotor that will be flying on Mars. One difference is that the prototype had cyclic control only on the lower rotor, and later we added cyclic control on the upper rotor as well, and that decision was informed in large part by the work we did in simulation—we’d put in the kinds of disturbances that we thought we might see on Mars, and decided that we needed to have the extra control authority.

How much room do you think there is for improvement in simulation, and how could that help you in the future?

The tools that we have were definitely sufficient for doing the job that we needed to do in terms of building a helicopter that can fly on Mars. But simulation is a compute-intensive thing, and so I think there’s definitely room for higher fidelity simulation if you have the compute power to do so. For a future Mars helicopter, you could get some benefits by more closely coupling together high-fidelity aerodynamic models with larger multi-body models, and doing that in a fast way, where you can iterate quickly. There’s certainly more potential for optimizing things.

Photo: NASA/JPL-Caltech

Ingenuity preparing for flight.

Watching Ingenuity’s first flight take place will likely be much like watching the Perseverance landing—we’ll be able to follow along with the Ingenuity team while they send commands to the helicopter and receive data back, although the time delay will mean that any kind of direct control won’t be possible. If everything goes the way it’s supposed to, there will hopefully be some preliminary telemetry from Ingenuity saying so, but it sounds like we’ll likely have to wait until April 12 before we get pictures or video of the flight itself.

Because Mars doesn’t care what time it is on Earth, the flight will actually be taking place very early on April 12, with the JPL Mission Control livestream starting at 3:30 a.m. EDT (12:30 a.m. PDT). Details are here.

#439073 There’s a ‘New’ Nirvana Song Out, ...

One of the primary capabilities separating human intelligence from artificial intelligence is our ability to be creative—to use nothing but the world around us, our experiences, and our brains to create art. At present, AI needs to be extensively trained on human-made works of art in order to produce new work, so we’ve still got a leg up. That said, neural networks like OpenAI’s GPT-3 and the AI design system presented as Russian designer Nikolay Ironov have been able to create content indistinguishable from human-made work.

Now there’s another example of AI artistry that’s hard to tell apart from the real thing, and it’s sure to excite 90s alternative rock fans the world over: a brand-new, never-heard-before Nirvana song. Or, more accurately, a song written by a neural network that was trained on Nirvana’s music.

The song is called “Drowned in the Sun,” and it does have a pretty Nirvana-esque ring to it. The neural network that wrote it is Magenta, which was launched by Google in 2016 with the goal of training machines to create art—or as the tool’s website puts it, exploring the role of machine learning as a tool in the creative process. Magenta was built using TensorFlow, Google’s massive open-source software library focused on deep learning applications.

The song was written as part of an album called Lost Tapes of the 27 Club, a project carried out by a Toronto-based organization called Over the Bridge focused on mental health in the music industry.

Here’s how a computer was able to write a song in the unique style of a deceased musician. Twenty to thirty tracks were fed into Magenta’s neural network in the form of MIDI files. MIDI stands for Musical Instrument Digital Interface, and the format describes a song as data representing musical parameters like pitch and tempo. Components of each song, like the vocal melody or rhythm guitar, were fed in one at a time.
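
As a rough sketch of that data-preparation step, the open-source mido library can pull note events out of a MIDI file. The file name and the simple (pitch, delta-time) encoding below are assumptions for illustration, not the project's actual pipeline.

```python
# Extract note-on events from a MIDI file as (pitch, delta-time-in-ticks) pairs.
import mido

def midi_to_note_events(path):
    events = []
    for track in mido.MidiFile(path).tracks:
        for msg in track:
            if msg.type == "note_on" and msg.velocity > 0:
                events.append((msg.note, msg.time))   # MIDI pitch 0-127, delta ticks
    return events

# Hypothetical usage (requires an actual MIDI file on disk):
# events = midi_to_note_events("rhythm_guitar_track.mid")
# print(events[:10])
```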

The neural network found patterns in these different components, and got enough of a handle on them that when given a few notes to start from, it could use those patterns to predict what would come next; in this case, chords and melodies that sound like they could’ve been written by Kurt Cobain.
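
The real system is a neural network, but the underlying idea of learning continuation patterns and extending a seed can be illustrated with a much simpler stand-in, such as this toy Markov-style predictor over MIDI pitches (entirely our own example, not Magenta's method).

```python
# Learn which pitch tends to follow which, then extend a seed melody by
# sampling from those learned transitions.
import random
from collections import defaultdict

def train_transitions(note_sequence):
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(note_sequence, note_sequence[1:]):
        counts[a][b] += 1
    return counts

def continue_melody(seed, counts, length=8):
    melody = list(seed)
    for _ in range(length):
        followers = counts.get(melody[-1])
        if not followers:
            break                                   # no learned continuation
        pitches, weights = zip(*followers.items())
        melody.append(random.choices(pitches, weights=weights)[0])
    return melody

training_notes = [60, 62, 64, 62, 60, 67, 64, 62, 60, 62, 64, 65, 67]  # toy data
model = train_transitions(training_notes)
print(continue_melody([60, 62], model))
```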

To be clear, Magenta didn’t spit out a ready-to-go song complete with lyrics. The AI wrote the music, but a different neural network wrote the lyrics (using essentially the same process as Magenta), and the team then sifted through “pages and pages” of output to find lyrics that fit the melodies Magenta created.

Eric Hogan, a singer for a Nirvana tribute band who the Over the Bridge team hired to sing “Drowned in the Sun,” felt that the lyrics were spot-on. “The song is saying, ‘I’m a weirdo, but I like it,’” he said. “That is total Kurt Cobain right there. The sentiment is exactly what he would have said.”

Cobain isn’t the only musician the Lost Tapes project tried to emulate; songs in the styles of Jimi Hendrix, Jim Morrison, and Amy Winehouse were also included. What all these artists have in common is that they died at the age of 27.

The project is meant to raise awareness around mental health, particularly among music industry professionals. It’s not hard to think of great artists of all persuasions—musicians, painters, writers, actors—whose lives were cut short by severe depression and other mental health issues for which it can be hard to get help. These issues are sometimes romanticized, as suffering does tend to create art that’s meaningful, relatable, and timeless. But according to the Lost Tapes website, the rate of suicide attempts among music industry workers is more than double that of the general population.

How many more hit songs would these artists have written if they were still alive? We’ll never know, but hopefully Lost Tapes of the 27 Club and projects like it will raise awareness of mental health issues, both in the music industry and in general, and help people in need find the right resources. Because no matter how good computers eventually get at creating music, writing, or other art, as Lost Tapes’ website pointedly says, “Even AI will never replace the real thing.”

Image Credit: Edward Xu on Unsplash

#439062 Xenobots 2.0: These Living Robots ...

The line between animals and machines was already getting blurry after a team of scientists and roboticists unveiled the first living robots last year. Now the same team has released version 2.0 of their so-called xenobots, and they’re faster, stronger, and more capable than ever.

In January 2020, researchers from Tufts University and the University of Vermont laid out a method for building tiny biological machines out of the eggs of the African clawed frog Xenopus laevis. Dubbed xenobots after their animal forebear, they could move independently, push objects, and even team up to create swarms.

Remarkably, building them involved no genetic engineering. Instead, the team used an evolutionary algorithm running on a supercomputer to test out thousands of potential designs made up of different configurations of cells.
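
A heavily simplified sketch of that evolutionary loop might look like the following. This is not the actual UVM/Tufts pipeline: the grid representation, the stand-in fitness function, and the mutation rule are all invented to show the general shape of the search.

```python
# Evolve candidate "designs" (grids of cell types) by scoring, keeping the best,
# and mutating survivors each generation.
import random

CELL_TYPES = ("skin", "heart")   # passive support vs. contractile propulsion

def random_design(size=4):
    return [[random.choice(CELL_TYPES) for _ in range(size)] for _ in range(size)]

def fitness(design):
    # Stand-in objective: reward a mix of contractile and support cells.
    heart = sum(row.count("heart") for row in design)
    total = len(design) * len(design[0])
    return heart * (total - heart)

def mutate(design):
    child = [row[:] for row in design]
    r, c = random.randrange(len(child)), random.randrange(len(child[0]))
    child[r][c] = random.choice(CELL_TYPES)
    return child

population = [random_design() for _ in range(20)]
for _ in range(50):                                  # generations
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=fitness)
print("best design score:", fitness(best))
```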

Once they’d found some promising candidates that could solve the tasks they were interested in, they used microsurgical tools to build real-world versions out of living cells. The most promising design combined heart muscle cells (which could contract to propel the xenobots) and skin cells (which provided rigid support).

Impressive as that might sound, having to build each individual xenobot by hand is obviously tedious. But now the team has devised a new approach that works from the bottom up by getting the xenobots to self-assemble their bodies from single cells. Not only is the approach more scalable, the new xenobots are faster, live longer, and even have a rudimentary memory.

In a paper in Science Robotics, the researchers describe how they took stem cells from frog embryos and allowed them to grow into clumps of several thousand cells called spheroids. After a few days, the stem cells had turned into skin cells covered in small hair-like projections called cilia, which wriggle back and forth.

Normally, these structures are used to spread mucus around on the frog’s skin. But when divorced from their normal context they took on a function more similar to that seen in microorganisms, which use cilia to move about by acting like tiny paddles.

“We are witnessing the remarkable plasticity of cellular collectives, which build a rudimentary new ‘body’ that is quite distinct from their default—in this case, a frog—despite having a completely normal genome,” corresponding author Michael Levin from Tufts University said in a press release.

“We see that cells can re-purpose their genetically encoded hardware, like cilia, for new functions such as locomotion. It is amazing that cells can spontaneously take on new roles and create new body plans and behaviors without long periods of evolutionary selection for those features,” he said.

Not only were the new xenobots faster and longer-lived, they were also much better at tasks like working together as a swarm to gather piles of iron oxide particles. And while the form and function of the xenobots was achieved without any genetic engineering, in an extra experiment the team injected them with RNA that caused them to produce a fluorescent protein that changes color when exposed to a particular color of light.

This allowed the xenobots to record whether they had come into contact with a specific light source while traveling about. The researchers say this is a proof of principle that the xenobots can be imbued with a molecular memory, and future work could allow them to record multiple stimuli and potentially even react to them.

What exactly these xenobots could eventually be used for is still speculative, but they have features that make them a promising alternative to non-organic machines. For a start, robots made of stem cells are completely biodegradable and also have their own power source in the form of “yolk platelets” found in all amphibian embryos. They are also able to self-heal in as little as five minutes if cut, and can take advantage of cells’ ability to process all kinds of chemicals.

That suggests they could have applications in everything from therapeutics to environmental engineering. But the researchers also hope to use them to better understand the processes that allow individual cells to combine and work together to create a larger organism, and how these processes might be harnessed and guided for regenerative medicine.

As these animal-machine hybrids advance, they are sure to raise ethical concerns and question marks over the potential risks. But it looks like the future of robotics could be a lot more wet and squishy than we imagined.

Image Credit: Doug Blackiston/Tufts University

#439055 Stretch Is Boston Dynamics’ Take on a ...

Today, Boston Dynamics is announcing Stretch, a mobile robot designed to autonomously move boxes around warehouses. At first glance, you might be wondering why the heck this is a Boston Dynamics robot at all, since the dynamic mobility that we associate with most of their platforms is notably absent. The combination of strength and speed in Stretch’s arm is something we haven’t seen before in a mobile robot, and it’s what makes this a unique and potentially exciting entry into the warehouse robotics space.

Useful mobile manipulation in any environment that’s not almost entirely structured is still a significant challenge in robotics, and it requires a very difficult combination of sensing, intelligence, and dynamic motion, all of which are classic Boston Dynamics. But also classic Boston Dynamics is building really cool platforms, and only later trying to figure out a way of making them commercially viable. So why Stretch, why boxes, why now, and (the real question) why not Handle? We talk with Boston Dynamics’ Vice President of Product Engineering Kevin Blankespoor to find out.

Stretch is very explicitly a box-handling mobile robot for relatively well structured warehouses. It’s in no way designed to be the kind of generalist that many of Boston Dynamics’ other robots are. And to be fair, this is absolutely how to make a robot that’s practical and cost effective right out of the crate: Identify a task that is dull or dirty or dangerous for humans, design a robot to do that task safely and efficiently, and deploy it with the expectation that it’ll be really good at that task but not necessarily much else. This is a very different approach than a robot like Spot, where the platform came first and the practical applications came later—with Stretch, it’s all about that specific task in a specific environment.

There are already robotic solutions for truck unloading, palletizing, and depalletizing, but Stretch seems to be uniquely capable. For truck unloading, the highest performance systems that I’m aware of are monstrous things (here’s one example from Honeywell) that use a ton of custom hardware to just sort of ingest the cargo within a trailer all at once. In a highly structured and predictable warehouse, this sort of thing may pay off over the long term, but it’s going to be extremely expensive and not very versatile at all.

Palletizing and depalletizing robots are much more common in warehouses today. They’re almost always large industrial arms surrounded by a network of custom conveyor belts and whatnot, suffering from the same sorts of constraints as a truck unloader—very capable in some situations, but generally high cost and low flexibility.

Photo: Boston Dynamics

Stretch is probably not going to be able to compete with either of these types of dedicated systems when it comes to sheer speed, but it offers lots of other critical advantages: It’s fast and easy to deploy, easy to use, and adaptable to a variety of different tasks without costly infrastructure changes. It’s also very much not Handle, which was Boston Dynamics’ earlier (although not that much earlier) attempt at a box-handling robot for warehouses, and (let’s be honest here) a much more Boston Dynamics-y thing than Stretch seems to be. To learn more about why the answer is Stretch rather than Handle, and how Stretch will fit into the warehouse of the very near future, we spoke with Kevin Blankespoor, Boston Dynamics’ VP of Product Engineering and chief engineer for both Handle and Stretch.

IEEE Spectrum: Tell me about Stretch!

Kevin Blankespoor: Stretch is the first mobile robot that we’ve designed specifically for the warehouse. It’s all about moving boxes. Stretch is a flexible robot that can move throughout the warehouse and do different tasks. During a typical day in the life of Stretch in the future, it might spend the morning on the inbound side of the warehouse unloading boxes from trucks. It might spend the afternoon in the aisles of the warehouse building up pallets to go to retailers and e-commerce facilities, and it might spend the evening on the outbound side of the warehouse loading boxes into the trucks. So, it really goes to where the work is.

There are already other robots that include truck unloading robots, palletizing and depalletizing robots, and mobile bases with arms on them. What makes Boston Dynamics the right company to introduce a new robot in this space?

We definitely thought through this, because there are already autonomous mobile robots [AMRs] out there. Most of them, though, are more like pallet movers or tote movers—they don't have an arm, and most of them are really just about moving something from point A to point B without manipulation capability. We've seen some experiments where people put arms on AMRs, but nothing that's made it very far in the market. And so when we started looking at Stretch, we realized we really needed to make a custom robot, and that it was something we could do quickly.

“We got a lot of interest from people who wanted to put Atlas to work in the warehouse, but we knew that we could build a simpler robot to do some of those same tasks.”

Stretch is built with pieces from Spot and Atlas and that gave us a big head start. For example, if you look at Stretch’s vision system, it's 2D cameras, depth sensors, and software that allows it to do obstacle detection, box detection, and localization. Those are all the same sensors and software that we've been using for years on our legged robots. And if you look closely at Stretch’s wrist joints, they're actually the same as Spot’s hips. They use the same electric motors, the same gearboxes, the same sensors, and they even have the same closed-loop controller controlling the joints.
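
A closed-loop joint controller of the kind Blankespoor alludes to can be sketched generically as a proportional-derivative loop around a position target. The gains and structure below are purely illustrative, not Boston Dynamics' actual controller.

```python
# Generic PD position loop for a single joint: command a torque proportional to
# the position error, with a damping term on measured velocity.
def pd_joint_torque(target_rad, measured_rad, measured_vel_rad_s, kp=80.0, kd=4.0):
    """Return a commanded torque that drives the joint toward the target angle."""
    position_error = target_rad - measured_rad
    return kp * position_error - kd * measured_vel_rad_s

# One control tick: joint at 0.30 rad moving at 0.5 rad/s, target 0.50 rad.
print(pd_joint_torque(target_rad=0.50, measured_rad=0.30, measured_vel_rad_s=0.5))
```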

If you were to buy an existing industrial robot arm with this kind of performance, it would be about four times heavier than the arm we built, and it's really hard to make that into a mobile robot. A lot of this came from our leg technology because it’s so important for our leg designs to be lightweight for the robots to balance. We took that same strength to weight advantage that we have, and built it into this arm. We're able to rapidly piece together things from our other robots to get us out of the gate quickly, so even though this looks like a totally different robot, we think we have a good head start going into this market.

At what point did you decide to go with an arm on a statically stable base on Stretch, rather than something more, you know, dynamic-y?

Stretch looks really different than the robots that Boston Dynamics has done in the past. But you'd be surprised how much similarity there is between our legged robots and Stretch under the hood. Looking back, we actually got our start on moving boxes with Atlas, and at that point it was just research and development. We were really trying to do force control for box grasping. We were picking up heavy boxes and maintaining balance and working on those fundamentals. We released a video of that as our first next-gen Atlas video, and it was interesting. We got a lot of interest from people who wanted to put Atlas to work in the warehouse, but we knew that we could build a simpler robot to do some of those same tasks.

So at this point we actually came up with Handle. The intent of Handle was to do a couple things—one was, we thought we could build a simpler robot that had Atlas’ attributes. Handle has a small footprint so it can fit in tight spaces, but it can pick up heavy boxes. And in addition to that, we had always really wanted to combine wheels and legs. We’d been talking about doing that for a decade and so Handle was a chance for us to try it.

We built a couple versions of Handle, and the first one was really just a prototype to kind of explore the morphology. But the second one was more purpose-built for warehouse tasks, and we started building pallets with that one and it looked pretty good. And then we started doing truck unloading with Handle, which was the pivotal moment. Handle could do it, but it took too long. Every time Handle grasped a box, it would have to roll back and then get to a place where it could spin itself to face forward and place the box, and trucks are very tight for a robot this size, so there's not a lot of room to maneuver. We knew the whole time that there was a robot like Stretch that was another alternative, but that's really when it became clear that Stretch would have a lot of advantages, and we started working on it about a year ago.

Stretch is certainly impressive in a practical way, but I’ll admit to really hoping that something like Handle could have turned out to be a viable warehouse robot.

I love the Handle project as well, and I’m very passionate about that robot. And there was a stage before we built Stretch where we thought, “this would be pretty standard looking compared to Handle, is it going to capture enough of the Boston Dynamics secret sauce?” But when you actually dissect all the problems within Stretch that you have to tackle, there are a lot of cool robotics problems left in there—the vision system, the planning, the manipulation, the grasping of the boxes—it's a lot harder to solve than it looks, and we're excited that we're actually getting fairly far down that road now.

What happens to Handle now?

Stretch has really taken over our team as far as warehouse products go. Handle we still use occasionally as a research robot, but it’s not actively under development. Stretch is really Handle’s descendent. Handle’s not retired, exactly, but we’re just using it for things like the dance video.

There’s still potential to do cool stuff with Handle. I do think that combining wheels with legs is very cool, and largely unexplored compared to its potential. So I still think that you're gonna see versions of robots combining wheels and legs like Handle, and maybe a version of Handle in the future that does more of that. But because we're switching this thread from research into product, Stretch is really the main focus now.

How autonomous is Stretch?

Stretch is semi-autonomous, and that means it really needs to work with people to tap into its full potential. With truck unloading, for example, a person will drive Stretch into the back of the truck and then basically point Stretch in the right direction and say go. And from that point on, everything’s autonomous. Stretch has its vision system and its mobility, and it can detect all the boxes, grasp them, and move them onto a conveyor, all autonomously. This is something that takes people hours to do manually, and Stretch can go all the way until it gets to the last box and the truck is empty. There are some parts of the truck unloading task that do require people, like verifying that the truck is in the right place and opening the doors. But this takes a person just a few minutes, and then the robot can spend hours, or as long as it takes, to do its job autonomously.
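
That point-it-and-go workflow can be sketched as a simple perception-grasp loop. The MockStretch class and method names below are hypothetical stand-ins for illustration, not Boston Dynamics' API.

```python
# Semi-autonomous truck unloading, sketched: a person positions the robot and
# starts the job; the robot then loops on its own until no boxes remain.
class MockStretch:
    def __init__(self, boxes_in_truck):
        self.remaining = list(boxes_in_truck)

    def detect_boxes(self):
        return list(self.remaining)              # stand-in for the vision system

    def grasp_and_place(self, box):
        print(f"moved {box} to conveyor")
        self.remaining.remove(box)

def unload_truck(robot):
    # Human steps happen first: drive into the trailer, verify position, open
    # the doors, press go. Everything below runs autonomously.
    while True:
        boxes = robot.detect_boxes()
        if not boxes:
            break                                # truck is empty; job done
        robot.grasp_and_place(boxes[0])          # nearest-box heuristic, illustrative only
    print("truck empty")

unload_truck(MockStretch(["box_a", "box_b", "box_c"]))
```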

There are also other tasks in the warehouse where the autonomy will increase in the future. After truck unloading, the second thing we’ll take on is order building, which will be more in the aisles of a warehouse. For that, Stretch will be navigating around the warehouse, finding the right pallet it needs to take a box from, and loading it onto a new pallet. This will be a different model with more autonomy; you’ll still have people involved to some degree, but the robot will have a higher percentage of the time where it can work independently.

What kinds of constraints is Stretch operating under? Do the boxes all have to be stacked neatly in the back of the truck, do they have to be the same size, the same color, etc?

“This will be a different model with more autonomy. You’ll still have people involved to some degree, but the robot will have a higher percentage of the time where it can work independently.”

If you think about manufacturing, where there's been automation for decades, you can go into a modern manufacturing facility and there are robot arms and conveyors and other machines. But if you look at the actual warehouse space, 90+ percent is manually operated, and that's because of what you just asked about—things that are less structured, where there’s more variety, and it's more challenging for a robot. But this is starting to change. This is really, really early days, and you’re going to be seeing a lot more robots in the warehouse space.

The warehouse robotics industry is going to grow a lot over the next decade, and a lot of that boils down to vision—the ability for robots to navigate and to understand what they’re seeing. Actually seeing boxes in real world scenarios is challenging, especially when there's a lot of variety. We've been testing our machine learning-based box detection system on Pick for a few years now, and it's gotten far enough that we know it’s one of the technical hurdles you need to overcome to succeed in the warehouse.

Can you compare the performance of Stretch to the performance of a human in a box-unloading task?

Stretch can move cases up to 50 pounds, which is the OSHA limit for how much a single person is allowed to move. The peak case rate for Stretch is 800 cases per hour. You really need to keep up with the flow of goods throughout the warehouse, and 800 cases per hour should be enough for most applications. This is similar to a really good human; most humans are probably slower, and it’s hard for a human to sustain that rate, and one of the big issues with people doing this job is injury rates. Imagine moving really heavy boxes all day, and having to reach up high or bend down to get them—injuries are really common in this area. Truck unloading is one of the hardest jobs in a warehouse, and that’s one of the reasons we’re starting there with Stretch.
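
Quick arithmetic on those quoted figures (the tonnage estimate assumes every case is at the 50-pound maximum, which it will not be in practice):

```python
# What 800 cases per hour and a 50-pound case limit imply.
cases_per_hour = 800
max_case_weight_lb = 50

seconds_per_case = 3600 / cases_per_hour
print(f"~{seconds_per_case:.1f} s per case at peak rate")                     # 4.5 s per case
print(f"up to ~{cases_per_hour * max_case_weight_lb:,} lb moved per hour")    # 40,000 lb/hour upper bound
```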

Is Stretch safe for humans to be around?

We looked at using collaborative robot arms for Stretch, but they don’t have the combination of strength and speed and reach to do this task. That’s partially just due to the laws of physics—if you want to move a 50-pound box really fast, that’s a lot of energy there. So, Stretch does need to maintain separation from humans, but it’s pretty safe when it’s operating in the back of a truck.

In the middle of a warehouse, Stretch will have a couple different modes. When it's traveling around it'll be kind of like an AMR, using a safety-rated lidar to make sure that it slows down or stops as people get closer. If it's parked and the arm is moving, it'll do the same thing, monitoring anyone getting close and either slowing down or stopping.
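
A minimal sketch of that slow-down-or-stop behavior, with invented distance thresholds and speeds rather than Boston Dynamics' actual safety parameters:

```python
# Scale the commanded speed by the closest lidar range to a person:
# full stop inside a protective radius, full speed when no one is nearby,
# and a linear ramp in between.
def speed_limit_m_s(closest_person_m, stop_dist_m=1.0, full_speed_dist_m=3.0, max_speed_m_s=1.5):
    if closest_person_m <= stop_dist_m:
        return 0.0                                   # protective stop
    if closest_person_m >= full_speed_dist_m:
        return max_speed_m_s                         # no one nearby
    fraction = (closest_person_m - stop_dist_m) / (full_speed_dist_m - stop_dist_m)
    return max_speed_m_s * fraction                  # linear slow-down zone

for d in (0.5, 1.5, 2.5, 4.0):
    print(f"person at {d} m -> limit {speed_limit_m_s(d):.2f} m/s")
```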

How do you see Stretch interacting with other warehouse robots?

For building pallet orders, we can do that in a couple of different ways, and we’re experimenting with partners in the AMR space. So you might have an AMR that moves the pallet around and then rendezvous with Stretch, and Stretch does the manipulation part and moves boxes onto the pallet, and then the AMR scuttles off to the next rendezvous point where maybe a different Stretch meets it. We’re developing prototypes of that behavior now with a few partners. Another way to do it is Stretch can actually pull the pallet around itself and do both tasks. There are two fundamental things that happen in the warehouse: there's movement of goods, and there's manipulation of goods, and Stretch can do both.

You’re aware that Hello Robot has a mobile manipulator called Stretch, right?

Great minds think alike! We know Aaron [Edsinger] from the Google days; we all used to be in the same company, and he’s a great guy. We’re in very different applications and spaces, though—Aaron’s robot is going into research and maybe a little bit into the consumer space, while this robot is on a much bigger scale aimed at industrial applications, so I think there’s actually a lot of space between our robots, in terms of how they’ll be used.

Editor’s Note: We did check in with Aaron Edsinger at Hello Robot, and he sees things a little bit differently. “We're disappointed they chose our name for their robot,” Edsinger told us. “We're seriously concerned about it and considering our options.” We sincerely hope that Boston Dynamics and Hello Robot can come to an amicable solution on this.

What’s the timeline for commercial deployment of Stretch?

This is a prototype of the Stretch robot, and anytime we design a new robot, we always like to build a prototype as quickly as possible so we can figure out what works and what doesn't work. We did that with our bipeds and quadrupeds as well. So, we get an early look at what we need to iterate, because any time you build the first thing, it's not the right thing, and you always need to make changes to get to the final version. We've got about six of those Stretch prototypes operating now. In parallel, our hardware team is finishing up the design of the productized version of Stretch. That version of Stretch looks a lot like the prototype, but every component has been redesigned from the ground up to be manufacturable, to be reliable, and to be higher performance.

For the productized version of Stretch, we’ll build up the first units this summer, and then it’ll go on sale next year. So this is kind of a sneak peek into what the final product will be.

How much does it cost, and will you be selling Stretch, or offering it as a service?

We’re not quite ready to talk about cost yet, but it’ll be cost effective, and similar in cost to existing systems if you were to combine an industrial robot arm, custom gripper, and mobile base. We’re considering both selling and leasing as a service, but we’re not quite ready to narrow it down yet.

Photo: Boston Dynamics

As with all mobile manipulators, what Stretch can do long-term is constrained far more by software than by hardware. With a fast and powerful arm, a mobile base, a solid perception system, and 16 hours of battery life, you can imagine how different grippers could enable all kinds of different capabilities. But we’re getting ahead of ourselves, because it’s a long, long way from getting a prototype to work pretty well to getting robots into warehouses in a way that’s commercially viable long-term, even when the use case is as clear as it seems to be for Stretch.

Stretch also could signal a significant shift in focus for Boston Dynamics. While Blankespoor’s comments about Stretch leveraging Boston Dynamics’ expertise with robots like Spot and Atlas are well taken, Stretch is arguably the most traditional robot that the company has designed, and they’ve done so specifically to be able to sell robots into industry. This is what you do if you’re a robotics company who wants to make money by selling robots commercially, which (historically) has not been what Boston Dynamics is all about. Despite its bonkers valuation, Boston Dynamics ultimately needs to make money, and robots like Stretch are a good way to do it. With that in mind, I wouldn’t be surprised to see more robots like this from Boston Dynamics—robots that leverage the company’s unique technology, but that are designed to do commercially useful tasks in a somewhat less flashy way. And if this strategy keeps Boston Dynamics around (while funding some occasional creative craziness), then I’m all for it.
