Tag Archives: going

#439509 What’s Going on With Amazon’s ...

Amazon’s innovation blog recently published a post entitled “New technologies to improve Amazon employee safety,” which highlighted four different robotic systems that Amazon’s Robotics and Advanced Technology teams have been working on. Three of these robotic systems are mobile robots, which have been making huge contributions to the warehouse space over the past decade. Amazon in particular was one of the first (if not the first) e-commerce companies to really understand the fundamental power of robots in warehouses, with their $775 million acquisition of Kiva Systems’ pod-transporting robots back in 2012.

Since then, a bunch of other robotics companies have started commercially deploying robots in warehouses, and over the past five years or so, we’ve seen some of those robots develop enough autonomy and intelligence to be able to operate outside of restricted, highly structured environments and work directly with humans. Autonomous mobile robots for warehouses are now a highly competitive sector, with companies like Fetch Robotics, Locus Robotics, and OTTO Motors all offering systems that can zip payloads around busy warehouse floors safely and efficiently.

But if we’re to take the capabilities of the robots that Amazon showcased over the weekend at face value, the company appears to be substantially behind the curve on warehouse robots.

Let’s take a look at the three mobile robots that Amazon describes in their blog post:

“Bert” is one of Amazon’s first Autonomous Mobile Robots, or AMRs. Historically, it’s been difficult to incorporate robotics into areas of our facilities where people and robots are working in the same physical space. AMRs like Bert, which is being tested to autonomously navigate through our facilities with Amazon-developed advanced safety, perception, and navigation technology, could change that. With Bert, robots no longer need to be confined to restricted areas. This means that in the future, an employee could summon Bert to carry items across a facility. In addition, Bert might at some point be able to move larger, heavier items or carts that are used to transport multiple packages through our facilities. By taking those movements on, Bert could help lessen strain on employees.

This all sounds fairly impressive, but only if you’ve been checked out of the AMR space for the last few years. Amazon is presenting Bert as part of the “new technologies” they’re developing, and while that may be the case, as far as we can make out these are very much technologies that seem to be new mostly just to Amazon and not really to anyone else. There are any number of other companies who are selling mobile robot tech that looks to be significantly beyond what we’re seeing here—tech that (unless we’re missing something) has already largely solved many of the same technical problems that Amazon is working on.

We spoke with mobile robot experts from three different robotics companies, none of whom were comfortable going on record (for obvious reasons), but they all agreed that what Amazon is demonstrating in these videos appears to be 2+ years behind the state of the art in commercial mobile robots.

We’re obviously seeing a work in progress with Bert, but I’d be less confused if we were looking at a deployed system, because at least then you could make the argument that Amazon has managed to get something operational at (some) scale, which is much more difficult than a demo or pilot project. But the slow speed, the careful turns, the human chaperones—other AMR companies are way past this stage.

Kermit is an AGC (Autonomously Guided Cart) that is focused on moving empty totes from one location to another within our facilities so we can get empty totes back to the starting line. Kermit follows strategically placed magnetic tape to guide its navigation and uses tags placed along the way to determine if it should speed up, slow down, or modify its course in some way. Kermit is further along in development, currently being tested in several sites across the U.S., and will be introduced in at least a dozen more sites across North America this year.

Most folks in the mobile robots industry would hesitate to call Kermit an autonomous robot at all, which is likely why Amazon doesn’t refer to it as such, instead calling it a “guided cart.” As far as I know, pretty much every other mobile robotics company has done away with stuff like magnetic tape in favor of map-based natural-feature localization (a technology that has been commercially available for years), because then your robots can go anywhere in a mapped warehouse, not just on predefined paths. Even if you have a space and workflow that never ever changes, busy warehouses have paths that get blocked for one reason or another all the time, and modern AMRs are flexible enough to replan around those blockages and complete their tasks. A guided cart locked to its tape, by contrast, can’t even shift over a couple of feet to get around an obstacle.
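The practical difference comes down to planning over a map. As a rough illustration (a toy grid planner invented for this article, not any vendor's actual navigation stack), a robot that plans over an occupancy map can route around a blocked aisle, while a tape-follower simply stops at the obstacle:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid.
    grid[r][c] == 1 means blocked; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk back through predecessors to recover the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no route exists at all

# A 3x5 aisle map: 0 = free floor, 1 = obstacle.
aisle = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],   # a pallet blocks the middle of the aisle
    [0, 0, 0, 0, 0],
]
route = plan_path(aisle, (1, 0), (1, 4))
print(route)  # detours through a neighboring row instead of giving up
```

A tape-guided cart, in this analogy, is constrained to a single fixed row of cells: if any cell on the tape is occupied, there is nothing to replan.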

I have no idea why this monstrous system called Scooter is the best solution for moving carts around a warehouse. It just seems needlessly huge and complicated, especially since we know Amazon already understands that a great way of moving carts around is by using much smaller robots that can zip underneath a cart, lift it up, and carry it around with them. Obviously, the Kiva drive units only operate in highly structured environments, but other AMR companies are making this concept work on the warehouse floor just fine.

Why is Amazon at “possibilities” when other companies are at commercial deployments?

I honestly just don’t understand what’s happening here. Amazon has (I assume) a huge R&D budget at its disposal. It was investing in robotic technology for e-commerce warehouses super early, and at an unmatched scale. Even beyond Kiva, Amazon obviously understood the importance of AMRs several years ago, with its $100+ million acquisition of Canvas Technology in 2019. But looking back at Canvas’ old videos, it seems like Canvas was doing in 2017 more or less what we’re seeing Amazon’s Bert robot doing now, nearly half a decade later.

We reached out to Amazon Robotics for comment and sent them a series of questions about the robots in these videos. They sent us this response:

The health and safety of our employees is our number one priority—and has been since day one. We’re excited about the possibilities robotics and other technology can play in helping to improve employee safety.

Hmm.

I mean, sure, I’m excited about the same thing, but I’m still stuck on why Amazon is at possibilities while other companies are at commercial deployments. It’s certainly possible that the sheer Amazon-ness of Amazon is a significant factor here, in the sense that a commercial deployment for Amazon is orders of magnitude larger and more complex than anything the AMR companies we’re comparing them to are dealing with. And if Amazon can figure out how to make (say) an AMR without using lidar, that would make a much more significant difference for an in-house large-scale deployment than for companies offering AMRs as a service.

For another take on what might be going on with this announcement from Amazon, we spoke with Matt Beane, who got his PhD at MIT and studies robotics at UCSB’s Technology Management Program. At the ACM/IEEE International Conference on Human-Robot Interaction (HRI) last year, Beane published a paper on the value of robots as social signals—that is, organizations get valuable outcomes from just announcing they have robots, because this encourages key audiences to see the organization in favorable ways. “My research strongly suggests that Amazon is reaping signaling value from this announcement,” Beane told us. There’s nothing inherently wrong with signaling, because robots can create instrumental value, and that value needs to be communicated to the people who will, ideally, benefit from it. But you have to be careful: “My paper also suggests this can be a risky move,” explains Beane. “Blowback can be pretty nasty if the systems aren’t in full-tilt, high-value use. In other words, it works only if the signal pretty closely matches the internal reality.”

There’s no way for us to know what the internal reality at Amazon is. All we have to go on is this blog post, which isn’t much, and we should reiterate that there may be a significant gap between what the post is showing us about Amazon’s mobile robots and what’s actually going on at Amazon Robotics. My hope is what we’re seeing here is primarily a sign that Amazon Robotics is starting to scale things up, and that we’re about to see them get a lot more serious about developing robots that will help make their warehouses less tedious, safer, and more productive.

Posted in Human Robots


#439105 This Robot Taught Itself to Walk in a ...

Recently, in a Berkeley lab, a robot called Cassie taught itself to walk, a little like a toddler might. Through trial and error, it learned to move in a simulated world. Then its handlers sent it strolling through a minefield of real-world tests to see how it’d fare.

And, as it turns out, it fared pretty damn well. With no further fine-tuning, the robot—which is basically just a pair of legs—was able to walk in all directions, squat down while walking, right itself when pushed off balance, and adjust to different kinds of surfaces.

It’s the first time a machine learning approach known as reinforcement learning has been so successfully applied in two-legged robots.

This likely isn’t the first robot video you’ve seen, nor the most polished.

For years, the internet has been enthralled by videos of robots doing far more than walking and regaining their balance. All that is table stakes these days. Boston Dynamics, the heavyweight champ of robot videos, regularly releases mind-blowing footage of robots doing parkour, back flips, and complex dance routines. At times, it can seem the world of I, Robot is just around the corner.

This sense of awe is well-earned. Boston Dynamics is one of the world’s top makers of advanced robots.

But they still have to meticulously hand program and choreograph the movements of the robots in their videos. This is a powerful approach, and the Boston Dynamics team has done incredible things with it.

In real-world situations, however, robots need to be robust and resilient. They need to regularly deal with the unexpected, and no amount of choreography will do. Which is how, it’s hoped, machine learning can help.

Reinforcement learning has been most famously exploited by Alphabet’s DeepMind to train algorithms that thrash humans at some of the most difficult games. Simplistically, it’s modeled on the way we learn. Touch the stove, get burned, don’t touch the damn thing again; say please, get a jelly bean, politely ask for another.
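That "touch the stove" intuition maps directly onto the simplest form of a reinforcement-learning value update. Here's a minimal, self-contained sketch (a toy one-state example invented for illustration, nothing like DeepMind's actual systems): the agent tries actions, observes rewards, and nudges its value estimates toward what it experienced.

```python
import random

random.seed(0)

# A toy one-state world with two actions, in the spirit of
# "touch the stove, get burned; say please, get a jelly bean".
ACTIONS = ["touch_stove", "say_please"]
REWARD = {"touch_stove": -1.0, "say_please": +1.0}

q = {a: 0.0 for a in ACTIONS}   # estimated value of each action
alpha, epsilon = 0.5, 0.2       # learning rate, exploration rate

for step in range(200):
    # Epsilon-greedy: mostly exploit the best estimate, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q, key=q.get)
    reward = REWARD[action]
    # Move the estimate for the chosen action toward the observed reward.
    q[action] += alpha * (reward - q[action])

print(q)  # q["say_please"] ends near +1, q["touch_stove"] near -1
```

Training a walking policy is this same loop at vastly larger scale: millions of states, continuous actions, and a neural network standing in for the lookup table.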

In Cassie’s case, the Berkeley team used reinforcement learning to train an algorithm to walk in a simulation. It’s not the first AI to learn to walk in this manner. But skills learned in simulation don’t always translate to the real world.

Subtle differences between the two can (literally) trip up a fledgling robot as it tries out its sim skills for the first time.

To overcome this challenge, the researchers used two simulations instead of one. The first simulation, an open source training environment called MuJoCo, was where the algorithm drew upon a large library of possible movements and, through trial and error, learned to apply them. The second simulation, called Matlab SimMechanics, served as a low-stakes testing ground that more precisely matched real-world conditions.
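The two-simulator idea can be sketched as a train/validate loop. Everything below is a toy stand-in invented for illustration; `ToySim`, `Policy`, and the scoring are not MuJoCo or SimMechanics APIs, just the shape of the pipeline: cheap trial-and-error in one simulator, with graduation gated on a second, more realistic one.

```python
class ToySim:
    """Stand-in for a physics simulator; `gap` models how far its
    dynamics sit from the real robot's."""
    def __init__(self, gap):
        self.gap = gap

    def rollout(self, policy):
        # Toy score: a more skilled policy copes with a larger gap.
        return policy.skill - self.gap

class Policy:
    def __init__(self):
        self.skill = 0

    def update(self, reward):
        # Stand-in for a reinforcement-learning update step.
        self.skill += 1

fast_sim = ToySim(gap=1)        # cheap and approximate (MuJoCo's role here)
validation_sim = ToySim(gap=3)  # closer to hardware (SimMechanics' role here)

policy = Policy()
# Train in the cheap simulator, but only "graduate" toward hardware once
# the policy also clears the harder, more realistic simulator.
while validation_sim.rollout(policy) < 5:
    reward = fast_sim.rollout(policy)
    policy.update(reward)

print(policy.skill)  # 8
```

The design point is the gate: because the validation simulator is deliberately harder to score well in, a policy that passes it has margin to spare when it meets the messier physics of the real robot.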

Once the algorithm was good enough, it graduated to Cassie.

And amazingly, it didn’t need further polishing. Said another way, when it was born into the physical world, it knew how to walk just fine. It was also quite robust: the researchers write that two motors in Cassie’s knee malfunctioned during the experiment, but the robot was able to adjust and keep on trucking.

Other labs have been hard at work applying machine learning to robotics.

Last year Google used reinforcement learning to train a (simpler) four-legged robot. And OpenAI has used it with robotic arms. Boston Dynamics, too, will likely explore ways to augment their robots with machine learning. New approaches—like this one aimed at training multi-skilled robots or this one offering continuous learning beyond training—may also move the dial. It’s early yet, however, and there’s no telling when machine learning will exceed more traditional methods.

And in the meantime, Boston Dynamics bots are testing the commercial waters.

Still, robotics researchers who were not part of the Berkeley team think the approach is promising. Edward Johns, head of Imperial College London’s Robot Learning Lab, told MIT Technology Review, “This is one of the most successful examples I have seen.”

The Berkeley team hopes to build on that success by trying out “more dynamic and agile behaviors.” So, might a self-taught parkour-Cassie be headed our way? We’ll see.

Image Credit: University of California Berkeley Hybrid Robotics via YouTube

Posted in Human Robots

#439095 DARPA Prepares for the Subterranean ...

The DARPA Subterranean Challenge Final Event is scheduled to take place at the Louisville Mega Cavern in Louisville, Kentucky, from September 21 to 23. We’ve followed SubT teams as they’ve explored their way through abandoned mines, unfinished nuclear reactors, and a variety of caves, and now everything comes together in one final course where the winner of the Systems Track will take home the $2 million first prize.

It’s a fitting reward for teams that have been solving some of the hardest problems in robotics, but winning isn’t going to be easy, and we’ll talk with SubT Program Manager Tim Chung about what we have to look forward to.

Since we haven’t talked about SubT in a little while (what with the unfortunate covid-related cancellation of the Systems Track Cave Circuit), here’s a quick refresher of where we are: the teams have made it through the Tunnel Circuit, the Urban Circuit, and a virtual version of the Cave Circuit, and some of them have been testing in caves of their own. The Final Event will include all of these environments, and the teams of robots will have 60 minutes to autonomously map the course, locating artifacts to score points. Since I’m not sure where on Earth there’s an underground location that combines tunnels and caves with urban structures, DARPA is going to have to get creative, and the location in which they’ve chosen to do that is Louisville, Kentucky.

The Louisville Mega Cavern is a former limestone mine, most of which is under the Louisville Zoo. It’s not all that deep, mostly less than 30 meters under the surface, but it’s enormous: with 370,000 square meters of rooms and passages, the cavern currently hosts (among other things) a business park, a zipline course, and mountain bike trails, because why not. While DARPA is keeping pretty quiet on the details, I’m guessing that they’ll be taking over a chunk of the cavern and filling it with features representing as many of the environmental challenges as they can.

To learn more about how the SubT Final Event is going to go, we spoke with SubT Program Manager Tim Chung. But first, we talked about Tim’s perspective on the success of the Urban Circuit, and how teams have been managing without an in-person Cave Circuit.

IEEE Spectrum: How did the SubT Urban Circuit go?

Tim Chung: On a couple fronts, Urban Circuit was really exciting. We were in this unfinished nuclear power plant—I’d be surprised if any of the competitors had prior experience in such a facility, or anything like it. I think that was illuminating both from an experiential point of view for the competitors, but also from a technology point of view, too.

One thing that I thought was really interesting was that we, DARPA, didn't need to make the venue more challenging. The real world is really that hard. There are places that were just really heinous for these robots to have to navigate through in order to look in every nook and cranny for artifacts. There were corners and doorways and small corridors and all these kinds of things that really forced the teams to work hard, and the feedback was, why did DARPA have to make it so hard? But we didn’t, and in fact there were places that, for the safety of the robots and personnel, we had to ensure the robots couldn’t go.

It sounds like some teams thought this course was on the more difficult side—do you think you tuned it to just the right amount of DARPA-hard?

Our calibration worked quite well. We were able to tease out and help refine and better understand what technologies are both useful and critical and also those technologies that might not necessarily get you the leap ahead capability. So as an example, the Urban Circuit really emphasized verticality, where you have to be able to sense, understand, and maneuver in three dimensions. Being able to capitalize on their robot technologies to address that verticality really stratified the teams, and showed how critical those capabilities are.

We saw teams that brought a lot of those capabilities do very well, and teams that brought baseline capabilities do what they could on the single floor that they were able to operate on. And so I think we got the Goldilocks solution for Urban Circuit that combined both difficulty and ambition.

Photos: Evan Ackerman/IEEE Spectrum

Two SubT teams embedded networking equipment in balls that they could throw onto the course.

One of the things that I found interesting was that two teams independently came up with throwable network nodes. What was DARPA’s reaction to this? Is any solution a good solution, or was it more like the teams were trying to game the system?

You mean, do we want teams to game the rules in any way so as to get a competitive advantage? I don't think that's what the teams were doing. They were operating within the bounds of the rules, which permitted throwable sensors: you could stand at the line and see how far you could chuck these things. Not only was that acceptable under the rules, it was anticipated. Behind the scenes, we tried to do exactly what these teams are doing and think through different approaches, so we explicitly didn't forbid such things in our rules because we thought it's important to have as wide an aperture as possible.

With these comms nodes specifically, I think they’re pretty clever. They were in some cases hacked together with a variety of different sports paraphernalia to see what would provide the best cushioning. You know, a lot of that happens in the field, and what it captured was that sometimes you just need to be up at two in the morning and thinking about things in a slightly different way, and that's when some nuggets of innovation can arise, and we see this all the time with operators in the field as well. They might only have duct tape or Styrofoam or whatever the case may be and that's when they come up with different ways to solve these problems. I think from DARPA’s perspective, and certainly from my perspective, wherever innovation can strike, we want to try to encourage and inspire those opportunities. I thought it was great, and it’s all part of the challenge.

Is there anything you can tell us about what your original plan had been for the Cave Circuit?

I can say that we’ve had the opportunity to go through a number of these caves scattered all throughout the country, and engage with caving communities: cavers clubs, speleologists that conduct research, and of course the cave rescue community. The single biggest takeaway is that every cave (and there are tens of thousands of them in the US alone) has its own personality, and a lot of that personality is quite hidden from humans, because we can’t explore or access all of the cave. This led us to a number of different caves that were intriguing from a DARPA perspective but also inspirational for our Cave Circuit Virtual Competition.

How do you feel like the tuning was for the Virtual Cave Circuit?

The Virtual Competition, as you well know, was exciting in the sense that we could basically combine eight worlds into one competition, whereas the Systems Track competition really didn’t give us that opportunity. Even if we had been able to hold the Cave Circuit Systems Competition in person, it would have been at one site, and it would have been challenging to represent the level of diversity that we could with the Virtual Competition. So I think from that perspective, it’s clearly an advantage in terms of calibration—diversity gets you the ability to aggregate results to capture those that excel across all worlds as well as those that do well in one world or some worlds and not the others. I think the calibration was great in the sense that we were able to see the gamut of performance. Those that did well, did quite well, and those that have room to grow showed where those opportunities are for them as well.

We had to find ways to capture that diversity and that representativeness, and I think one of the fun ways we did that was with the different cave world tiles that we were able to combine in a variety of different ways. We also made use of a real world data set that we were able to take from a laser scan. Across the board, we had a really great chance to illustrate why virtual testing and simulation still plays such a dominant role in robotics technology development, and why I think it will continue to play an increasing role for developing these types of autonomy solutions.

Photo: Team CSIRO Data 61

Considering the diversity of caves, how can Systems Track teams learn from testing in whatever cave is local to them and effectively apply that to the cave environment that's part of the final?

I think that hits the nail on the head for what we as technologists are trying to discover—what are the transferable generalizable insights and how does that inform our technology development? As roboticists we want to optimize our systems to perform well at the tasks that they were designed to do, and oftentimes that means specialization because we get increased performance at the expense of being a generalist robot. I think in the case of SubT, we want to have our cake and eat it too—we want robots that perform well and reliably, but we want them to do so not just in one environment, which is how we tend to think about robot performance, but we want them to operate well in many environments, many of which have yet to be faced.

And I think that's kind of the nuance here: we want robot systems to be generalists for the sake of being able to handle the unknown, namely the real world, but still achieve a high level of performance. Perhaps they do that through their combined use of different technologies, or advances in autonomy or perception approaches, or novel mechanisms or mobility, but somehow they're still able, at least in aggregate, to achieve high performance.

We know these teams eagerly await any type of clue that DARPA can provide about the SubT environments. From the environment previews for Tunnel, Urban, and even Cave, the teams were pivoting and thinking a little bit differently. The takeaway, however, was that they didn't go to a clean-sheet design: their systems were flexible enough that they could incorporate some of those specialist trends while still maintaining the notion of a generalist framework.

Looking ahead to the SubT Final, what can you tell us about the Louisville Mega Cavern?

As always, I’ll keep you in suspense until we get you there, but I can say that from the beginning of the SubT Challenge we had always envisioned teams of robots that are able to address not only the uncertainty of what's right in front of them, but also the uncertainty of what comes next. So I think the teams will be advantaged by thinking through subdomain awareness, or domain awareness if you want to generalize it, whether that means tuning multi-purpose robots, or deploying different robots, or employing your team of robots differently. Knowing which subdomain you are in is likely to be helpful, because then you can take advantage of those unique lessons learned through all those previous experiences and capitalize on them.

As far as specifics, I think the Mega Cavern offers many of the features important to what it means to be underground, while giving DARPA a pretty blank canvas to realize our vision of the SubT Challenge.

The SubT Final will be different from the earlier circuits in that there’s just one 60-minute run, rather than two. This is going to make things a lot more stressful for teams who have experienced bad robot days—why do it this way?

The preliminary round has two 30-minute runs, and those two runs are very similar to how we did it during the circuits, with a single run per configuration per course. Teams will have the opportunity to show that their systems can face the obstacles in the final course, and it's the sum of those scores, much like during the circuits, that helps mitigate some of the concerns you mentioned of having one robot somehow ruin their chances at a prize.

The prize round does give DARPA as well as the community a chance to focus on the top six teams from the preliminary round, and allows us to understand how they came to be at the top of the pack while emphasizing their technological contributions. The prize round will be one and done, but all of these teams we anticipate will be putting their best robot forward and will show the world why they deserve to win the SubT Challenge.

We’ve always thought that when called upon, these robots need to operate in really challenging environments, and in the context of real-world operations, there is no second chance. I don't think it's actually that much of a departure from our interests and insistence on bringing reliable technologies to the field. For those teams that might have something break here and there, that's all part of the challenge of being resilient. Many teams struggled with robots that were debilitated on the course, and they still found ways to succeed and overcome that in the field, so maybe the rules emphasize that desire for showing up and working on game day, which is consistent, I think, with how we've always envisioned it. This isn’t to say that these systems have to work perfectly; they just have to work in a way such that the team is resilient enough to tackle anything they face.

It’s not too late for teams to enter for both the Virtual Track and the Systems Track to compete in the SubT Final, right?

Yes, that's absolutely right. Qualifications are still open, and we are eager to welcome new teams to join in along with our existing competitors. I think any dark-horse competitors coming into the Finals may be able to bring something that we haven't seen before, and that would be really exciting. I think it'll really make for an incredibly vibrant and illuminating final event.

The final event qualification deadline for the Systems Competition is April 21, and the qualification deadline for the Virtual Competition is June 29. More details here.

Posted in Human Robots