#437683 iRobot Remembers That Robots Are ...

iRobot has released several new robots over the last few years, including the i7 and s9 vacuums. Both of these models are very fancy and very capable, packed with innovative and useful features that we’ve been impressed by. They’re both also quite expensive—with dirt docks included, you’re looking at US $800 for the i7+, and a whopping $1,100 for the s9+. You can knock a couple hundred bucks off of those prices if you don’t want the docks, but still, these vacuums are absolutely luxury items.

If you just want something that’ll do some vacuuming so that you don’t have to, iRobot has recently announced a new Roomba option. The Roomba i3 is iRobot’s new low to midrange vacuum, starting at $400. It’s not nearly as smart as the i7 or the s9, but it can navigate (sort of) and make maps (sort of) and do some basic smart home integration. If that sounds like all you need, the i3 could be the robot vacuum for you.

iRobot calls the i3 “stylish,” and it does look pretty neat with that fabric top. Underneath, you get dual rubber primary brushes plus a side brush. There’s limited compatibility with the iRobot Home app and IFTTT, along with Alexa and Google Home. The i3 is also compatible with iRobot’s Clean Base, but that’ll cost you an extra $200, and iRobot refers to this bundle as the i3+.

The reason that the i3 only offers limited compatibility with iRobot’s app is that the i3 is missing the top-mounted camera that you’ll find in more expensive models. Instead, it relies on a downward-looking optical sensor to help it navigate, and it builds up a map as it’s cleaning by keeping track of when it bumps into obstacles and paying attention to internal sensors like a gyro and wheel odometers. The i3 can localize directly on its charging station or Clean Base (which have beacons on them that the robot can see if it’s close enough), which allows it to resume cleaning after emptying its bin or recharging. You’ll get a map of the area that the i3 has cleaned once it’s finished, but that map won’t persist between cleaning sessions, meaning that you can’t do things like set keep-out zones or identify specific rooms for the robot to clean. Many of the more useful features that iRobot’s app offers are based on persistent maps, and this is probably the biggest gap in functionality between the i3 and its more expensive siblings.
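
To give a rough sense of how this kind of navigation works, here is a minimal dead-reckoning sketch (our own illustration, not iRobot’s software): distance comes from the wheel encoders, heading comes from the gyro, and errors accumulate until the robot re-localizes on a fixed beacon like its dock. The function name and numbers are made up for the example.

```python
import math

def dead_reckon(pose, left_ticks, right_ticks, gyro_heading,
                ticks_per_meter=1500.0):
    """Update an (x, y, heading) estimate from wheel encoders and a gyro.

    Distance comes from the average wheel travel, heading from the gyro.
    Small errors accumulate over time, which is why a robot like the i3
    needs a fixed landmark (its dock beacon) to stay anchored.
    """
    x, y, _ = pose
    distance = (left_ticks + right_ticks) / (2.0 * ticks_per_meter)
    x += distance * math.cos(gyro_heading)
    y += distance * math.sin(gyro_heading)
    return (x, y, gyro_heading)

# Example: starting at the dock (0, 0), then driving forward while the
# gyro reports a heading of 0.1 radians.
pose = (0.0, 0.0, 0.0)
pose = dead_reckon(pose, left_ticks=3000, right_ticks=3000, gyro_heading=0.1)
print(pose)  # about 2 m of travel along a 0.1 rad heading
```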

According to iRobot senior global product manager Sarah Wang, the kind of augmented dead-reckoning-based mapping that the i3 uses actually works really well: “Based on our internal and external testing, the performance is equivalent with our products that have cameras, like the Roomba 960,” she says. To get this level of performance, though, you do have to be careful, Wang adds. “If you kidnap i3, then it will be very confused, because it doesn’t have a reference to know where it is.” “Kidnapping” is a term that’s used often in robotics to refer to a situation in which an autonomous robot gets moved to an unmapped location, and in the context of a home robot, the best example of this is if you decide that you want your robot to vacuum a different room instead, so you pick it up and move it there.

iRobot used to make this easy by giving all of its robots carrying handles, but not anymore, because getting moved around makes things really difficult for any robot trying to keep track of where it is. While robots like the i7 can recover using their cameras to look for unique features that they recognize, the only permanent, unique landmark that the i3 can reliably identify is the beacon on its dock. What this means is that when it comes to the i3, even more than other Roomba models, the best strategy is to just “let it do its thing,” says iRobot senior principal system engineer Landon Unninayar.

Photo: iRobot

The Roomba i3 is iRobot’s new low to midrange vacuum, starting at $400.

If you’re looking to spend a bit less than the $400 starting price of the i3, there are other options to be aware of as well. The Roomba 614, for example, does a totally decent job and costs $250. Its scheduling isn’t very clever, it doesn’t make maps, and it won’t empty itself, but it will absolutely help keep your floors clean as long as you don’t mind being a little bit more hands-on. (And there’s also Neato’s D4, which offers basic persistent maps—and lasers!—for $330.)

The other thing to consider if you’re trying to decide between the i3 and a more expensive Roomba is that without the camera, the i3 likely won’t be able to take advantage of nearly as many of the future improvements that iRobot has said it’s working on. Spending more money on a robot with additional sensors isn’t just buying what it can do now, but also investing in what it may be able to do later on, with its more sophisticated localization and ability to recognize objects. iRobot has promised major app updates every six months, and our guess is that most of the cool new stuff is going to show up in the i7 and s9. So, if your top priority is just cleaner floors, the i3 is a solid choice. But if you want to be a part of what iRobot is working on next, the i3 might end up holding you back.

#437620 The Trillion-Transistor Chip That Just ...

The history of computer chips is a thrilling tale of extreme miniaturization.

“The smaller, the better” is a trend that’s given birth to the digital world as we know it. So, why on earth would you want to reverse course and make chips a lot bigger? Well, while there’s no particularly good reason to have a chip the size of an iPad in an iPad, such a chip may prove to be genius for more specific uses, like artificial intelligence or simulations of the physical world.

At least, that’s what Cerebras, the maker of the biggest computer chip in the world, is hoping.

The Cerebras Wafer-Scale Engine is massive any way you slice it. The chip is 8.5 inches to a side and houses 1.2 trillion transistors. The next biggest chip, NVIDIA’s A100 GPU, measures an inch to a side and has a mere 54 billion transistors. The former is new, largely untested and, so far, one-of-a-kind. The latter is well-loved, mass-produced, and has taken over the world of AI and supercomputing in the last decade.

So can Goliath flip the script on David? Cerebras is on a mission to find out.

Big Chips Beyond AI
When Cerebras first came out of stealth last year, the company said it could significantly speed up the training of deep learning models.

Since then, the WSE has made its way into a handful of supercomputing labs, where the company’s customers are putting it through its paces. One of those labs, the National Energy Technology Laboratory, is looking to see what it can do beyond AI.

So, in a recent trial, researchers pitted the chip—which is housed in an all-in-one system about the size of a dorm room mini-fridge called the CS-1—against a supercomputer in a fluid dynamics simulation. Simulating the movement of fluids is a common supercomputer application useful for solving complex problems like weather forecasting and airplane wing design.

The trial was described in a preprint paper written by a team led by Cerebras’s Michael James and NETL’s Dirk Van Essendelft and presented at the supercomputing conference SC20 this week. The team said the CS-1 completed a simulation of combustion in a power plant roughly 200 times faster than the Joule 2.0 supercomputer managed on a similar task.

The CS-1 was actually faster than real time. As Cerebras wrote in a blog post, “It can tell you what is going to happen in the future faster than the laws of physics produce the same result.”

The researchers said the CS-1’s performance couldn’t be matched by any number of CPUs and GPUs. And CEO and cofounder Andrew Feldman told VentureBeat that would be true “no matter how large the supercomputer is.” Past a certain point, scaling a supercomputer like Joule no longer produces better results on this kind of problem. That’s why Joule’s simulation speed peaked at 16,384 cores, a fraction of its total 86,400 cores.
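
This scaling wall is a familiar one in parallel computing: once serial work and communication overhead dominate, adding cores stops helping, and real communication costs can even make things slower. As a rough, generic illustration (not a model of Joule or of this particular simulation), here is Amdahl’s law in a few lines of Python, with a made-up serial fraction:

```python
def amdahl_speedup(cores, serial_fraction):
    """Ideal speedup when a fixed fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Hypothetical serial fraction of 0.01 percent, chosen only to show the shape.
for cores in (1_024, 16_384, 86_400):
    print(cores, round(amdahl_speedup(cores, 0.0001), 1))

# The speedup flattens long before all 86,400 cores are in use, which is why
# throwing more of a conventional machine at the problem stops paying off.
```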

A comparison of the two machines drives the point home. Joule is the 81st fastest supercomputer in the world, takes up dozens of server racks, consumes up to 450 kilowatts of power, and required tens of millions of dollars to build. The CS-1, by comparison, fits in a third of a server rack, consumes 20 kilowatts of power, and sells for a few million dollars.

While the task is niche (but useful) and the problem well-suited to the CS-1, it’s still a pretty stunning result. So how’d they pull it off? It’s all in the design.

Cut the Commute
Computer chips begin life on a big piece of silicon called a wafer. Multiple chips are etched onto the same wafer and then the wafer is cut into individual chips. While the WSE is also etched onto a silicon wafer, the wafer is left intact as a single, operating unit. This wafer-scale chip contains almost 400,000 processing cores. Each core is connected to its own dedicated memory and its four neighboring cores.
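
To picture that layout, here is a tiny sketch (purely illustrative; the grid dimensions are a guess that lands near 400,000 and are not Cerebras’s actual geometry) of enumerating a core’s nearest neighbors in a 2D mesh:

```python
def mesh_neighbors(row, col, rows, cols):
    """Return the (row, col) indices of a core's four nearest neighbors in a
    2D mesh, dropping any that fall off the edge of the grid."""
    candidates = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    return [(r, c) for r, c in candidates if 0 <= r < rows and 0 <= c < cols]

# A hypothetical 632 x 632 grid is roughly 400,000 cores.
print(mesh_neighbors(0, 0, 632, 632))      # corner core: 2 neighbors
print(mesh_neighbors(100, 100, 632, 632))  # interior core: 4 neighbors
```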

Putting that many cores on a single chip and giving them their own memory is why the WSE is bigger; it’s also why, in this case, it’s better.

Most large-scale computing tasks depend on massively parallel processing. Researchers distribute the task among hundreds or thousands of chips. The chips need to work in concert, so they’re in constant communication, shuttling information back and forth. A similar process takes place within each chip, as information moves between processor cores, which do the calculations, and shared memory, which stores the results.

It’s a little like an old-timey company that does all its business on paper.

The company uses couriers to send and collect documents from other branches and archives across town. The couriers know the best routes through the city, but the trips take some minimum amount of time determined by the distance between the branches and archives, the courier’s top speed, and how many other couriers are on the road. In short, distance and traffic slow things down.

Now, imagine the company builds a brand new gleaming skyscraper. Every branch is moved into the new building and every worker gets a small filing cabinet in their office to store documents. Now any document they need can be stored and retrieved in the time it takes to step across the office or down the hall to their neighbor’s office. The information commute has all but disappeared. Everything’s in the same house.

Cerebras’s megachip is a bit like that skyscraper. The way it shuttles information—aided further by its specially tailored compiling software—is far more efficient than in a traditional supercomputer that needs to network a ton of traditional chips.

Simulating the World as It Unfolds
It’s worth noting the chip can only handle problems small enough to fit on the wafer. But such problems may have quite practical applications because of the machine’s ability to do high-fidelity simulation in real time. The authors note, for example, the machine should in theory be able to accurately simulate the air flow around a helicopter trying to land on a flight deck and semi-automate the process—something not possible with traditional chips.

Another opportunity, they note, would be to use a simulation as input to train a neural network also residing on the chip. In an intriguing and related example, a Caltech machine learning technique recently proved to be 1,000 times faster at solving the same kind of partial differential equations at play here to simulate fluid dynamics.

They also note that improvements in the chip (and others like it, should they arrive) will push back the limits of what can be accomplished. Already, Cerebras has teased the release of its next-generation chip, which will have 2.6 trillion transistors, 850,000 cores, and more than double the memory.

Of course, it still remains to be seen whether wafer-scale computing really takes off. The idea has been around for decades, but Cerebras is the first to pursue it seriously. Clearly, they believe they’ve solved the problem in a way that’s useful and economical.

Other new architectures are also being pursued in the lab. Memristor-based neuromorphic chips, for example, mimic the brain by putting processing and memory into individual transistor-like components. And of course, quantum computers are in a separate lane, but tackle similar problems.

It could be that one of these technologies eventually rises to rule them all. Or, and this seems just as likely, computing may splinter into a bizarre quilt of radical chips, all stitched together to make the most of each depending on the situation.

Image credit: Cerebras

#437610 How Intel’s OpenBot Wants to Make ...

You could make a pretty persuasive argument that the smartphone represents the single fastest area of technological progress we’re going to experience for the foreseeable future. Every six months or so, there’s something with better sensors, more computing power, and faster connectivity. Many different areas of robotics are benefiting from this on a component level, but over at Intel Labs, they’re taking a more direct approach with a project called OpenBot that turns US $50 worth of hardware and your phone into a mobile robot that can support “advanced robotics workloads such as person following and real-time autonomous navigation in unstructured environments.”

This work aims to address two key challenges in robotics: accessibility and scalability. Smartphones are ubiquitous and are becoming more powerful by the year. We have developed a combination of hardware and software that turns smartphones into robots. The resulting robots are inexpensive but capable. Our experiments have shown that a $50 robot body powered by a smartphone is capable of person following and real-time autonomous navigation. We hope that the presented work will open new opportunities for education and large-scale learning via thousands of low-cost robots deployed around the world.

Smartphones point to many possibilities for robotics that we have not yet exploited. For example, smartphones also provide a microphone, speaker, and screen, which are not commonly found on existing navigation robots. These may enable research and applications at the confluence of human-robot interaction and natural language processing. We also expect the basic ideas presented in this work to extend to other forms of robot embodiment, such as manipulators, aerial vehicles, and watercraft.

One of the interesting things about this idea is how not-new it is. The highest profile phone robot was likely the $150 Romo, from Romotive, which raised a not-insignificant amount of money on Kickstarter in 2012 and 2013 for a little mobile chassis that accepted one of three different iPhone models and could be controlled via another device or operated somewhat autonomously. It featured “computer vision, autonomous navigation, and facial recognition” capabilities, but was really designed to be a toy. Lack of compatibility hampered Romo a bit, and there wasn’t a lot that it could actually do once the novelty wore off.

As impressive as smartphone hardware was in a robotics context (even back in 2013), we’re obviously way, way beyond that now, and OpenBot figures that smartphones now have enough clout and connectivity that turning them into mobile robots is a good idea. You know, again. We asked Intel Labs’ Matthias Mueller why now was the right time to launch OpenBot, and he mentioned things like the existence of a large maker community with broad access to 3D printing as well as open source software that makes broader development easier.

And of course, there’s the smartphone hardware: “Smartphones have become extremely powerful and feature dedicated AI processors in addition to CPUs and GPUs,” says Mueller. “Almost everyone owns a very capable smartphone now. There has been a big boost in sensor performance, especially in cameras, and a lot of the recent developments for VR applications are well aligned with robotic requirements for state estimation.” OpenBot has been tested with 10 recent Android phones, and since camera placement tends to be similar and USB-C is becoming the charging and communications standard, compatibility is less of an issue nowadays.

Image: OpenBot

Intel researchers created this table comparing OpenBot to other wheeled robot platforms, including Amazon’s DeepRacer, MIT’s Duckiebot, iRobot’s Create-2, and Thymio. The top group includes robots based on RC trucks; the bottom group includes navigation robots for deployment at scale and in education. Note that the cost of the smartphone needed for OpenBot is not included in this comparison.

If you’d like an OpenBot of your own, you don’t need to know all that much about robotics hardware or software. For the hardware, you probably need some basic mechanical and electronics experience—think Arduino project level. The software is a little more complicated; there’s a pretty good walkthrough to get some relatively sophisticated behaviors (like autonomous person following) up and running, but things rapidly degenerate into a command-line interface that could be intimidating for new users. We did ask why OpenBot isn’t ROS-based to leverage the robustness and reach of that community, and Mueller said that ROS “adds unnecessary overhead,” although “if someone insists on using ROS with OpenBot, it should not be very difficult.”
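
To give a flavor of what “person following” means at the control level, here is a heavily simplified, hypothetical sketch (our own code, not OpenBot’s actual app or API): the phone’s person detector reports a bounding box, and a proportional controller steers to keep that box centered in the camera frame.

```python
def follow_person(box_center_x, image_width, base_speed=0.4, gain=0.8):
    """Simple proportional controller: steer so the detected person stays
    centered in the camera frame.

    box_center_x: horizontal pixel coordinate of the person's bounding box.
    Returns (left_motor, right_motor) commands in [-1, 1].
    """
    # Error in [-1, 1]: negative means the person is left of center.
    error = (box_center_x - image_width / 2) / (image_width / 2)
    turn = gain * error
    left = max(-1.0, min(1.0, base_speed + turn))
    right = max(-1.0, min(1.0, base_speed - turn))
    return left, right

# Person detected right of center in a 640-pixel-wide frame: turn right.
print(follow_person(box_center_x=400, image_width=640))  # approximately (0.6, 0.2)
```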

Without building OpenBot to explicitly be part of an existing ecosystem, the challenge going forward is to make sure that the project is consistently supported, lest it wither and die like so many similar robotics projects have before it. “We are committed to the OpenBot project and will do our best to maintain it,” Mueller assures us. “We have a good track record. Other projects from our group (e.g. CARLA, Open3D, etc.) have also been maintained for several years now.” The inherently open source nature of the project certainly helps, although it can be tricky to rely too much on community contributions, especially when something like this is first starting out.

The OpenBot folks at Intel, we’re told, are already working on a “bigger, faster and more powerful robot body that will be suitable for mass production,” which would certainly help entice more people into giving this thing a go. They’ll also be focusing on documentation, which is probably the most important but least exciting part about building a low-cost, community-focused platform like this. And as soon as they’ve put together a way for us actual novices to turn our phones into robots that can do cool stuff for cheap, we’ll definitely let you know.

#437596 IROS Robotics Conference Is Online Now ...

The 2020 International Conference on Intelligent Robots and Systems (IROS) was originally going to be held in Las Vegas this week. Like ICRA last spring, IROS has transitioned to a completely online conference, which is wonderful news: Now everyone everywhere can participate in IROS without having to spend a dime on travel.

IROS officially opened yesterday, and the best news is that registration is entirely free! We’ll take a quick look at what IROS has on offer this year, which includes some stuff that’s brand new to IROS.

Registration for IROS is super easy, and did we mention that it’s free? To register, just go here and fill out a quick and easy form. You don’t even have to be an IEEE Member or anything like that, although in our unbiased opinion, an IEEE membership is well worth it. Once you get the confirmation email, go to https://www.iros2020.org/ondemand/, put in the email address you used to register, and that’s it, you’ve got IROS!

Here are some highlights:

Plenaries and Keynotes
Without the normal space and time constraints, you won’t have to pick and choose between any of the three plenaries or 10 keynotes. Some of them are fancier than others, but we’re used to that sort of thing by now. It’s worth noting that all three plenaries (and three of the 10 keynotes) are given by extraordinarily talented women, which is excellent to see.

Technical Tracks
There are over 1,400 technical talks, divided up into 12 categories of 20 sessions each. Note that each of the 12 categories that you see on the main page can be scrolled through to show all 20 of the sessions; if there’s a bright red arrow pointing left or right you can scroll, and if the arrow is transparent, you’ve reached the end.

On the session page, you’ll see an autoplaying advertisement (that you can mute but not stop), below which each talk has a preview slide, a link to a ~15 minute presentation video, and another link to a PDF of the paper. No supplementary videos are available, which is a bit disappointing. While you can leave a comment on the video, there’s no way of interacting with the author(s) directly through the IROS site, so you’ll have to check the paper for an email address if you want to ask a question.

Award Finalists
IROS has thoughtfully grouped all of the paper award finalists together into nine sessions. These are some truly outstanding papers, and it’s worth watching these sessions even if you’re not interested in the specific subject matter.

Workshops and Tutorials
This stuff is a little more impacted by asynchronicity and on-demandedness, and some of the workshops and tutorials have already taken place. But IROS has done a good job at collecting videos of everything and making them easy to access, and the dedicated websites for the workshops and tutorials themselves sometimes have more detailed info. If you’re having trouble finding where the workshops and tutorial section is, try the “Entrance” drop-down menu up at the top.

IROS Original Series
In place of social events and lab tours, IROS this year has come up with the “IROS Original Series,” which “hosts unique content that would be difficult to see at in-person events.” Right now, there are some interviews with a diverse group of interesting roboticists, and hopefully more will show up later on.

Enjoy!
Everything on the IROS On-Demand site should be available for at least the next month, so there’s no need to try and watch a thousand presentations over three days (which is what we normally have to do). So, relax, and enjoy yourself a bit by browsing all the options. And additional content will be made available over the next several weeks, so make sure to check back often to see what’s new.

[ IROS 2020 ]

#437564 How We Won the DARPA SubT Challenge: ...

This is a guest post. The views expressed here are those of the authors and do not necessarily represent positions of IEEE or its organizational units.​

“Do you smell smoke?” It was three days before the qualification deadline for the Virtual Tunnel Circuit of the DARPA Subterranean Challenge Virtual Track, and our team was barrelling through last-minute updates to our robot controllers in a small conference room at the Michigan Tech Research Institute (MTRI) offices in Ann Arbor, Mich. That’s when we noticed the smell. We’d assumed that one of the benefits of entering a virtual disaster competition was that we wouldn’t be exposed to any actual disasters, but equipment in the basement of the building MTRI shares had started to smoke. We evacuated. The fire department showed up. And as soon as we could, the team went back into the building, hunkered down, and tried to make up for the unexpected loss of several critical hours.

Team BARCS joins the SubT Virtual Track
The smoke incident happened more than a year after we first learned of the DARPA Subterranean Challenge. DARPA announced SubT early in 2018, and at that time, we were interested in building internal collaborations on multi-agent autonomy problems, and SubT seemed like the perfect opportunity. Though a few of us had backgrounds in robotics, the majority of our team was new to the field. We knew that submitting a proposal as a largely non-traditional robotics team from an organization not known for research in robotics was a risk. However, the Virtual Track gave us the opportunity to focus on autonomy and multi-agent teaming strategies, areas requiring skill in asynchronous computing and sensor data processing that are strengths of our Institute. The prevalence of open source code, small inexpensive platforms, and customizable sensors has provided the opportunity for experts in fields other than robotics to apply novel approaches to robotics problems. This is precisely what makes the Virtual Track of SubT appealing to us, and since starting SubT, autonomy has developed into a significant research thrust for our Institute. Plus, robots are fun!

After many hours of research, discussion, and collaboration, we submitted our proposal early in 2018. And several months later, we found out that we had won a contract and became a funded team (Team BARCS) in the SubT Virtual Track. Now we needed to actually make our strategy work for the first SubT Tunnel Circuit competition, taking place in August of 2019.

Building a team of virtual robots
A natural approach to robotics competitions like SubT is to start with the question of “what can X-type robot do” and then build a team and strategy around individual capabilities. A particular challenge for the SubT Virtual Track is that we can’t design our own systems; instead, we have to choose from a predefined set of simulated robots and sensors that DARPA provides, based on the real robots used by Systems Track teams. Our approach is to look at what a team of robots can do together, determining experimentally what the best team configuration is for each environment. By the final competition, ideally we will be demonstrating the value of combining platforms across multiple Systems Track teams into a single Virtual Track team. Each of the robot configurations in the competition has an associated cost, and team size is constrained by a total cost. This provides another impetus for limiting dependence on complex sensor packages, though our ranging preference is 3D lidar, which is the most expensive sensor!

Image: Michigan Tech Research Institute

The teams can rely on realistic physics and sensors but they start off with no maps of any kind, so the focus is on developing autonomous exploratory behavior, navigation methods, and object recognition for their simulated robots.

One of the frequent questions we receive about the Virtual Track is if it’s like a video game. While it may look similar on the surface, everything under the hood in a video game is designed to service the game narrative and play experience, not require novel research in AI and autonomy. The purpose of simulations, on the other hand, is to include full physics and sensor models (including noise and errors) to provide a testbed for prototyping and developing solutions to those real-world challenges. We are starting with realistic physics and sensors but no maps of any kind, so the focus is on developing autonomous exploratory behavior, navigation methods, and object recognition for our simulated robots.

Though the simulation is more like real life than a video game, it is not real life. Due to occasional software bugs, there are still non-physical events, like the robots falling through an invisible hole in the world or driving through a rock instead of over it or flipping head over heels when driving over a tiny lip between world tiles. These glitches, while sometimes frustrating, still allow the SubT Virtual platform to be realistic enough to support rapid prototyping of controller modules that will transition straightforwardly onto hardware, closing the loop between simulation and real-world robots.

Full autonomy for DARPA-hard scenarios
The Virtual Track requirement that the robotic agents be fully autonomous, rather than have a human supervisor, is a significant distinction between the Systems and Virtual Tracks of SubT. Our solutions must be hardened against software faults caused by things like missing and bad data since our robots can’t turn to us for help. In order for a team of robots to complete this objective reliably with no human-in-the-loop, all of the internal systems, from perception to navigation to control to actuation to communications, must be able to autonomously identify and manage faults and failures anywhere in the control chain.

The communications limitations in subterranean environments (both real and virtual) mean that we need to keep the amount of information shared between robots low, while making the usability of that information for joint decision-making high. This goal has guided much of our design for autonomous navigation and joint search strategy for our team. For example, instead of sharing the full SLAM map of the environment, each agent shares only a simplified graphical representation of the space, along with data about frontiers it has not yet explored, and is able to merge this information with the graphs generated by other agents. The merged graph can then be used for planning and navigation without having full knowledge of the detailed 3D map.
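
As a generic illustration of that idea (a simplified sketch, not our actual competition code), merging two such lightweight graphs can amount to taking unions of nodes, edges, and frontiers, with a frontier dropped once any robot has visited it. The data layout below is invented for the example, and it assumes node IDs are consistent between robots, which is the hard part in practice.

```python
def merge_graphs(graph_a, graph_b):
    """Merge two lightweight topological maps shared between robots.

    Each graph is a dict with:
      "nodes":     {node_id: (x, y)}          # waypoint positions
      "edges":     set of (node_id, node_id)  # traversable connections
      "visited":   set of node_ids the robot has actually reached
      "frontiers": set of node_ids seen but not yet explored
    """
    visited = graph_a["visited"] | graph_b["visited"]
    frontiers = (graph_a["frontiers"] | graph_b["frontiers"]) - visited
    return {
        "nodes": {**graph_a["nodes"], **graph_b["nodes"]},
        "edges": graph_a["edges"] | graph_b["edges"],
        "visited": visited,
        "frontiers": frontiers,
    }

# Robot A has visited n1 and sees frontier n2; robot B has already visited n2.
a = {"nodes": {"n1": (0, 0), "n2": (5, 0)}, "edges": {("n1", "n2")},
     "visited": {"n1"}, "frontiers": {"n2"}}
b = {"nodes": {"n2": (5, 0), "n3": (5, 5)}, "edges": {("n2", "n3")},
     "visited": {"n2"}, "frontiers": {"n3"}}
print(merge_graphs(a, b)["frontiers"])  # {'n3'}: n2 is no longer a frontier
```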

Since the objective of the SubT program is to advance the state-of-the-art in rapid autonomous exploration and mapping of subterranean environments by robots, our first software design choices focused on the mapping task. The SubT virtual environments are sufficiently rich to provide interesting problems in building so-called costmaps that accurately separate obstructions that are traversable (like ramps) from legitimately impassable obstructions. An extra complication we discovered in the first course, which took place in mining tunnels, was that the angle of the lowest beam of the lidar was parallel to the down ramps in the tunnel environment, so the robots could not “see” the ground (or sometimes even obstructions on the ramp) until they got close enough to the lip of the ramp to receive lidar reflections off the bottom of the ramp. In this case, we not only had to change the costmap to convince the robot that there was safe ground to reach over the lip of the ramp, but also had to change the path planner to get the robot to proceed with caution onto the top of the ramp in case there were previously unseen obstructions on it.

In addition to navigation in the costmaps, the robot must be able to generate its own goals to navigate to. This is what produces exploratory behavior when there is no map to start with. SLAM is used to generate a detailed map of the environment explored by a single robot—the space it has probed with its sensors. From the sensor data, we are able to extract information about the interior space of the environment while looking for holes in the data, to determine things like whether the current tunnel continues or ends, or how many tunnels meet at an intersection. Once we have some understanding of the interior space, we can place navigation goals in that space. These goals naturally update as the robot traverses the tunnel, allowing the entire space to be explored.
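
As a generic illustration of one common way to turn a partial map into goals (a simplified sketch, not necessarily our exact method), the function below marks “frontier” cells in an occupancy grid: free cells that border unknown space. Clusters of those cells become candidate navigation goals, and they naturally shift outward as the robot fills in the map.

```python
FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def find_frontiers(grid):
    """Return (row, col) cells that are free but adjacent to unknown space."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNKNOWN
                   for nr, nc in neighbors):
                frontiers.append((r, c))
    return frontiers

# A tiny 3x3 map: a wall at (1, 1) and unknown space along the right edge.
grid = [
    [FREE, FREE,     UNKNOWN],
    [FREE, OCCUPIED, UNKNOWN],
    [FREE, FREE,     FREE],
]
print(find_frontiers(grid))  # [(0, 1), (2, 2)]
```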

Sending our robots into the virtual unknown
The solutions for the Virtual Track competitions are tested by DARPA in multiple sequestered runs across many environments for each Circuit in the month prior to the Systems Track competition. We must wait until the joint award ceremony at the conclusion of the Systems Track to find out the results, and we are completely in the dark about placings before the awards are announced. It’s nerve-wracking! The challenges of the worlds used in the Circuit events are also hand-designed, so features of the worlds we use for development could be combined in ways we have not anticipated—it’s always interesting to see what features were prioritized after the event. We test everything in our controllers well enough to feel confident that we at least are submitting something reasonably stable and broadly capable, and once the solution is in, we can’t really do anything other than “let go” and get back to work on the next phase of development. Maybe it’s somewhat like sending your kid to college: “we did our best to prepare you for this world, little bots. Go do good.”

Image: Michigan Tech Research Institute

The first SubT competition was the Tunnel Circuit, featuring a labyrinthine environment that simulated human-engineered tunnels, including hazards such as vertical shafts and rubble.

The first competition was the Tunnel Circuit, in October 2019. This environment models human-engineered tunnels. Two substantial challenges in this environment were vertical shafts and rubble. Our team accrued 21 points over 15 competition runs in five separate tunnel environments for a second place finish, behind Team Coordinated Robotics.

The next phase of the SubT virtual competition was the Urban Circuit. Much of the difference between our Tunnel and Urban Circuit results came down to thorough testing to identify failure modes and to implementing checks and data filtering for fault tolerance. For example, in the SLAM nodes run by a single robot, the coordinates of the most recent sensor data are changed multiple times during processing and integration into the current global 3D map of the “visited” environment stored by that robot. If there is lag in IMU or clock data, the observation may be temporarily registered at a default location that is very far from the actual position. Since most of our decision processes for exploration are downstream from SLAM, this can cause faulty or impossible goals to be generated, and the robots then spend inordinate amounts of time trying to drive through walls. We updated our method to check whether the new map position has jumped an implausibly large distance from the prior map position, and if so, to throw that data out.
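
That check is essentially an outlier gate on incoming pose estimates. A toy version of it (an illustrative sketch with an arbitrary threshold, not our actual SLAM pipeline) looks like this:

```python
import math

def accept_pose(prev_pose, new_pose, max_jump_m=5.0):
    """Reject pose estimates that jump implausibly far between updates.

    prev_pose, new_pose: (x, y) positions reported by SLAM.
    max_jump_m: largest plausible motion between updates; a tuning
    parameter that depends on robot speed and update rate.
    """
    if prev_pose is None:
        return True
    return math.dist(prev_pose, new_pose) <= max_jump_m

print(accept_pose((1.0, 2.0), (1.5, 2.2)))      # True: normal motion
print(accept_pose((1.0, 2.0), (400.0, -90.0)))  # False: discard, likely a glitch
```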

Image: Michigan Tech Research Institute

In open spaces like the rooms in the Urban circuit, we adjusted our approach to exploration through graph generation to allow the robots to accurately identify viable routes while helping to prevent forays off platform edges.

Our approach to exploration through graph generation based on identification of interior spaces allowed us to thoroughly explore the centers of rooms, although we did have to make some changes from the Tunnel circuit to achieve that. In the Tunnel circuit, we used a simplified graph of the environment based on landmarks like intersections. The advantage of this approach is that it is straightforward for two robots to compare how the graphs of the space they explored individually overlap. In open spaces like the rooms in the Urban circuit, we chose to instead use a more complex, less directly comparable graph structure based on the individual robot’s trajectory. This allowed the robots to accurately identify viable routes between features like subway station platforms and subway tracks, as well as to build up the navigation space for room interiors, while helping to prevent forays off the platform edges. Frontier information is also integrated into the graph, providing a uniform data structure for both goal selection and route planning.

The results are in!
The award ceremony for the Urban Circuit was held concurrently with the Systems Track competition awards this past February in Washington State. We sent a team representative to participate in the Technical Interchange Meeting and present the approach for our team, and the rest of us followed along from our office space on the DARPAtv live stream. While we were confident in our solution, we had also been tracking the online leaderboard and knew our competitors were going to be submitting strong solutions. Since the competition environments are hand-designed, there are always novel challenges that could be presented in these environments as well. We knew we would put up a good fight, but it was very exciting to see BARCS appear in first place!

Any time we implement a new module in our control system, there is a lot of parameter tuning that has to happen to produce reliably good autonomous behavior. In the Urban Circuit, we did not sufficiently test some parameter values in our exploration modules. The effect of this was that the robots only chose to go down small hallways after they explored everything else in their environment, which meant very often they ran out of time and missed a lot of small rooms. This may be the biggest source of lost points for us in the Urban Circuit. One of our major plans going forward from the Urban Circuit is to integrate more sophisticated node selection methods, which can help our robots more intelligently prioritize which frontier nodes to visit. By going through all three Circuit challenges, we will learn how to appropriately add weights to the frontiers based on features of the individual environments. For the Final Challenge, when all three Circuit environments will be combined into large systems, we plan to implement adaptive controllers that will identify their environments and use the appropriate optimized parameters for that environment. In this way, we expect our agents to be able to (for example) prioritize hallways and other small spaces in Urban environments, and perhaps prioritize large openings over small in the Cave environments, if the small openings end up being treacherous overall.
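
One simple way to picture that kind of frontier weighting (a hypothetical sketch with invented features and weights, not our planned implementation) is a score that trades travel cost against how promising an opening looks, with the weights switched per environment type:

```python
def score_frontier(distance_m, opening_width_m, weights):
    """Score a candidate frontier; higher is better."""
    return weights["opening"] * opening_width_m - weights["distance"] * distance_m

# Hypothetical tunings: urban runs penalize narrow hallways less harshly,
# cave runs strongly prefer large openings that are less likely treacherous.
URBAN = {"opening": 1.0, "distance": 0.2}
CAVE = {"opening": 3.0, "distance": 0.2}

candidates = [(12.0, 0.9), (30.0, 4.0)]  # (distance in m, opening width in m)
best_urban = max(candidates, key=lambda f: score_frontier(*f, URBAN))
best_cave = max(candidates, key=lambda f: score_frontier(*f, CAVE))
print(best_urban, best_cave)  # the nearby hallway wins in urban, the big opening in cave
```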

Next for our team: Cave Circuit
Coming up next for Team BARCS is the Virtual Cave Circuit. We are in the middle of testing our hypothesis that our controller will transition from UGVs to UAVs and developing strategies for refining our solution to handle Cave Circuit environmental hazards. The UAVs have a shorter battery life than the UGVs, so executing a joint exploration strategy will also be a high priority for this event, as will completing our work on graph sharing and merging, which will give our robot teams more sophisticated options for navigation and teamwork. We’re reaching a threshold in development where we can start increasing the “smarts” of the robots, which we anticipate will be critical for the final competition, where all of the challenges of SubT will be combined to push the limits of innovation. The Cave Circuit will also have new environmental challenges to tackle: dynamic features such as rock falls have been added, which will block previously accessible passages in the cave environment. We think our controllers are well-poised to handle this new challenge, and we’re eager to find out if that’s the case.

As of now, the biggest worries for us are time and team composition. The Cave Circuit deadline has been postponed to October 15 due to COVID-19 delays, with the award ceremony in mid-November, but there have also been several very compelling additions to the testbed that we would like to experiment with before submission, including droppable networking ‘breadcrumbs’ and new simulated platforms. There are design trade-offs when balancing general versus specialist approaches to the controllers for these robots—since we are adding UAVs to our team for the first time, there are new decisions that will have to be made. For example, the UAVs can ascend into vertical spaces, but only have a battery life of 20 minutes. The UGVs, by contrast, have a 90-minute battery life. One of our strategies is to do an early return to base with one or more agents to buy down risk on making any artifact reports at all for the run, hedging against our other robots not making it back in time, a lesson learned from the Tunnel Circuit. Should a UAV take on this role, or is it better to have it explore deeper into the environment and instead report its artifacts to a UGV or network node, which comes with its own risks? Testing and experimentation to determine the best options takes time, which is always a worry when preparing for a competition! We also anticipate new competitors and stiffer competition all around.
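
The early-return decision ultimately comes down to budgeting remaining battery against the estimated travel time back to base. A toy version of that check (illustrative only, with made-up numbers and threshold) might look like:

```python
def should_return(battery_minutes_left, minutes_to_base, safety_margin=1.5):
    """Head home once the remaining battery is no longer comfortably more
    than the estimated travel time back to the base station."""
    return battery_minutes_left <= safety_margin * minutes_to_base

# A UAV with a 20-minute battery has far less slack than a UGV with 90 minutes,
# which is one argument for sending it back early to bank its artifact reports.
print(should_return(battery_minutes_left=8, minutes_to_base=6))    # True: turn back
print(should_return(battery_minutes_left=60, minutes_to_base=10))  # False: keep exploring
```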

Image: Michigan Tech Research Institute

Team BARCS now has a year to prepare for the final DARPA SubT Challenge event, expected to take place in late 2021.

Going forward from the Cave Circuit, we will have a year to prepare for the final DARPA SubT Challenge event, expected to take place in late 2021. What we are most excited about is increasing the level of intelligence of the agents in their teamwork and joint exploration of the environment. Since we will have (hopefully) built up robust approaches to handling each of the specific types of environments in the Tunnel, Urban, and Cave circuits, we will be aiming to push the limits on collaboration and efficiency among the agents in our team. We view this as a central research contribution of the Virtual Track to the Subterranean Challenge because intelligent, adaptive, multi-robot collaboration is an upcoming stage of development for integration of robots into our lives.

The Subterranean Challenge Virtual Track gives us a bridge for transitioning our more abstract research ideas and algorithms relevant to this degree of autonomy and collaboration onto physical systems, and exploring the tangible outcomes of implementing our work in the real world. And the next time there’s an incident in the basement of our building, the robots (and humans) of Team BARCS will be ready to respond.

Richard Chase, Ph.D., P.E., is a research scientist at Michigan Tech Research Institute (MTRI) and has 20 years of experience developing robotics and cyber physical systems in areas from remote sensing to autonomous vehicles. At MTRI, he works on a variety of topics such as swarm autonomy, human-swarm teaming, and autonomous vehicles. His research interests are the intersection of design, robotics, and embedded systems.

Sarah Kitchen is a Ph.D. mathematician working as a research scientist and an AI/Robotics focus area leader at MTRI. Her research interests include intelligent autonomous agents and multi-agent collaborative teams, as well as applications of autonomous robots to sensing systems.

This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001118C0124 and is released under Distribution Statement (Approved for Public Release, Distribution Unlimited). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.
