#439768 DARPA SubT Finals: Robot Operator Wisdom
Each of the DARPA Subterranean Challenge teams is allowed to bring up to 20 people to the Louisville Mega Cavern for the final event. Of those 20 people, only five can accompany the robots to the course staging area to set up the robots. And of those five, just one person can be what DARPA calls the Human Supervisor.
The Human Supervisor role, which most teams refer to as Robot Operator, is the only person allowed to interface with the robots while they're on the course. Or, it's probably more accurate to say that the team's base station computer is the only thing allowed to interface with robots on the course, and the human operator is the only person allowed to use the base station. The operator can talk to their teammates at the staging area, but that's about it—the rest of the team can't even look at the base station screens.
Robot operator is a unique job that can be different for each team, depending on what kinds of robots that team has deployed, how autonomous those robots are, and what strategy the team is using during the competition. On the second day of the SubT preliminary competition, we talked with robot operators from all eight Systems Track teams to learn more about their robots, exactly what they do during the competition runs, and their approach to autonomy.
“DARPA is interested in approaches that are highly autonomous without the need for substantive human interventions; capable of remotely mapping and/or navigating complex and dynamic terrain; and able to operate with degraded and unreliable communication links. The team is permitted to have a single Human Supervisor at a Base Station… The Human Supervisor is permitted to view, access, and/or analyze both course data and status data. Only the Human Supervisor is permitted to use wireless communications with the systems during the competition run.” DARPA's idea here is that most of the robots competing in SubT will be mostly autonomous most of the time, hence their use of “supervisor” rather than “operator.” Requiring substantial human-in-the-loop-ness is problematic for a couple of reasons—first, direct supervision requires constant communication, and we've seen how problematic communication can be on the SubT course. And second, operation means the need for a skilled and experienced operator, which is fine if you're a SubT team that's been practicing for years but could be impractical for a system of robots that's being deployed operationally.
So how are teams making the robot operator role work, and how close are they to being robot supervisors instead? I went around the team garages on the second day of preliminary runs and asked each team's operator the same three questions about their role. I also asked each operator, “What is one question I should ask the next operator I talk to?” and added it as a bonus question, so each operator answered a question suggested by a different team's operator.
Team Robotika
Robot Operator: Martin Dlouhy
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
This is the third time we've participated in a SubT event; we've tried various robots, small ones, bigger ones, but for us, these two robots seem to be optimal. Because we are flying from the Czech Republic, the robots have to fit in our checked luggage. We also don't have the smaller robots or the drones that we had, because as of like three weeks ago, we didn't even know if we would be allowed to enter the United States. So this is optimal for what we can bring to the competition, and we would like to demonstrate that we can do something with a simple solution.
Once your team of robots is on the course, what do you do during the run?
We have two robots, so it's easier than for some other teams. When the robots are in network range, I have some small tools to locally analyze data to help find artifacts that are hard for the robots to see, like the cellphone or the gas source. If everything goes fine, I basically don't have to be there. We've been more successful in the Virtual SubT competition because over half our team are software developers. We've really pushed hard to make the Virtual and System software as close as possible, and in Virtual, it's fully autonomous from beginning to end. There's one step that I do manually as operator—the robots have neural networks to recognize artifacts, but it's on me to click confirm to submit the artifact reports to DARPA.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
I would actually like an operator-less solution, and we could run it, but it's still useful to have a human operator—it's safer for the robot, because it's obvious to a human when the robot is not doing well.
Bonus operator question: What are the lowest and highest level decisions you have to make?
The lowest level is, I open the code and change it on the fly. I did it yesterday to change some of the safety parameters. I do this all the time, it's normal. The highest level is asking the team, “guys, how are we going to run our robots today.”
Team MARBLE
Robot Operator: Dan Riley
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
We've been using the Huskies [wheeled robots] since the beginning of the competition, it's a reliable platform with a lot of terrain capability. It's a workhorse that can do a lot of stuff. We were also using a tank-like robot at one time, but we had traversability issues so we decided to drop that one for this competition. We also had UAVs, because there's a lot of value in not having to worry about the ground while getting to areas that you can't get to with a ground robot, but unfortunately we had to drop that too because of the number of people and time that we had. We decided to focus on what we knew we could do well, and make sure that our baseline system was super solid. And we added the Spot robots within the last two months mostly to access areas that the Huskies can't, like going up and down stairs and tricky terrain. It's fast, and we really like it.
Our team of robots is closely related to our deployment strategy. The way our planner and multi-robot coordination works is that the first robot really just plows through the course looking for big frontiers and new areas, and then subsequent robots will fill in the space behind looking for more detail. So we deploy the Spots first to push the environment since they're faster than the Huskies, and the Huskies will follow along and fill in the communications network.
We know we don't want to run five robots tomorrow. Before we got here, we saw the huge cavern and thought that running more robots would be better. But based on the first couple runs, we now know that the space inside is much smaller, so we think four robots is good.
Once your team of robots is on the course, what do you do during the run?
The main thing I'm watching for is artifact reports from robots. While I'm waiting for artifact reports, I'm monitoring where the robots are going, and mainly I want to see them going to new areas. If I see them backtracking or going where another robot has explored already, I have the ability to send them new goal points in another area. When I get an artifact report, I look at the image to verify that it's a good report. For objects that may not be visible, like the cell phone [which has to be detected through the wireless signal it emits], if it's early in the mission I'll generally wait and see if I get any other reports from another robot on it. The localization isn't great on those artifacts, so once I do submit, if it doesn't score, I have to look around to find an area where it might be. For instance, we found this giant room with lots of shelves and stuff, and that's a great place to put a cell phone, and sure enough, that's where the cell phone was.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
We pride ourselves on our autonomy. From the very beginning, that was our goal, and actually in earlier competitions I had very little control over the robot, I could not even send it a goal point. All I was getting was reports—it was a one-way street of information. I might have been able to stop the robot, but that was about it. Later on, we added the goal point capability and an option to drive the robot if I need to take over to get it out of a situation.
I'm actually the lead for our Virtual Track team as well, and that's already decision-free. We're running the exact same software stack on our robots, and the only difference is that the virtual system also does artifact reporting. Honestly, I'd say that we're more effective having the human be able to make some decisions, but the exact same system works pretty well without having any human at all.
Bonus operator question: How much sleep did you get last night?
I got eight hours, and I could have had more, except I sat around watching TV for a while. We stressed ourselves out a lot during the first two competitions, and we had so many problems. It was horrible, so we said, “we're not doing that again!” A lot of our problems started with the setup and launching phase, just getting the robots started up and ready to go and out of the gate. So we spent a ton of time making sure that our startup procedures were all automated. And when you're able to start up easily, things just go well.
Team Explorer
Robot Operator: Chao Cao
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
We tried to diversify our robots for the different kinds of environments in the challenge. We have wheeled vehicles, aerial vehicles, and legged vehicles (Spot robots). Our wheeled vehicles are different sizes; two are relatively big and one is smaller, and two are articulated in the middle to give them better mobility performance in rough terrain. Our smaller drones can be launched from the bigger ground robots, and we have a larger drone with better battery life and more payload.
In total, there are 11 robots, which is quite a lot to be managed by a single human operator under a constrained time limit, but if we manage those robots well, we can explore quite a large three dimensional area.
Once your team of robots is on the course, what do you do during the run?
Most of the time, to be honest, it's like playing a video game. It's about allocating resources to gain rewards (which in this case are artifacts) by getting the robots spread out to maximize coverage of the course. I'm monitoring the status of the robots, where they're at, and what they're doing. Most of the time I rely on the autonomy of the robots, including for exploration, coordination between multiple robots, and detecting artifacts. But there are still times when the robots might need my help, for example yesterday one of the bigger robots got itself stuck in the cave branch but I was able to intervene and get it to drive out.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
Humans have a semantic understanding of the environment. Just by looking at a camera image, I can predict what an environment will be like and how risky it will be, but robots don't have that kind of higher level decision capability. So I might want a specific kind of robot to go into a specific kind of environment based on what I see, and I can redirect robots to go into areas that are a better fit for them. For me as an operator, at least from my personal experience, I think it's still quite challenging for robots to perform this kind of semantic understanding, and I still have to make those decisions.
Bonus operator question: What is your flow for decision making?
Before each run, we'll have a discussion among all the team members to figure out a rough game plan, including a deployment sequence—which robots go first, should the drones be launched from the ground vehicles or from the staging area. During the run, things are changing, and I have to make decisions based on the environment. I'll talk to the pit crew about what I can see through the base station, and then I'll make an initial proposal based on my instincts for what I think we should do. But I'm very focused during the run and have a lot of tasks to do, so my teammates will think about time constraints and how conservative we want to be and where other robots are because I can't think through all of those possibilities, and then they'll give me feedback. Usually this back and forth is quick and smooth.
The Robot Operator is the only person allowed to interface with the robots while they're on the course—the operator pretty much controls the entire run by themselves.
DARPA
Team CTU-CRAS-NORLAB
Robot Operator: Vojtech Salansky
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
We chose many different platforms. We have some tracked robots, wheeled robots, Spot robots, and some other experimental UGVs [small hexapods and one big hexapod], and every UGV has a different ability to traverse terrain, and we are trying to cover all possible locomotion types to be able to traverse anything on the course. Besides the UGVs, we're using UAVs as well that are able to go through both narrow corridors and bigger spaces.
We brought a large number of robots, but the number that we're using, about ten, is enough to be able to explore a large part of the environment. Deploying more would be really hard for the pit crew of only five people, and there isn't enough space for more robots.
Once your team of robots is on the course, what do you do during the run?
It differs run by run, but the robots are mostly autonomous, so they decide where to go and I'm looking for artifact detections uploaded by the robots and approving or disapproving them. If I see that a robot is stuck somewhere, I can help it decide where to go. If it looks like a robot may lose communications, I can move some robots to make a chain from other robots to extend our network. I can do high level direction for exploration, but I don't have to—the robots are updating their maps and making decisions to best explore the whole environment.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
Terrain assessment is subtle. At a higher level, the operator has to decide where to send a walking robot and where to send a rolling robot. It's tiny details on the ground and a feeling about the environment that help the operator make those decisions, and that is not done autonomously.
Bonus operator question: How much bandwidth do you have?
I'm on the edge. I have a map, I have some subsampled images, I have detections, I have topological maps, but it would be better to have everything in 4K and dense point clouds.
Team CSIRO Data61
Robot Operator: Brendan Tidd
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
We've got three robot types that are here today—Spot legged robots, big tracked robots called Titans, and drones. The legged ones have been pretty amazing, especially for urban environments with narrow stairs and doorways. The tracked robots are really good in the tricky terrain of cave environments. And the drones can obviously add situational awareness from higher altitudes and detect those high artifacts.
Once your team of robots is on the course, what do you do during the run?
We use the term “operator” but I'm actually supervising. Our robots are all autonomous, they all know how to divide and conquer, they're all going to optimize exploring for depth, trying to split up where they can and not get in each other's way. In particular the Spots and the Titans have a special relationship where the Titan will give way to the Spot if they ever cross paths, for obvious reasons. So my role during the run is to coordinate node placement, that's something that we haven't automated—we've got a lot of information that comes back that I use to decide on good places to put nodes, and probably the next step is to automate that process. I also decide where to launch the drone. The launch itself is one click, but it still requires me to know where a good place is. If everything goes right, in general the robots will just do their thing.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
The node drop thing is vital, but I think it's quite a complex thing to automate because there are so many different aspects to consider. The node mesh is very dynamic, it's affected by all the robots that are around it and obviously by the environment. Similarly, the drone launch, but that requires the robots to know when it's worth it to launch a drone. So those two things, but also pushing on the nav stack to make sure it can handle the crazy stuff. And I guess the other side is the detection. It's not a trivial thing knowing what's a false positive or not, that's a hard thing to automate.
Bonus operator question: How stressed are you, knowing that it's just you controlling all the robots during the run?
Coping with that is a thing! I've got music playing when I'm operating, I actually play in a metal band and we get on stage sometimes and the feeling is very similar, so it's really helpful to have the music there. But also the team, you know? I'm confident in our system, and if I wasn't, that would really affect my mental state. But we test a lot, and all that preparedness helps with the stress.
Team CoSTAR
Robot Operator: Kyohei Otsu
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
We have wheeled vehicles, legged vehicles, and aerial drones, so we can cover many terrains, handle stairs, and fly over obstacles. We picked three completely different mobility systems to be able to use many different strategies. The robots can autonomously adjust their roles by themselves; some explore, some help with communication for other robots. The number of robots we use depends on the environment—yesterday we deployed seven robots onto the course because we assumed that the environment would be huge, but it's a bit smaller than we expected, so we'll adapt our number to fit that environment.
Once your team of robots is on the course, what do you do during the run?
Our robots are autonomous, and I think we have very good autonomy software. During setup the robots need some operator attention; I have to make sure that everything is working including sensors, mobility systems, and all the algorithms. But after that, once I send the robot into the course, I totally forget about it and focus on another robot. Sometimes I intervene to better distribute our team of robots—that's something that a human is good at, using prior knowledge to understand the environment. And I look at artifact reports, that's most of my job.
In the first phases of the Subterranean Challenge, we were getting low level information from the robots and sometimes using low level commands. But as the project proceeded and our technology matured, we found that it was too difficult for the operator, so we added functionality for the robot to make all of those low level decisions, and the operator just deals with high level decisions.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible? [answered by CoSTAR co-Team Lead Joel Burdick]
Two things: the system reports that it thinks it found an artifact, and the operator has to confirm yes or no. He has to also confirm that the location seems right. The other thing is that our multi-robot coordination isn't as sophisticated as it could be, so the operator may have to retask robots to different areas. If we had another year, we'd be much closer to automating those things.
Bonus Operator Question: Would you prefer if your system was completely autonomous and your job was not necessary?
Yeah, I'd prefer that!
Team Coordinated Robotics
Robot Operator: Kevin Knoedler
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
The ideal mix in my mind is a fleet of small drones with lidar, but they are very hard to test, and very hard to get right. Ground vehicles aren't necessarily easier to get right, but they're easier to test, and if you can test something, you're a lot more likely to succeed. So that's really the big difference with the team of robots we have here.
Once your team of robots is on the course, what do you do during the run?
Some of the robots have an automatic search function where if they find something they report back, and what I'd like to be doing is just monitoring. But, the search function only works in larger areas. So right now the goal is for me to drive them through the narrow areas, get them into the wider areas, and let them go, but getting them to that search area is something that I mostly need to do manually one at a time.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
Ideally, the robots would be able to get through those narrow areas on their own. It's actually a simpler problem to solve than larger areas, it's just not where we focused our effort.
Bonus operator question: How many interfaces do you use to control your robots?
We have one computer with two monitors, one controller, and that's it.
Team CERBERUS
Robot Operator: Marco Tranzatto
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
We have a mix of legged and flying robots, supported by a rover carrying a wireless antenna. The idea is to take legged robots into harsh environments where wheeled robots may not perform as well, combined with aerial scouts that can explore the environment fast to provide initial situational awareness to the operator, so that I can decide where to deploy the legged machines. So the goal is to combine the legged and flying robots in a unified mission to give as much information as possible to the human operator. We also had some bigger robots, but we found them to be a bit too big for the environment that DARPA has prepared for us, so we're not going to deploy them.
Once your team of robots is on the course, what do you do during the run?
We use two main modes: one is fully autonomous on the robots, and the other one is supervised autonomy where I have an overview of what the robots are doing and can override specific actions. Based on the high level information that I can see, I can decide to control a single robot to give it a manual waypoint to reposition it to a different frontier inside the environment. I can go from high level control down to giving these single commands, but the commands are still relatively high level, like “go here and explore.” Each robot has artifact scoring capabilities, and all these artifact detections are sent to the base station once the robot is in communication range, and the human operator has to say, “okay this looks like a possible artifact so I accept it” and then can submit the position either as reported by the robot or the optimized position reported by the mapping server.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
Each robot is autonomous by itself. But the cooperation between robots is still like… The operator has to set bounding boxes to tell each robot where to explore. The operator has a global overview, and then inside these boxes, the robots are autonomous. So I think at the moment in our pipeline, we still need a centralized human supervisor to say which robot explores in which direction. We are close to automating this, but we're not there yet.
Bonus operator question: What is one thing you would add to make your life as an operator easier?
I would like to have a more centralized way to give commands to the robots. At the moment I need to select each robot and give it a specific command. It would be very helpful to have a centralized map where I can tell a robot to, say, explore in a given area while considering data from a different robot. This was in our plan, but we didn't manage to deploy it yet.
#439766 Understanding human-robot interaction ...
Robotic body-weight support (BWS) devices can play a key role in helping people with neurological disorders to improve their walking. The team that developed the advanced body-weight support device RYSEN in 2018 has since gained more fundamental insight into BWS, but also concludes that improvement in this field is necessary. They find that recommendations for optimal therapy settings have to be customized to each device, and that developers should be more aware of the interaction between the patient and the device. The researchers published the results of their evaluation in Science Robotics on Wednesday, September 22.
#439753 DARPA SubT Finals: Meet the Teams
This is it! This week, we're at the DARPA Subterranean Challenge Finals in Louisville, KY, where more than two dozen Systems Track and Virtual Track teams will compete for millions of dollars in prize money, plus the right to say “we won a DARPA challenge,” which is of course priceless.
We've been following SubT for years, from Tunnel Circuit to Urban Circuit to Cave (non-) Circuit. For a recent recap, have a look at this post-cave pre-final article that includes an interview with SubT Program Manager Tim Chung, but if you don't have time for that, the TLDR is that this week we're looking at both a Virtual Track as well as a Systems Track with physical robots on a real course. The Systems Track teams spent Monday checking in at the Louisville Mega Cavern competition site, and we asked each team to tell us about how they've been preparing, what they think will be most challenging, and what makes them unique.
Team CERBERUS
CERBERUS
Country
USA, Switzerland, United Kingdom, Norway
Members
University of Nevada, Reno
ETH Zurich, Switzerland
University of California, Berkeley
Sierra Nevada Corporation
Flyability, Switzerland
Oxford Robotics Institute, United Kingdom
Norwegian University of Science and Technology (NTNU), Norway
Robots
TBA
Follow Team
Website
@CerberusSubt
Q&A: Team Lead Kostas Alexis
How have you been preparing for the SubT Final?
First of all, this year's preparation was strongly influenced by Covid-19, as our team spans multiple countries, namely the US, Switzerland, Norway, and the UK. Despite the challenges, we leveled up our weekly shake-out events and ran a two-month team-wide integration and testing activity in Switzerland during July and August, with multiple tests in diverse underground settings including multiple mines. Note that we are bringing a brand new set of four ANYmal C robots and a new generation of collision-tolerant flying robots, so during this period we also built new hardware.
What do you think the biggest challenge of the SubT Final will be?
We are excited to see how the vast spaces available in the Mega Cavern will be combined with the very narrow cross-sections and vertical structures that DARPA promises. We think that terrain with steep slopes and other obstacles, complex 3D geometries, as well as dynamic obstacles will be the core challenges.
What is one way in which your team is unique, and why will that be an advantage during the competition?
Our team coined early on the idea of combining legged and flying robots. We have remained focused on this core vision of ours and bring fully self-developed hardware for both legged and flying systems. This is both our advantage and – in a way – our limitation, as we spend a lot of time on its development. We are excited about the potential we see developing, and we are optimistic that this will be demonstrated at the Final Event!
Team Coordinated Robotics
Coordinated Robotics
Country
USA
Members
California State University Channel Islands
Oke Onwuka
Sequoia Middle School
Robots
TBA
Q&A: Team Lead Kevin Knoedler
How have you been preparing for the SubT Final?
Coordinated Robotics has been preparing for the SubT Final with lots of testing on our team of robots. We have been running them inside, outside, day, night, and in all of the circumstances that we can come up with. In Kentucky we have been busy updating all of the robots to the same standard and repairing bits of shipping damage before the SubT Final.
What do you think the biggest challenge of the SubT Final will be?
The biggest challenge for us will be pulling all of the robots together to work as a team and making sure that everything is communicating together. We did not have lab access until late July, so we had robots at individuals' homes, but were generally only testing one robot at a time.
What is one way in which your team is unique, and why will that be an advantage during the competition?
Coordinated Robotics is unique in a couple of different ways. We are one of only two unfunded teams so we take a lower budget approach to solving lots of the issues and that helps us to have some creative solutions. We are also unique in that we will be bringing a lot of robots (23) so that problems with individual robots can be tolerated as the team of robots continues to search.
Team CoSTAR
CoSTAR
Country
USA, South Korea, Sweden
Members
Jet Propulsion Laboratory
California Institute of Technology
Massachusetts Institute of Technology
KAIST, South Korea
Lulea University of Technology, Sweden
Robots
TBA
Follow Team
Website
Q&A: Caltech Team Lead Joel Burdick
How have you been preparing for the SubT Final?
Since May, the team has made four trips to a limestone cave near Lexington, Kentucky (they just finished a week-long “game” there yesterday). Since February, parts or all of the team have been testing two to three days a week in a section of the abandoned subway system in downtown Los Angeles.
What do you think the biggest challenge of the SubT Final will be?
That will be a tough one to answer in advance. The expected CoSTAR-specific challenges are of course the complexity of the test site that DARPA has prepared, fatigue of the team, and the usual last-minute hardware failures: we had to have an entire new set of batteries for all of our communication nodes FedExed to us yesterday. More generally, we expect the other teams to be well prepared. Speaking only for myself, I think there are 4-5 teams that could easily win this competition.
What is one way in which your team is unique, and why will that be an advantage during the competition?
Previously, our team was unique in our Boston Dynamics legged mobility. We've heard that other teams may be using Spot quadrupeds as well, so that may no longer be a point of uniqueness. We shall see! More importantly, we believe our team is unique in the breadth of its participants (university team members from the U.S., Europe, and Asia). Kind of like the old British empire: the sun never sets on the geographic expanse of Team CoSTAR.
Team CSIRO Data61
CSIRO Data61
Country
Australia, USA
Members
Commonwealth Scientific and Industrial Research Organisation, Australia
Emesent, Australia
Georgia Institute of Technology
Robots
TBA
Follow Team
Website
Q&A: SubT Principal Investigator Navinda Kottege
How have you been preparing for the SubT Final?
Test, test, test. We've been testing as often as we can, simulating the competition conditions as best we can. We're very fortunate to have an extensive site here at our CSIRO lab in Brisbane that has enabled us to construct quite varied tests for our full fleet of robots. We have also done a number of offsite tests as well.
After going through the initial phases, we have converged on a good combination of platforms for our fleet. Our workhorse platform from the Tunnel circuit has been the BIA5 ATR tracked robot. We have recently added Boston Dynamics Spot quadrupeds to our fleet, and we are quite happy with their performance and their level of integration with our perception and navigation stack. We also have custom-designed Subterra Navi drones from Emesent. Our fleet consists of two of each of these three platform types. We have also designed and built a new 'smart node' for communication with the Rajant nodes. These are dropped from the tracked robots and automatically deploy after a delay by extending ground plates and antennae. As described above, we have been doing extensive integration testing with the full system to shake out bugs and make improvements.
What do you think the biggest challenge of the SubT Final will be?
The biggest challenge is the unknown. It is always a learning process to discover how the robots respond to new classes of obstacle; responding to this on the fly in a new environment is extremely challenging. Given the format of two preliminary runs and one prize run, there is little to no margin for error compared to previous circuit events where there were multiple runs that contributed to the final score. Any significant damage to robots during the preliminary runs would be difficult to recover from to perform in the final run.
What is one way in which your team is unique, and why will that be an advantage during the competition?
Our fleet uses a common sensing, mapping, and navigation system across all robots, built around our Wildcat SLAM technology. This is what enables coordination between robots, and provides the accuracy required to locate detected objects. It has allowed us to easily integrate different robot platforms into our fleet. We believe this 'homogeneous sensing on heterogeneous platforms' paradigm gives us a unique advantage: it reduces the overall complexity of the development effort for the fleet and allows us to scale the fleet as needed. Having excellent partners in Emesent and Georgia Tech, and having their full commitment and support, is also a strong advantage for us.
Team CTU-CRAS-NORLAB
Team CTU-CRAS-NORLAB
CTU-CRAS-NORLAB
Country
Czech Republic, Canada
Members
Czech Technical University, Czech Republic
Université Laval, Canada
Robots
TBA
Follow Team
Website
Twitter
Q&A: Team Lead Tomas Svoboda
How have you been preparing for the SubT Final?
We spent most of the time preparing new platforms as we made a significant technology update. We tested the locomotion and autonomy of the new platforms in Bull Rock Cave, one of the largest caves in Czechia. We also deployed the robots in an old underground fortress to examine the system in an urban-like underground environment. The very last weeks were, however, dedicated to integration tests and system tuning.
What do you think the biggest challenge of the SubT Final will be?
Hard to say, but regarding the expected environment, the vertical shafts might be the most challenging, since they are not easy to access for testing and tuning the system experimentally. They would also add challenges to communication.
What is one way in which your team is unique, and why will that be an advantage during the competition?
Not sure about the other teams, but we plan to deploy all kinds of ground vehicles, tracked, wheeled, and legged platforms, accompanied by several drones. We hope the diversity of platform types will help us adapt to the possible diversity of terrains and underground challenges. We also hope our tuned communications will give us access to the robots at longer range than last time. Optimistically, we might keep all robots connected to the communication infrastructure built during the mission; although the bandwidth is very limited, it should be sufficient for artifact reporting and high-level switching of the robots' goals and autonomous behavior.
Team Explorer
Team Explorer
Explorer
Country
USA
Members
Carnegie Mellon University
Oregon State University
Robots
TBA
Follow Team
Website
Facebook
Q&A: Team Co-Lead Sebastian Scherer
How have you been preparing for the SubT Final?
Since we expect DARPA to have some surprises on the course for us, we have been practicing at a wide range of different sites around Pittsburgh, including an abandoned hospital complex, a cave, and limestone and coal mines. As the finals approached, we were practicing at these locations nearly daily, with debrief and debugging sessions afterward. This has helped us find the advantages of each of the platforms, ways of controlling them, and the different sensor modalities.
What do you think the biggest challenge of the SubT Final will be?
For our team, the biggest challenges are steep slopes for the ground robots; thin, loose obstacles that can get sucked into the propellers of the drones; and narrow passages.
What is one way in which your team is unique, and why will that be an advantage during the competition?
We have developed a heterogeneous team for SubT exploration. This gives us an advantage since there is not a single platform that is optimal for all SubT environments. Tunnels are optimal for roving robots, urban environments for walking robots, and caves for flying robots. Our ground robots and drones are custom-designed for navigation in rough terrain and tight spaces. This gives us an advantage since we can get to places not reachable by off-the-shelf platforms.
Team MARBLE
Team MARBLE
MARBLE
Country
USA
Members
University of Colorado, Boulder
University of Colorado, Denver
Scientific Systems Company, Inc.
University of California, Santa Cruz
Robots
TBA
Follow Team
Q&A: Project Engineer Gene Rush
How have you been preparing for the SubT Final?
Our team has worked tirelessly over the past several months as we prepare for the SubT Final. We have invested most of our time and energy in real-world field deployments, which help us in two major ways. First, it allows us to repeatedly test the performance of our full autonomy stack, and second, it provides us the opportunity to emphasize Pit Crew and Human Supervisor training. Our PI, Sean Humbert, has always said “practice, practice, practice.” In the month leading up to the event, we stayed true to this advice by holding 10 deployments across a variety of environments, including parking garages, campus buildings at the University of Colorado Boulder, and the Edgar Experimental Mine.
What do you think the biggest challenge of the SubT Final will be?
I expect the most difficult challenge will be autonomous high-level decision making. Of course, mobility challenges, including treacherous terrain, stairs, and drop-offs, will certainly test the physical capabilities of our mobile robots. However, the scale of the environment is so great, and time so limited, that rapidly identifying the areas that likely have human survivors is vitally important and a very difficult open challenge. I expect most teams, ours included, will utilize the intuition of the Human Supervisor to make these decisions.
What is one way in which your team is unique, and why will that be an advantage during the competition?
Our team has pushed on advancing hands-off autonomy, so our robotic fleet can operate independently in the worst-case scenario: a communication-denied environment. Lack of wireless communication is relatively common in subterranean search and rescue missions, and therefore we expect DARPA will be stressing this part of the challenge in the SubT Final. Our autonomy solution is designed in such a way that it can operate autonomously both with and without communication back to the Human Supervisor. When we are in communication with our robotic teammates, the Human Supervisor has the ability to provide several high-level commands to assist the robots in making better decisions.
Team Robotika
Team Robotika
Robotika
Country
Czech Republic, USA, Switzerland
Members
Robotika International, Czech Republic and United States
Robotika.cz, Czech Republic
Czech University of Life Sciences, Czech Republic
Centre for Field Robotics, Czech Republic
Cogito Team, Switzerland
Robots
Two wheeled robots
Follow Team
Website
Twitter
Q&A: Team Lead Martin Dlouhy
How have you been preparing for the SubT Final?
Our team participates in both Systems and Virtual tracks. We were using the virtual environment to develop and test our ideas and techniques and once they were sufficiently validated in the virtual world, we would transfer these results to the Systems track as well. Then, to validate this transfer, we visited a few underground spaces (mostly caves) with our physical robots to see how they perform in the real world.
What do you think the biggest challenge of the SubT Final will be?
Besides the usual challenges inherent to the underground spaces (mud, moisture, fog, condensation), we also noticed the unusual configuration of the starting point which is a sharp downhill slope. Our solution is designed to be careful about going on too steep slopes so our concern is that as things stand, the robots may hesitate to even get started. We are making some adjustments in the remaining time to account for this. Also, unlike the environment in all the previous rounds, the Mega Cavern features some really large open spaces. Our solution is designed to expect detection of obstacles somewhere in the vicinity of the robot at any given point so the concern is that a large open space may confuse its navigational system. We are looking into handling such a situation better as well.
What is one way in which your team is unique, and why will that be an advantage during the competition?
It appears that we are unique in bringing only two robots to the Finals. We brought more to the earlier rounds to test different platforms, and ultimately picked the two we are fielding this time as best suited for the expected environment. A potential benefit for us is that supervising only two robots could be easier and perhaps more efficient than managing larger numbers.
#439743 Video Friday: Preparing for the SubT ...
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – [Online Event]
IROS 2021 – September 27 – October 1, 2021 – [Online Event]
Robo Boston – October 1-2, 2021 – Boston, MA, USA
WearRAcon Europe 2021 – October 5-7, 2021 – [Online Event]
ROSCon 2021 – October 20-21, 2021 – [Online Event]
Silicon Valley Robot Block Party – October 23, 2021 – Oakland, CA, USA
Let us know if you have suggestions for next week, and enjoy today's videos.
Team Explorer, the SubT Challenge entry from CMU and Oregon State University, is in the last stage of preparation for the competition this month inside the Mega Caverns cave complex in Louisville, Kentucky.
[ Explorer ]
Team CERBERUS is looking good for the SubT Final next week, too.
Autonomous subterranean exploration with the ANYmal C Robot inside the Hagerbach underground mine
[ ARL ]
I'm still as skeptical as I ever was about a big and almost certainly expensive two-armed robot that can do whatever you can program it to do (have fun with that) and seems to rely on an app store for functionality.
[ Unlimited Robotics ]
Project Mineral is using breakthroughs in artificial intelligence, sensors, and robotics to find ways to grow more food, more sustainably.
[ Mineral ]
Not having a torso or anything presumably makes this easier.
Next up, Digit limbo!
[ Hybrid Robotics ]
Paric completed layout of a 500-unit apartment complex utilizing the Dusty FieldPrinter solution. Autonomous layout on the plywood deck saved weeks' worth of schedule, allowing the panelized walls to be placed sooner.
[ Dusty Robotics ]
Spot performs inspection in the Kidd Creek Mine, enabling operators to keep their distance from hazards.
[ Boston Dynamics ]
Digit's engineered to be a multipurpose machine. Meaning, it needs to be able to perform a collection of tasks in practically any environment. We do this by first ensuring the robot's physically capable. Then we help the robot perceive its surroundings, understand its surroundings, then reason a best course of action to navigate its environment and accomplish its task. This is where software comes into play. This is early AI in action.
[ Agility Robotics ]
This work proposes a compact robotic limb, AugLimb, that can augment our body functions and support daily activities. The proposed device can be mounted on the user's upper arm, and it folds into a compact state without obstructing the wearer.
[ AugLimb ]
Ahold Delhaize and AIRLab need the help of academics who have knowledge of human-robot interactions, mobility, manipulation, programming, and sensors to accelerate the introduction of robotics in retail. In the AIRLab Stacking challenge, teams will work on algorithms that focus on smart retail applications, for example, automated product stacking.
[ PAL Robotics ]
Leica, not at all well known for making robots, is getting into the robotic reality capture business with a payload for Spot and a new drone.
Introducing BLK2FLY: Autonomous Flying Laser Scanner
[ Leica BLK ]
As much as I like Soft Robotics, I'm maybe not quite as optimistic as they are about the potential for robots to take over quite this much from humans in the near term.
[ Soft Robotics ]
Over the course of this video, the robot gets longer and longer and longer.
[ Transcend Robotics ]
This is a good challenge: attach a spool of electrical tape to your drone, which can unpredictably unspool itself and make sure it doesn't totally screw you up.
[ UZH ]
Two interesting short seminars from NCCR Robotics, including one on autonomous racing drones and “neophobic” mobile robots.
Dario Mantegazza: Neophobic Mobile Robots Avoid Potential Hazards
[ NCCR ]
This panel on Synergies between Automation and Robotics comes from ICRA 2021, and once you see the participant list, I bet you'll agree that it's worth a watch.
[ ICRA 2021 ]
CMU RI Seminars are back! This week we hear from Andrew E. Johnson, a Principal Robotics Systems Engineer in the Guidance and Control Section of the NASA Jet Propulsion Laboratory, on “The Search for Ancient Life on Mars Began with a Safe Landing.”
Prior Mars rover missions have all landed in flat and smooth regions, but for the Mars 2020 mission, which is seeking signs of ancient life, this was no longer acceptable. Terrain relief that is ideal for the science obviously poses significant risks for landing, so a new landing capability called Terrain Relative Navigation (TRN) was added to the mission. This talk will describe the scientific goals of the mission, the Terrain Relative Navigation system design, and the successful results from landing on February 18th, 2021.
[ CMU RI Seminar ]