Tag Archives: software
#437592 Coordinated Robotics Wins DARPA SubT ...
DARPA held the Virtual Cave Circuit event of the Subterranean Challenge on Tuesday in the form of a several hour-long livestream. We got to watch (along with all of the competing teams) as virtual robots explored virtual caves fully autonomously, dodging rockfalls, spotting artifacts, scoring points, and sometimes running into stuff and falling over.
Expert commentary was provided by DARPA, and we were able to watch multiple teams running at once, skipping from highlight to highlight. It was really very well done (you can watch an archive of the entire stream here), but they made us wait until the very end to learn who won: First place went to Coordinated Robotics, with BARCS taking second, and third place going to newcomer Team Dynamo.
Huge congratulations to Coordinated Robotics! It’s worth pointing out that the top three teams were separated by an incredibly small handful of points, and on a slightly different day, with slightly different artifact positions, any of them could have come out on top. This doesn’t diminish Coordinated Robotics’ victory in the least—it means that the competition was fierce, and that the problem of autonomous cave exploration with robots has been solved (virtually, at least) in several different but effective ways.
We know Coordinated Robotics pretty well at this point, but here’s an introduction video:
You heard that right—Coordinated Robotics is just Kevin Knoedler, all by himself. This would be astonishing, if we weren’t already familiar with Kevin’s abilities: He won NASA’s virtual Space Robotics Challenge by himself in 2017, and Coordinated Robotics placed first in the DARPA SubT Virtual Tunnel Circuit and second in the Virtual Urban Circuit. We asked Kevin how he managed to do so spectacularly well (again), and here’s what he told us:
IEEE Spectrum: Can you describe what it was like to watch your team of robots on the live stream, and to see them score the most points?
Kevin Knoedler: It was exciting and stressful watching the live stream. It was exciting as the top few scores were quite close for the cave circuit. It was stressful because I started out behind and worked my way up, but did not do well on the final world. Luckily, not doing well on the first and last worlds was offset by better scores on many of the runs in between. DARPA did a very nice job with their live stream of the cave circuit results.
How did you decide on the makeup of your team, and on what sensors to use?
To decide on the makeup of the team I experimented with quite a few different vehicles. I had a lot of trouble with the X2 and other small ground vehicles flipping over. Based on that I looked at the larger ground vehicles that also had a sensor capable of identifying drop-offs. The vehicles that met those criteria for me were the Marble HD2, Marble Husky, Ozbot ATR, and the Absolem. Of those ground vehicles I went with the Marble HD2. It had a downward looking depth camera that I could use to detect drop-offs and was much more stable on the varied terrain than the X2. I had used the X3 aerial vehicle before and so that was my first choice for an aerial platform.
What were some things that you learned in Tunnel and Urban that you were able to incorporate into your strategy for Cave?
In the Tunnel circuit I had learned a strategy to use ground vehicles and in the Urban circuit I had learned a strategy to use aerial vehicles. At a high level that was the biggest thing I learned from the previous circuits that I was able to apply to the Cave circuit. At a lower level I was able to apply many of the development and testing strategies from the previous circuits to the Cave circuit.
What aspect of the cave environment was most challenging for your robots?
I would say it wasn't just one aspect of the cave environment that was challenging for the robots. There were quite a few challenging aspects of the cave environment. For the ground vehicles there were frequently paths that looked good as the robot started on the path, but turned into drop-offs or difficult boulder crawls. While it was fun to see the robot plan well enough to slowly execute paths over the boulders, I was wishing that the robot was smart enough to try a different path rather than wasting so much time crawling over the large boulders. For the aerial vehicles the combination of tight paths along with large vertical spaces was the biggest challenge in the environment. The large open vertical areas were particularly challenging for my aerial robots. They could easily lose track of their position without enough nearby features to track and it was challenging to find the correct path in and out of such large vertical areas.
How will you be preparing for the SubT Final?
To prepare for the SubT Final the vehicles will be getting a lot smarter. The ground vehicles will be better at navigation and communicating with one another. The aerial vehicles will be better able to handle large vertical areas both from a positioning and a planning point of view. Finally, all of the vehicles will do a better job coordinating what areas have been explored and what areas have good leads for further exploration.
Image: DARPA
The final score for the DARPA SubT Cave Circuit virtual competition.
We also had a chance to ask SubT program manager Tim Chung a few questions at yesterday’s post-event press conference, about the course itself and what he thinks teams should have learned from the competition:
IEEE Spectrum: Having looked through some real caves, can you give some examples of some of the most significant differences between this simulation and real caves? And with the enormous variety of caves out there, how generalizable are the solutions that teams came up with?
Tim Chung: Many of the caves that I’ve had to crawl through and gotten bumps and scrapes from had a couple of different features that I’ll highlight. The first is the variation in moisture—a lot of these caves were naturally formed by streams, so many of the caves we went to had significant mud, flowing water, and such. And so one of the things we're not capturing in the SubT simulator is anything that would submerge the robots or otherwise short out any of their systems. So from that perspective, that's one difference that's certainly notable.
And then the other difference, I think, is the granularity of the terrain: whether it's rubble, sand, or just raw dirt, the friction coefficients are all across the board, and I think that's one of the things that any terrestrial simulator will both struggle with and potentially benefit from—that is, terramechanics simulation abilities. Given the emphasis on mobility in the SubT simulation, we’re capturing just a sliver of the complexity of terramechanics, but I think that's probably another takeaway that you'll certainly see—where there’s that distinction between physical and virtual technologies.
To answer your second question about generalizability—that’s the multi-million-dollar question! It’s definitely at the crux of why we have eight diverse worlds that vary in size, verticality, dimensions, constrained passageways, etc. But this is eight out of countless variations, and the goal of course is to be able to investigate what those key dependencies are. What I'll say is that out of the seventy-three different virtual cave tiles, which are the building blocks that make up these virtual worlds, quite a number were not only inspired by real-world caves, but were specifically designed so that we can essentially use these tiles as unit tests going forward. So, if I want to simulate vertical inclines, here are the tiles that serve as the vertical unit tests for robots, and that’s how we’re trying to think through how to tease out that generalizability factor.
What are some observations from this event that you think systems track teams should pay attention to as they prepare for the final event?
One of the key things about the virtual competition is that you submit your software, and that's it. So you have to design everything from state management to failure mode triage, really thinking about what could go wrong and then building out your autonomous capabilities either to react to some of those conditions, or to anticipate them. And to be honest, I think that the humans in the loop that we have in the systems competition really are key enablers of their capability, but also could someday (if not already) become a crutch that we might not be able to do without.
Thinking through some of the failure modes in a fully autonomous, software-only setting is going to be incredibly valuable for the systems competitors, so that, for example, the human supervisor doesn't have to worry about those failure modes as much, or can respond in a more supervisory way rather than trying to joystick the robot around. I think that's going to be one of the greatest impacts: thinking through what it means to send these robots off to autonomously get you the information you need and complete the mission.
This isn’t to say that the humans aren't going to be useful and continue to play a role of course, but I think this shifting of the role of the human supervisor from being a state manager to being more of a tactical commander will dramatically highlight the impact of the virtual side on the systems side.
What, if anything, should we take away from one person teams being able to do so consistently well in the virtual circuit?
It’s a really interesting question. I think part of it has to do with systems integration versus software integration. There's something to be said for the richness of the technologies that can be developed, and how many people it requires to be able to develop some of those technologies. With the systems competitors, having one person try to build, manage, deploy, service, and operate all of those robots is still functionally quite challenging, whereas in the virtual competition, it really is a software deployment more than anything else. And so I think the commonality of single person teams may just be a virtue of the virtual competition not having some of those person-intensive requirements.
In terms of their strong performance, I give credit to all of these really talented folks who are taking it upon themselves to jump into the competitor pool and see how well they do, and I think that just goes to show you that whether you're one person or ten people or a hundred people on a team, a good idea translated and executed well really goes a long way.
Looking ahead, teams have a year to prepare for the final event, which is still scheduled to be held sometime in fall 2021. And even though there was no cave event for systems track teams, the fact that the final event will be a combination of tunnel, urban, and cave circuits means that systems track teams have been figuring out how to get their robots to work in caves anyway, and we’ll be bringing you some of their stories over the next few weeks.
[ DARPA SubT ]
#437571 Video Friday: Snugglebot Is What We All ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
IROS 2020 – October 25-25, 2020 – [Online]
Robotica 2020 – November 10-14, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Bay Area Robotics Symposium – November 20, 2020 – [Online]
Let us know if you have suggestions for next week, and enjoy today's videos.
Snugglebot is what we all need right now.
[ Snugglebot ]
In his video message on his prayer intention for November, Pope Francis emphasizes that progress in robotics and artificial intelligence (AI) be oriented “towards respecting the dignity of the person and of Creation”.
[ Vatican News ]
KaPOW!
Apparently it's supposed to do that—the disruptor flies off backwards to reduce recoil on the robot, and has its own parachute to keep it from going too far.
[ Ghost Robotics ]
Animals have many muscles, receptors, and neurons that compose feedback loops. In this study, we designed artificial muscles, receptors, and neurons without any microprocessors or software-based controllers. We imitate the reflexive rule observed in walking experiments on cats; as a result, a running motion (a leg trajectory and a gait pattern) emerged in the Pneumatic Brainless Robot II through the interaction between the body, the ground, and the artificial reflexes. We envision that the simple reflex circuit we discovered will be a candidate for a minimal model for describing the principles of animal locomotion.
Find the paper, “Brainless Running: A Quasi-quadruped Robot with Decentralized Spinal Reflexes by Solely Mechanical Devices,” on IROS On-Demand.
[ IROS ]
Thanks Yoichi!
I have no idea what these guys are saying, but they're talking about robots that serve chocolate!
The chocolate "world of experience" at managing director Josef Zotter's Zotter Schokoladen Manufaktur draws more than 270,000 visitors annually. Since March 2019, this world of chocolate in Bergl near Riegersburg in Austria has been enriched by a new attraction: the world's first chocolate and praline robot from KUKA delights young and old alike and serves up chocolate and pralines to guests according to their personal taste.
[ Zotter ]
This paper proposes a systematic solution that uses an unmanned aerial vehicle (UAV) to aggressively and safely track an agile target. The solution properly handles the challenging situations where the intent of the target and the dense environments are unknown to the UAV. The proposed solution is integrated into an onboard quadrotor system. We fully test the system in challenging real-world tracking missions. Moreover, benchmark comparisons validate that the proposed method surpasses the cutting-edge methods on time efficiency and tracking effectiveness.
[ FAST Lab ]
Southwest Research Institute developed a cable management system for collaborative robotics, or “cobots.” Dress packs used on cobots can create problems when cables are too tight (e-stops) or loose (tangling). SwRI developed ADDRESS, or the Adaptive DRESing System, to provide smarter cobot dress packs that address e-stops and tangling.
[ SWRI ]
A quick demonstration of the acoustic contact sensor in the RBO Hand 2. An embedded microphone records the sound inside of the pneumatic finger. Depending on which part of the finger makes contact, the sound is a little bit different. We create a sensor that recognizes these small changes and predicts the contact location from the sound. The visualization on the left shows the recorded sound (top) and which of the nine contact classes the sensor is currently predicting (bottom).
[ TU Berlin ]
The MAVLab won the prize for the “most innovative design” in the IMAV 2018 indoor competition, in which drones had to fly through windows, gates, and follow a predetermined flight path. The prize was awarded for the demonstration of a fully autonomous version of the “DelFly Nimble”, a tailless flapping wing drone.
In order to fly by itself, the DelFly Nimble was equipped with a single, small camera and a small processor allowing onboard vision processing and control. The jury of international experts in the field praised the agility and autonomous flight capabilities of the DelFly Nimble.
[ MAVLab ]
A reactive walking controller for the Open Dynamic Robot Initiative's skinny quadruped.
[ ODRI ]
Mobile service robots are already able to recognize people and objects while navigating autonomously through their operating environments. But what is the ideal position of the robot to interact with a user? To solve this problem, Fraunhofer IPA developed an approach that connects navigation, 3D environment modeling, and person detection to find the optimal goal pose for HRI.
[ Fraunhofer ]
Yaskawa has been in robotics for a very, very long time.
[ Yaskawa ]
Black in Robotics IROS launch event, featuring Carlotta Berry.
[ Black in Robotics ]
What is AI? I have no idea! But these folks have some opinions.
[ MIT ]
Aerial-based Observations of Volcanic Emissions (ABOVE) is an international collaborative project that is changing the way we sample volcanic gas emissions. Harnessing recent advances in drone technology, unoccupied aerial systems (UAS) in the ABOVE fleet are able to acquire aerial measurements of volcanic gases directly from within previously inaccessible volcanic plumes. In May 2019, a team of 30 researchers undertook an ambitious field deployment to two volcanoes – Tavurvur (Rabaul) and Manam in Papua New Guinea – both amongst the most prodigious emitters of sulphur dioxide on Earth, and yet lacking any measurements of how much carbon they emit to the atmosphere.
[ ABOVE ]
A talk from IHMC's Robert Griffin for ICCAS 2020, including a few updates on their Nadia humanoid.
[ IHMC ]
#437564 How We Won the DARPA SubT Challenge: ...
This is a guest post. The views expressed here are those of the authors and do not necessarily represent positions of IEEE or its organizational units.
“Do you smell smoke?” It was three days before the qualification deadline for the Virtual Tunnel Circuit of the DARPA Subterranean Challenge Virtual Track, and our team was barrelling through last-minute updates to our robot controllers in a small conference room at the Michigan Tech Research Institute (MTRI) offices in Ann Arbor, Mich. That’s when we noticed the smell. We’d assumed that one of the benefits of entering a virtual disaster competition was that we wouldn’t be exposed to any actual disasters, but equipment in the basement of the building MTRI shares had started to smoke. We evacuated. The fire department showed up. And as soon as we could, the team went back into the building, hunkered down, and tried to make up for the unexpected loss of several critical hours.
Team BARCS joins the SubT Virtual Track
The smoke incident happened more than a year after we first learned of the DARPA Subterranean Challenge. DARPA announced SubT early in 2018, and at that time, we were interested in building internal collaborations on multi-agent autonomy problems, and SubT seemed like the perfect opportunity. Though a few of us had backgrounds in robotics, the majority of our team was new to the field. We knew that submitting a proposal as a largely non-traditional robotics team from an organization not known for research in robotics was a risk. However, the Virtual Track gave us the opportunity to focus on autonomy and multi-agent teaming strategies, areas requiring skill in asynchronous computing and sensor data processing that are strengths of our Institute. The prevalence of open source code, small inexpensive platforms, and customizable sensors has provided the opportunity for experts in fields other than robotics to apply novel approaches to robotics problems. This is precisely what makes the Virtual Track of SubT appealing to us, and since starting SubT, autonomy has developed into a significant research thrust for our Institute. Plus, robots are fun!
After many hours of research, discussion, and collaboration, we submitted our proposal early in 2018. And several months later, we found out that we had won a contract and became a funded team (Team BARCS) in the SubT Virtual Track. Now we needed to actually make our strategy work for the first SubT Tunnel Circuit competition, taking place in August of 2019.
Building a team of virtual robots
A natural approach to robotics competitions like SubT is to start with the question of “what can X-type robot do” and then build a team and strategy around individual capabilities. A particular challenge for the SubT Virtual Track is that we can’t design our own systems; instead, we have to choose from a predefined set of simulated robots and sensors that DARPA provides, based on the real robots used by Systems Track teams. Our approach is to look at what a team of robots can do together, determining experimentally what the best team configuration is for each environment. By the final competition, ideally we will be demonstrating the value of combining platforms across multiple Systems Track teams into a single Virtual Track team. Each of the robot configurations in the competition has an associated cost, and team size is constrained by a total cost. This provides another impetus for limiting dependence on complex sensor packages, though our ranging preference is 3D lidar, which is the most expensive sensor!
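As a rough illustration of how that cost cap shapes team selection, here is a minimal sketch that simply enumerates affordable team compositions; the platform names and credit values are hypothetical placeholders, not DARPA's actual configuration pricing.

```python
# Minimal sketch of choosing a team composition under a total cost cap.
# Platform names and credit costs are hypothetical placeholders.
from itertools import combinations_with_replacement

PLATFORM_COST = {"ugv_lidar": 270, "ugv_camera": 150, "uav": 170}  # hypothetical credits
BUDGET = 1000  # hypothetical total team budget

def affordable_teams(max_robots: int = 6):
    """Enumerate every team composition that fits under the budget."""
    teams = []
    for size in range(1, max_robots + 1):
        for team in combinations_with_replacement(PLATFORM_COST, size):
            if sum(PLATFORM_COST[p] for p in team) <= BUDGET:
                teams.append(team)
    return teams

# Each affordable composition can then be evaluated experimentally in the
# simulator to find the best configuration for a given environment.
```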
Image: Michigan Tech Research Institute
The teams can rely on realistic physics and sensors but they start off with no maps of any kind, so the focus is on developing autonomous exploratory behavior, navigation methods, and object recognition for their simulated robots.
One of the frequent questions we receive about the Virtual Track is if it’s like a video game. While it may look similar on the surface, everything under the hood in a video game is designed to service the game narrative and play experience, not require novel research in AI and autonomy. The purpose of simulations, on the other hand, is to include full physics and sensor models (including noise and errors) to provide a testbed for prototyping and developing solutions to those real-world challenges. We are starting with realistic physics and sensors but no maps of any kind, so the focus is on developing autonomous exploratory behavior, navigation methods, and object recognition for our simulated robots.
Though the simulation is more like real life than a video game, it is not real life. Due to occasional software bugs, there are still non-physical events, like the robots falling through an invisible hole in the world or driving through a rock instead of over it or flipping head over heels when driving over a tiny lip between world tiles. These glitches, while sometimes frustrating, still allow the SubT Virtual platform to be realistic enough to support rapid prototyping of controller modules that will transition straightforwardly onto hardware, closing the loop between simulation and real-world robots.
Full autonomy for DARPA-hard scenarios
The Virtual Track requirement that the robotic agents be fully autonomous, rather than have a human supervisor, is a significant distinction between the Systems and Virtual Tracks of SubT. Our solutions must be hardened against software faults caused by things like missing and bad data since our robots can’t turn to us for help. In order for a team of robots to complete this objective reliably with no human-in-the-loop, all of the internal systems, from perception to navigation to control to actuation to communications, must be able to autonomously identify and manage faults and failures anywhere in the control chain.
The communications limitations in subterranean environments (both real and virtual) mean that we need to keep the amount of information shared between robots low, while making the usability of that information for joint decision-making high. This goal has guided much of our design for autonomous navigation and joint search strategy for our team. For example, instead of sharing the full SLAM map of the environment, our agents only share a simplified graphical representation of the space, along with data about frontiers they have not yet explored, and are able to merge this information with the graphs generated by other agents. The merged graph can then be used for planning and navigation without having full knowledge of the detailed 3D map.
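To make that idea concrete, here is a minimal sketch (not Team BARCS's actual code) of merging two such simplified graphs, using the networkx library for illustration. It assumes each node carries a position and a flag marking an unexplored frontier, and that node IDs are already namespaced per robot.

```python
# Minimal sketch: merge robot B's simplified topological graph into robot A's.
# Nodes within `merge_radius` meters of each other are treated as the same place.
import math
import networkx as nx

def merge_graphs(g_a: nx.Graph, g_b: nx.Graph, merge_radius: float = 2.0) -> nx.Graph:
    merged = g_a.copy()
    alias = {}  # node in g_b -> corresponding node in the merged graph
    for nb, db in g_b.nodes(data=True):
        match = next(
            (na for na, da in merged.nodes(data=True)
             if math.dist(da["pos"], db["pos"]) < merge_radius),
            None,
        )
        if match is None:
            merged.add_node(nb, **db)
            alias[nb] = nb
        else:
            # A frontier stays open only if *neither* robot has explored it yet.
            merged.nodes[match]["frontier"] = (
                merged.nodes[match].get("frontier", False) and db.get("frontier", False)
            )
            alias[nb] = match
    for u, v, de in g_b.edges(data=True):
        merged.add_edge(alias[u], alias[v], **de)
    return merged
```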
The Virtual Track requires that the robotic agents be fully autonomous. With no human-in-the-loop, all of the internal systems, from perception to navigation to control to actuation to communications, must be able to identify and manage faults and failures anywhere in the control chain.
Since the objective of the SubT program is to advance the state-of-the-art in rapid autonomous exploration and mapping of subterranean environments by robots, our first software design choices focused on the mapping task. The SubT virtual environments are sufficiently rich as to provide interesting problems in building so-called costmaps that accurately separate obstructions that are traversable (like ramps) from legitimately impassable obstructions. An extra complication we discovered in the first course, which took place in mining tunnels, was that the angle of the lowest beam of the lidar was parallel to the down ramps in the tunnel environment, so the robots could not “see” the ground (or sometimes even obstructions on the ramp) until they got close enough to the lip of the ramp to receive lidar reflections off the bottom of the ramp. In this case, we not only had to change the costmap to convince the robot that there was safe ground to reach over the lip of the ramp, but also had to change the path planner to get the robot to proceed with caution onto the top of the ramp in case there were previously unseen obstructions on the ramp.
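As a simplified illustration of the costmap idea (not the team's implementation), the sketch below marks cells as lethal when the local slope of a 2.5D height grid exceeds an illustrative traversability limit. That is the basic mechanism that lets gentle ramps stay drivable while steep drop-offs and boulder piles are treated as obstacles.

```python
# Minimal sketch: a slope-based costmap from a 2.5D height grid (meters).
# The cost convention (0 free, 254 lethal) mirrors common ROS costmap usage;
# the 20-degree slope limit is an illustrative placeholder.
import numpy as np

FREE, LETHAL = 0, 254

def costmap_from_heights(height_grid, cell_size=0.1, max_slope_deg=20.0):
    """Mark cells whose local slope exceeds the limit as lethal, others as free."""
    dz_dy, dz_dx = np.gradient(height_grid, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return np.where(slope_deg > max_slope_deg, LETHAL, FREE).astype(np.uint8)
```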
In addition to navigation in the costmaps, the robot must be able to generate its own goals to navigate to. This is what produces exploratory behavior when there is no map to start with. SLAM is used to generate a detailed map of the environment explored by a single robot—the space it has probed with its sensors. From the sensor data, we are able to extract information about the interior space of the environment while looking for holes in the data, to determine things like whether the current tunnel continues or ends, or how many tunnels meet at an intersection. Once we have some understanding of the interior space, we can place navigation goals in that space. These goals naturally update as the robot traverses the tunnel, allowing the entire space to be explored.
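A minimal sketch of that goal-generation step, under a standard occupancy-grid convention (0 free, 100 occupied, -1 unknown) rather than Team BARCS's actual data structures: frontier cells sit where known-free space borders unexplored space, and clusters of them become candidate navigation goals.

```python
# Minimal sketch of frontier-based goal generation on an occupancy grid.
import numpy as np
from scipy import ndimage

def frontier_goals(occ_grid, min_cluster_cells=5):
    """Return cluster centroids (row, col) of free cells that border unknown space."""
    free = occ_grid == 0
    unknown = occ_grid == -1
    # Grow the unknown region by one cell, then intersect with free space.
    frontier_cells = ndimage.binary_dilation(unknown) & free
    labels, n = ndimage.label(frontier_cells)
    goals = []
    for i in range(1, n + 1):
        cells = np.argwhere(labels == i)
        if len(cells) >= min_cluster_cells:
            goals.append(tuple(cells.mean(axis=0)))  # centroid of the cluster
    return goals
```

As the robot moves and the grid fills in, rerunning this extraction naturally updates the goal set until no frontiers remain.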
Sending our robots into the virtual unknown
The solutions for the Virtual Track competitions are tested by DARPA in multiple sequestered runs across many environments for each Circuit in the month prior to the Systems Track competition. We must wait until the joint award ceremony at the conclusion of the Systems Track to find out the results, and we are completely in the dark about placings before the awards are announced. It’s nerve-wracking! The challenges of the worlds used in the Circuit events are also hand-designed, so features of the worlds we use for development could be combined in ways we have not anticipated—it’s always interesting to see what features were prioritized after the event. We test everything in our controllers well enough to feel confident that we at least are submitting something reasonably stable and broadly capable, and once the solution is in, we can’t really do anything other than “let go” and get back to work on the next phase of development. Maybe it’s somewhat like sending your kid to college: “we did our best to prepare you for this world, little bots. Go do good.”
Image: Michigan Tech Research Institute
The first SubT competition was the Tunnel Circuit, featuring a labyrinthine environment that simulated human-engineered tunnels, including hazards such as vertical shafts and rubble.
The first competition was the Tunnel Circuit, in October 2019. This environment models human-engineered tunnels. Two substantial challenges in this environment were vertical shafts and rubble. Our team accrued 21 points over 15 competition runs in five separate tunnel environments for a second place finish, behind Team Coordinated Robotics.
The next phase of the SubT virtual competition was the Urban Circuit. Much of the difference between our Tunnel and Urban Circuit results came down to thorough testing to identify failure modes and implementations of checks and data filtering for fault tolerance. For example, in the SLAM nodes run by a single robot, the coordinates of the most recent sensor data are changed multiple times during processing and integration into the current global 3D map of the “visited” environment stored by that robot. If there is lag in IMU or clock data, the observation may be temporarily registered at a default location that is very far from the actual position. Since most of our decision processes for exploration are downstream from SLAM, this can cause faulty or impossible goals to be generated, and the robots then spend inordinate amounts of time trying to drive through walls. We updated our method to add a check to see whether the new map position has jumped a large distance from the prior map position; if so, we throw that data out.
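The check itself can be as simple as the following sketch (the threshold is an illustrative placeholder, not the value the team used):

```python
# Minimal sketch of the SLAM sanity check described above: drop observations
# whose registered position jumps an implausible distance from the last one.
import math

MAX_JUMP_M = 5.0  # illustrative: farther than the robot could travel between updates

def accept_slam_update(prev_pos, new_pos):
    """Return True if the new map position is plausible, False to discard it."""
    if prev_pos is None:
        return True
    return math.dist(prev_pos, new_pos) <= MAX_JUMP_M
```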
Image: Michigan Tech Research Institute
In open spaces like the rooms in the Urban circuit, we adjusted our approach to exploration through graph generation to allow the robots to accurately identify viable routes while helping to prevent forays off platform edges.
Our approach to exploration through graph generation based on identification of interior spaces allowed us to thoroughly explore the centers of rooms, although we did have to make some changes from the Tunnel circuit to achieve that. In the Tunnel circuit, we used a simplified graph of the environment based on landmarks like intersections. The advantage of this approach is that it is straightforward for two robots to compare how the graphs of the space they explored individually overlap. In open spaces like the rooms in the Urban circuit, we chose to instead use a more complex, less directly comparable graph structure based on the individual robot’s trajectory. This allowed the robots to accurately identify viable routes between features like subway station platforms and subway tracks, as well as to build up the navigation space for room interiors, while helping to prevent forays off the platform edges. Frontier information is also integrated into the graph, providing a uniform data structure for both goal selection and route planning.
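Here is a minimal sketch of that kind of trajectory-based graph, with illustrative names and spacing rather than the team's actual implementation: poses along the robot's path become chained nodes, detected frontiers attach to the nearest pose, and the same structure then serves both goal selection and route planning.

```python
# Minimal sketch of a trajectory-based exploration graph (illustrative only).
import math
import networkx as nx

class TrajectoryGraph:
    def __init__(self, node_spacing=1.0):
        self.g = nx.Graph()
        self.node_spacing = node_spacing
        self._last = None  # most recently added pose node

    def add_pose(self, pos):
        """Append a pose along the robot's trajectory, chaining it to the previous one."""
        if self._last is not None and math.dist(self.g.nodes[self._last]["pos"], pos) < self.node_spacing:
            return self._last
        nid = f"pose_{self.g.number_of_nodes()}"
        self.g.add_node(nid, pos=pos, frontier=False)
        if self._last is not None:
            self.g.add_edge(self._last, nid, weight=math.dist(self.g.nodes[self._last]["pos"], pos))
        self._last = nid
        return nid

    def add_frontier(self, pos):
        """Attach an unexplored frontier to the nearest pose node."""
        anchor = min(
            (n for n, d in self.g.nodes(data=True) if not d["frontier"]),
            key=lambda n: math.dist(self.g.nodes[n]["pos"], pos),
        )
        nid = f"frontier_{self.g.number_of_nodes()}"
        self.g.add_node(nid, pos=pos, frontier=True)
        self.g.add_edge(anchor, nid, weight=math.dist(self.g.nodes[anchor]["pos"], pos))
        return nid

    def route_to(self, start, goal):
        """Plan a route over the graph instead of the full 3D map."""
        return nx.shortest_path(self.g, start, goal, weight="weight")
```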
The results are in!
The award ceremony for the Urban Circuit was held concurrently with the Systems Track competition awards this past February in Washington State. We sent a team representative to participate in the Technical Interchange Meeting and present the approach for our team, and the rest of us followed along from our office space on the DARPAtv live stream. While we were confident in our solution, we had also been tracking the online leaderboard and knew our competitors were going to be submitting strong solutions. Since the competition environments are hand-designed, there are always novel challenges that could be presented in these environments as well. We knew we would put up a good fight, but it was very exciting to see BARCS appear in first place!
Any time we implement a new module in our control system, there is a lot of parameter tuning that has to happen to produce reliably good autonomous behavior. In the Urban Circuit, we did not sufficiently test some parameter values in our exploration modules. The effect of this was that the robots only chose to go down small hallways after they explored everything else in their environment, which meant very often they ran out of time and missed a lot of small rooms. This may be the biggest source of lost points for us in the Urban Circuit. One of our major plans going forward from the Urban Circuit is to integrate more sophisticated node selection methods, which can help our robots more intelligently prioritize which frontier nodes to visit. By going through all three Circuit challenges, we will learn how to appropriately add weights to the frontiers based on features of the individual environments. For the Final Challenge, when all three Circuit environments will be combined into large systems, we plan to implement adaptive controllers that will identify their environments and use the appropriate optimized parameters for that environment. In this way, we expect our agents to be able to (for example) prioritize hallways and other small spaces in Urban environments, and perhaps prioritize large openings over small in the Cave environments, if the small openings end up being treacherous overall.
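One way to picture the environment-adaptive frontier weighting described above is the sketch below; the environment classes, weights, and the 2-meter opening threshold are purely illustrative placeholders, not values from Team BARCS's controllers.

```python
# Minimal sketch of environment-adaptive frontier scoring.
import math

FRONTIER_WEIGHTS = {
    "urban":  {"small_opening": 2.0, "large_opening": 1.0},  # favor hallways and small rooms
    "cave":   {"small_opening": 0.5, "large_opening": 2.0},  # favor large passages
    "tunnel": {"small_opening": 1.0, "large_opening": 1.0},
}

def score_frontier(frontier, env_type, robot_pos):
    """Higher score = visit sooner; balances opening-size preference against travel distance."""
    weights = FRONTIER_WEIGHTS[env_type]
    kind = "small_opening" if frontier["width_m"] < 2.0 else "large_opening"
    return weights[kind] / (math.dist(robot_pos, frontier["pos"]) + 1e-6)

def pick_next_goal(frontiers, env_type, robot_pos):
    """Select the most promising frontier node to visit next."""
    return max(frontiers, key=lambda f: score_frontier(f, env_type, robot_pos))
```

An adaptive controller of the kind the team describes would first classify the environment and then look up (or tune) the appropriate weight set before scoring frontiers.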
Next for our team: Cave Circuit
Coming up next for Team BARCS is the Virtual Cave Circuit. We are in the middle of testing our hypothesis that our controller will transition from UGVs to UAVs and developing strategies for refining our solution to handle Cave Circuit environmental hazards. The UAVs have a shorter battery life than the UGVs, so executing a joint exploration strategy will also be a high priority for this event, as will completing our work on graph sharing and merging, which will give our robot teams more sophisticated options for navigation and teamwork. We’re reaching a threshold in development where we can start increasing the “smarts” of the robots, which we anticipate will be critical for the final competition, where all of the challenges of SubT will be combined to push the limits of innovation. The Cave Circuit will also have new environmental challenges to tackle: dynamic features such as rock falls have been added, which will block previously accessible passages in the cave environment. We think our controllers are well-poised to handle this new challenge, and we’re eager to find out if that’s the case.
As of now, the biggest worries for us are time and team composition. The Cave Circuit deadline has been postponed to October 15 due to COVID-19 delays, with the award ceremony in mid-November, but there have also been several very compelling additions to the testbed that we would like to experiment with before submission, including droppable networking ‘breadcrumbs’ and new simulated platforms. There are design trade-offs when balancing general versus specialist approaches to the controllers for these robots—since we are adding UAVs to our team for the first time, there are new decisions that will have to be made. For example, the UAVs can ascend into vertical spaces, but only have a battery life of 20 minutes. The UGVs, by contrast, have a 90-minute battery life. One of our strategies is to do an early return to base with one or more agents to buy down risk on making any artifact reports at all for the run, hedging against our other robots not making it back in time, a lesson learned from the Tunnel Circuit. Should a UAV take on this role, or is it better to have them explore deeper into the environment and instead report their artifacts to a UGV or network node, which comes with its own risks? Testing and experimentation to determine the best options takes time, which is always a worry when preparing for a competition! We also anticipate new competitors and stiffer competition all around.
Image: Michigan Tech Research Institute
Team BARCS now has a year to prepare for the final DARPA SubT Challenge event, expected to take place in late 2021.
Going forward from the Cave Circuit, we will have a year to prepare for the final DARPA SubT Challenge event, expected to take place in late 2021. What we are most excited about is increasing the level of intelligence of the agents in their teamwork and joint exploration of the environment. Since we will have (hopefully) built up robust approaches to handling each of the specific types of environments in the Tunnel, Urban, and Cave circuits, we will be aiming to push the limits on collaboration and efficiency among the agents in our team. We view this as a central research contribution of the Virtual Track to the Subterranean Challenge because intelligent, adaptive, multi-robot collaboration is an upcoming stage of development for integration of robots into our lives.
The Subterranean Challenge Virtual Track gives us a bridge for transitioning our more abstract research ideas and algorithms relevant to this degree of autonomy and collaboration onto physical systems, and exploring the tangible outcomes of implementing our work in the real world. And the next time there’s an incident in the basement of our building, the robots (and humans) of Team BARCS will be ready to respond.
Richard Chase, Ph.D., P.E., is a research scientist at Michigan Tech Research Institute (MTRI) and has 20 years of experience developing robotics and cyber physical systems in areas from remote sensing to autonomous vehicles. At MTRI, he works on a variety of topics such as swarm autonomy, human-swarm teaming, and autonomous vehicles. His research interests are the intersection of design, robotics, and embedded systems.
Sarah Kitchen is a Ph.D. mathematician working as a research scientist and an AI/Robotics focus area leader at MTRI. Her research interests include intelligent autonomous agents and multi-agent collaborative teams, as well as applications of autonomous robots to sensing systems.
This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001118C0124 and is released under Distribution Statement (Approved for Public Release, Distribution Unlimited). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.
#437543 This Is How We’ll Engineer Artificial ...
Take a Jeopardy! guess: this body part was once referred to as the “consummation of all perfection as an instrument.”
Answer: “What is the human hand?”
Our hands are insanely complex feats of evolutionary engineering. Densely-packed sensors provide intricate and ultra-sensitive feelings of touch. Dozens of joints synergize to give us remarkable dexterity. A “sixth sense” awareness of where our hands are in space connects them to the mind, making it possible to open a door, pick up a mug, and pour coffee in total darkness based solely on what they feel.
So why can’t robots do the same?
In a new article in Science, Dr. Subramanian Sundaram of Boston University and Harvard argues that it’s high time to rethink robotic touch. Scientists have long dreamed of artificially engineering robotic hands with the same dexterity and feedback that we have. Now, after decades, we’re at the precipice of a breakthrough thanks to two major advances. One, we better understand how touch works in humans. Two, we have the mega computational powerhouse called machine learning to recapitulate biology in silicon.
Robotic hands with a sense of touch—and the AI brain to match it—could overhaul our idea of robots. Rather than charming, if somewhat clumsy, novelties, robots equipped with human-like hands are far more capable of routine tasks—making food, folding laundry—and specialized missions like surgery or rescue. But machines aren’t the only ones to gain. For humans, robotic prosthetic hands equipped with accurate, sensitive, and high-resolution artificial touch are the next giant breakthrough to seamlessly link a biological brain to a mechanical hand.
Here’s what Sundaram laid out to get us to that future.
How Does Touch Work, Anyway?
Let me start with some bad news: reverse engineering the human hand is really hard. It’s jam-packed with over 17,000 sensors tuned to mechanical forces alone, not to mention sensors for temperature and pain. These force “receptors” rely on physical distortions—bending, stretching, curling—to signal to the brain.
The good news? We now have a far clearer picture of how biological touch works. Imagine a coin pressed into your palm. The sensors embedded in the skin, called mechanoreceptors, capture that pressure, and “translate” it into electrical signals. These signals pulse through the nerves on your hand to the spine, and eventually make their way to the brain, where they get interpreted as “touch.”
At least, that’s the simple version, but one too vague and not particularly useful for recapitulating touch. To get there, we need to zoom in.
The cells on your hand that collect touch signals, called tactile “first-order” neurons (enter Star Wars joke), are like upside-down trees. Intricate branches extend from their bodies, buried deep in the skin, to a vast area of the hand. Each neuron has its own little domain, called a “receptor field,” although some overlap. Like governors, these neurons manage a semi-dedicated region, so that any signal they transfer to the higher-ups—spinal cord and brain—is actually integrated from multiple sensors across a large distance.
It gets more intricate. The skin itself is a living entity that can regulate its own mechanical senses through hydration. Sweat, for example, softens the skin, which changes how it interacts with surrounding objects. Ever tried putting a glove onto a sweaty hand? It’s far more of a struggle than a dry one, and feels different.
In a way, the hand’s tactile neurons play a game of Morse Code. Through different frequencies of electrical beeps, they’re able to transfer information about an object’s size, texture, weight, and other properties, while also asking the brain for feedback to better control the object.
Biology to Machine
Reworking all of our hands’ greatest features into machines is absolutely daunting. But robots have a leg up—they’re not restricted to biological hardware. Earlier this year, for example, a team from Columbia engineered a “feeling” robotic finger using overlapping light emitters and sensors in a way loosely similar to receptor fields. Distortions in light were then analyzed with deep learning to translate into contact location and force.
Although a radical departure from our own electrical-based system, the Columbia team’s attempt was clearly based on human biology. They’re not alone. “Substantial progress is being made in the creation of soft, stretchable electronic skins,” said Sundaram, many of which can sense forces or pressure, although they’re currently still limited.
What’s promising, however, is the “exciting progress in using visual data,” said Sundaram. Computer vision has gained enormously from ubiquitous cameras and large datasets, making it possible to train powerful but data-hungry algorithms such as deep convolutional neural networks (CNNs).
By piggybacking on their success, we can essentially add “eyes” to robotic hands, a superpower we humans can’t imagine. Even better, CNNs and other classes of algorithms can be readily adapted for processing tactile data. Together, a robotic hand could use its eyes to scan an object, plan its movements for grasp, and use touch for feedback to adjust its grip. Maybe we’ll finally have a robot that easily rescues the phone sadly dropped into a composting toilet. Or something much grander to benefit humanity.
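To illustrate how image-trained machinery carries over to touch, here is a minimal PyTorch sketch (shapes and the contact-class count are illustrative assumptions) that treats a tactile array as a one-channel “pressure image” and predicts a contact-location class from it.

```python
# Minimal sketch: a small CNN that classifies contact location from a
# tactile pressure array treated as a low-resolution grayscale image.
import torch
import torch.nn as nn

class TactileCNN(nn.Module):
    def __init__(self, num_contact_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # pool to one value per channel
        )
        self.classifier = nn.Linear(32, num_contact_classes)

    def forward(self, pressure_map):          # shape: (batch, 1, H, W)
        x = self.features(pressure_map).flatten(1)
        return self.classifier(x)

# Example: a hypothetical 16x16 tactile array from one fingertip.
logits = TactileCNN()(torch.rand(1, 1, 16, 16))
```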
That said, relying too heavily on vision could also be a downfall. Take a robot that scans a wide area of rubble for signs of life during a disaster response. If touch relies on sight, then it would have to keep a continuous line of sight in a complex and dynamic setting—something computer vision doesn’t yet handle well.
A Neuromorphic Way Forward
Too Debbie Downer? I got your back! It’s hard to overstate the challenges, but what’s clear is that emerging machine learning tools can tackle data processing challenges. For vision, it’s distilling complex images into “actionable control policies,” said Sundaram. For touch, it’s easy to imagine the same. Couple the two together, and that’s a robotic super-hand in the making.
Going forward, argues Sundaram, we need to closely adhere to how the hand and brain process touch. Hijacking our biological “touch machinery” has already proved useful. In 2019, one team used a nerve-machine interface for amputees to control a robotic arm—the DEKA LUKE arm—and sense what the limb and attached hand were feeling. Pressure on the LUKE arm and hand activated an implanted neural interface, which zapped remaining nerves in a way that the brain processes as touch. When the AI analyzed pressure data similar to biological tactile neurons, the person was able to better identify different objects with their eyes closed.
“Neuromorphic tactile hardware (and software) advances will strongly influence the future of bionic prostheses—a compelling application of robotic hands,” said Sundaram, adding that the next step is to increase the density of sensors.
Two additional themes made the list for progressing toward a cyborg future. One is longevity, in that sensors on a robot need to be able to reliably produce large quantities of high-quality data—something that’s seemingly mundane, but is a practical limitation.
The other is going all-in-one. Rather than just a pressure sensor, we need something that captures the myriad of touch sensations. From feather-light to a heavy punch, from vibrations to temperatures, a tree-like architecture similar to our hands would help organize, integrate, and otherwise process data collected from those sensors.
Just a decade ago, mind-controlled robotics were considered a blue sky, stretch-goal neurotechnological fantasy. We now have a chance to “close the loop,” from thought to movement to touch and back to thought, and make some badass robots along the way.
Image Credit: PublicDomainPictures from Pixabay
#437504 A New and Improved Burger Robot’s on ...
No doubt about it, the pandemic has changed the way we eat. Never before have so many people who hated cooking been forced to learn how to prepare a basic meal for themselves. With sit-down restaurants limiting their capacity or shutting down altogether, consumption of fast food and fast-casual food has skyrocketed. Don’t feel like slaving over a hot stove? Just hit the drive through and grab a sandwich and some fries (the health implications of increased fast food consumption are another matter…).
Given our sudden immense need for paper-wrapped burgers and cardboard cartons of fries, fast food workers are now counted as essential. But what about their safety, both from a virus standpoint and from the usual risks of working in a busy kitchen (like getting burned by the stove or the hot oil from the fryer, cut by a slicer, etc.)? And how many orders of burgers and fries can humans possibly churn out in an hour?
Enter the robot. Three and a half years ago, a burger-flipping robot aptly named Flippy, made by Miso Robotics, made its debut at a fast food restaurant in California called CaliBurger. Now Flippy is on the market for anyone who wishes to purchase their own, with a price tag of $30,000 and a range of new capabilities—this burger bot has progressed far beyond just flipping burgers.
Flippy’s first iteration was already pretty impressive. It used machine learning software to locate and identify objects in front of it (rather than needing to have objects lined up in specific spots), and was able to learn from experience to improve its accuracy. Sensors on its grill-facing side took in thermal and 3D data to gauge the cooking process for multiple patties at a time, and cameras allowed the robot to ‘see’ its surroundings.
A system that digitally sent tickets to the kitchen from the restaurant’s front counter kept Flippy on top of how many burgers it should be cooking at any given time. Its key tasks were pulling raw patties from a stack and placing them on the grill, tracking each burger’s cook time and temperature, and transferring cooked burgers to a plate.
The new and improved Flippy can do all this and more. It can cook 19 different foods, including chicken wings, onion rings, french fries, and even the Impossible Burger (which, as you may know, isn’t actually made of meat, and that means it’s a little trickier to grill it to perfection).
Flippy’s handiwork. Image Credit: Miso Robotics
And instead of its body sitting on a cart on wheels (which took up a lot of space and meant the robot’s arm could get in the way of human employees), it’s now attached to a rail along the stove’s hood, and can move along the rail to access both the grill and the fryer (provided they’re next to each other, which in many fast food restaurants they are). In fact, Flippy has a new acronym attached to its name: ROAR, which stands for Robot on a Rail.
Flippy ROAR in action, artist rendering. Image Credit: Miso Robotics
Laser-equipped sensors make it safer for human employees to work near Flippy. The bot can automatically switch between different tools, such as a spatula for flipping patties and tongs for gripping the handle of a fryer basket. Its AI software will enable it to learn new skills over time.
Flippy’s interface. Image Credit: Miso Robotics
The first big restaurant chain to go all-in on Flippy was White Castle, which in July announced plans to pilot Flippy ROAR before year’s end. And just last month, Miso made the bot commercially available. The current cost is $30,000 (plus a monthly fee of $1,500 for use of the software), but the company hopes to bring the price down to $20,000 within the next year.
According to Business Insider, demand for the fast food robot is through the roof, probably given a significant boost by the pandemic—thanks, Covid-19. The pace of automation has picked up across multiple sectors, and will likely continue to accelerate as companies look to insure themselves against additional losses.
So for the immediate future, it seems that no matter what happens, we don’t have to worry about the supply of burgers, fries, onion rings, chicken wings, and the like running out.
Now if only Flippy had a cousin—perhaps named Leafy—who could chop vegetables and greens and put together fresh-made salads…
Maybe that can be Miso Robotics’ next project.
Image Credit: Miso Robotics