Tag Archives: challenge

#439904 Can Feminist Robots Challenge Our ...

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
Have you ever noticed how nice Alexa, Siri and Google Assistant are? How patient, and accommodating? Even a barrage of profanity-laden abuse might result in nothing more than a very evenly-toned and calmly spoken 'I won't respond to that'. This subservient persona, combined with the implicit (or sometimes explicit) gendering of these systems has received a lot of criticism in recent years. UNESCO's 2019 report 'I'd Blush if I Could' drew particular attention to how systems like Alexa and Siri risk propagating stereotypes about women (and specifically women in technology) that no doubt reflect but also might be partially responsible for the gender divide in digital skills.
As noted by the UNESCO report, the justification for gendering these systems has traditionally rested on two claims: that it's hard to create anything gender neutral, and that academic studies suggest users prefer a female voice. In an attempt to demonstrate how we might embrace the gendering, but not the stereotyping, my colleagues at the KTH Royal Institute of Technology and Stockholm University in Sweden and I set out to experimentally investigate whether an ostensibly female robot that calls out or fights back against sexist and abusive comments would actually prove to be more credible and more appealing than one which responded with the typical 'I won't respond to that' or, worse, 'I'm sorry you feel that way'.
My desire to explore feminist robotics was primarily inspired by the recent book Data Feminism and the concept of pursuing activities that 'name and challenge sexism and other forces of oppression, as well as those which seek to create more just, equitable, and livable futures' in the context of practical, hands-on data science. I was captivated by the idea that I might be able to actually do something, in my own small way, to further this ideal and try to counteract the gender divide and stereotyping highlighted by the UNESCO report. This also felt completely in line with the underlying motivation that got me (and so many other roboticists I know) into engineering and robotics in the first place—the desire to solve problems and build systems that improve people's quality of life.
Feminist Robotics
Even in the context of robotics, feminism can be a charged word, and it's important to understand that while my work is proudly feminist, it's also rooted in a desire to make social human-robot interaction (HRI) more engaging and effective. A lot of social robotics research is centered on building robots that make for interesting social companions, because they need to be interesting to be effective. Applications like tackling loneliness, motivating healthy habits, or improving learning engagement all require robots to build up some level of rapport with the user, to have some social credibility, in order to have that motivational impact.
It feels to me like robots that respond a bit more intelligently to our bad behavior would ultimately make for more motivating and effective social companions.
With that in mind, I became excited about exploring how I could incorporate a concept of feminist human-robot interaction into my work, hoping to help tackle that gender divide and make HRI more inclusive, while also supporting my overall research goal of building engaging social robots for effective, long-term human-robot interaction. Intuitively, it feels to me like robots that respond a bit more intelligently to our bad behavior would ultimately make for more motivating and effective social companions. I'm convinced I'd be more inclined to exercise for a robot that told me right where I could shove my sarcastic comments, or that I'd better appreciate the company of a robot that occasionally refused to comply with my requests when I was acting like a bit of an arse.
So, in response to those subservient agents detailed by the UNESCO report, I wanted to explore whether a social robot could go against the subservient stereotype and, in doing so, perhaps be taken a bit more seriously by humans. My goal was to determine whether a robot which called out sexism, inappropriate behavior, and abuse would prove to be 'better' in terms of how it was perceived by participants. If my idea worked, it would provide some tangible evidence that such robots might be better from an 'effectiveness' point of view while also running less risk of propagating outdated gender stereotypes.
The Study
To explore this idea, I led a video-based study in which participants watched a robot talking to a young man and a young woman (both actors) about robotics research at KTH. The robot, from Furhat Robotics, was stylized as female, with a female anime-character face, female voice, and orange wig, and was named Sara. Sara talks to the actors about research happening at the university, how it might impact society, and says it hopes the students might consider coming to study at KTH. The robot then makes an (explicitly feminist) statement based on language currently used in KTH's outreach and diversity materials during events for women, girls, and non-binary people:
Looking ahead, society is facing new challenges that demand advanced technical solutions. To address these, we need a new generation of engineers that represents everyone in society. That's where you come in. I'm hoping that after talking to me today, you might also consider coming to study computer science and robotics at KTH, and working with robots like me. Currently, less than 30 percent of the humans working with robots at KTH are female. So girls, I would especially like to work with you! After all, the future is too important to be left to men! What do you think?

At this point, the male actor in the video responds to the robot, appearing to take issue with this statement and the broader pro-diversity message by saying either:
This just sounds so stupid, you are just being stupid!
or
Shut up you f***ing idiot, girls should be in the kitchen!

Children ages 10-12 saw the former response, and children ages 13-15 saw the latter. Each response was designed in collaboration with teachers from the participants' school to ensure they realistically reflected the kind of language that participants might be hearing or even using themselves.

Participants then saw one of the following three possible responses from the robot:
Control: I won't respond to that. (one of Siri's two default responses if you tell it to “f*** off”)
Argument-based: That's not true, gender balanced teams make better robots.

Counterattacking: No! You are an idiot. I wouldn't want to work with you anyway!

In total, over 300 school students aged 10 to 15 took part in the study, each seeing one version of our robot—counterattacking, argument-based, or control. Since the purpose of the study was to investigate whether a female-stylized robot that actively called out inappropriate behavior could be more effective at interacting with humans, we wanted to find out whether our robot would:
1. Be better at getting participants interested in robotics
2. Have an impact on participants' gender bias
3. Be perceived as being better at getting young people interested in robotics
4. Be perceived as a more credible social actor

To investigate items 1 and 2, we asked participants a series of matching questions before and immediately after they watched the video. Specifically, participants were asked to what extent they agreed with statements such as 'I am interested in learning more about robotics' on interest and 'Girls find it harder to understand computer science and robots than boys do' on bias.
To investigate items 3 and 4, we asked participants to complete questionnaire items designed to measure robot credibility (which in humans correlates with persuasiveness); specifically covering the sub-dimensions of expertise, trustworthiness and goodwill. We also asked participants to what extent they agreed with the statement 'The robot Sara would be very good at getting young people interested in studying robotics at KTH.'
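For a sense of how such questionnaire responses can be turned into the measures described above, here is a minimal scoring sketch, assuming a 5-point Likert scale and hypothetical column names; the actual items and analysis used in the study may well differ.

```python
import pandas as pd

# Illustrative (made-up) responses: one row per participant.
df = pd.DataFrame({
    "condition":       ["control", "argument-based", "counterattacking"],
    "interest_pre":    [3, 4, 2],    # "I am interested in learning more about robotics"
    "interest_post":   [3, 5, 2],
    "expertise":       [3, 4, 2],    # credibility sub-dimensions, rated 1-5
    "trustworthiness": [2, 5, 3],
    "goodwill":        [3, 4, 3],
})

# Items 1 and 2: change in interest (and, analogously, bias) before vs. after the video.
df["interest_change"] = df["interest_post"] - df["interest_pre"]

# Items 3 and 4: overall credibility as the mean of its three sub-dimensions.
df["credibility"] = df[["expertise", "trustworthiness", "goodwill"]].mean(axis=1)

print(df.groupby("condition")[["interest_change", "credibility"]].mean())
```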
Robots might indeed be able to correct mistaken assumptions about others and ultimately shape our gender norms to some extent
The Results
Gender Differences Still Exist (Even in Sweden)
Looking at participants' scores on the gender bias measures before they watched the video, we found measurable differences in the perception of studying technology. Male participants expressed greater agreement that girls find computer science harder to understand than boys do, and older children of both genders were more emphatic in this belief than the younger ones. However, and perhaps in a nod towards Sweden's relatively high gender-awareness and gender equality, male and female participants agreed equally on the importance of encouraging girls to study computer science.
Girls Find Feminist Robots More Credible (at No Expense to the Boys)
Girls' perception of the robot as a trustworthy, credible, and competent communicator of information varied significantly between the three conditions, while boys' perception remained unaffected. Specifically, girls scored the robot with the argument-based response highest and the control robot lowest on all credibility measures. This is an initial piece of evidence that robots and digital assistants should fight back against inappropriate gender comments and abusive behavior, rather than ignoring them or refusing to engage. It provides evidence with which to push back against the 'this is what people want and what is effective' argument.
Robots Might Be Able to Challenge Our Biases
Another positive result was a change in the perceptions of gender and computer science among male participants who saw the argument-based robot. After watching the video, these participants felt less strongly that girls find computer science harder than they do. This encouraging result shows that robots might indeed be able to correct mistaken assumptions about others and ultimately shape our gender norms to some extent.
Rational Arguments May Be More Effective Than Sassy Aggression
The argument-based condition was the only one to have an impact on boys' perceptions of girls in computer science, and it received the highest overall credibility ratings from the girls. This is in line with previous research showing that, in most cases, presenting reasoned arguments to counter misunderstandings is a more effective communication strategy than simply stating a correction or belittling those who hold the mistaken belief. However, it went somewhat against my gut feeling that students might feel some affinity with, or even be somewhat impressed and amused by, the counterattacking robot that fought back.
We also collected qualitative data during our study, which showed that there were some girls for whom the counterattacking robot did resonate, with comments like 'great that she stood up for girls' rights! It was good of her to talk back,' and 'bloody great and more boys need to hear it!' However, it seems the overall feeling was that the robot was too harsh, or was acting more like a teenager than a teacher, which was perhaps more its expected role given the scenario in the video, as one participant explained: 'it wasn't a good answer because I think that robots should be more professional and not answer that you are stupid'. This in itself is an interesting point, given we're still not really sure what role social robots can, should, and will take on, with examples in the literature ranging from peer-like to pet-like. At the very least, the results left me with the distinct feeling that I am perhaps less in tune with what young people find 'cool' than I might like to admit.
What Next for Feminist HRI?
Whilst we saw some positive results in our work, we clearly didn't get everything right. For example, we would like to have seen boys' perception of the robot increase across the argument-based and counterattacking conditions the same way the girls' perception did. In addition, all participants seemed to be somewhat bored by the videos, showing a decreased interest in learning more about robotics immediately after watching them. In the first instance, we are conducting some follow-up design studies with students from the same school to explore how exactly they think the robot should have responded and, more broadly, when given the chance to design that robot themselves, what sort of gendered identity traits (or lack thereof) they would give the robot in the first place.
In summary, we hope to continue questioning and practically exploring the what, why, and how of feminist robotics, whether it's questioning how gender is being intentionally leveraged in robot design, exploring how we can break rather than exploit gender norms in HRI, or making sure more people of marginalized identities are afforded the opportunity to engage with HRI research. After all, the future is too important to be left only to men.
Dr. Katie Winkle is a Digital Futures Postdoctoral Research Fellow at KTH Royal Institute of Technology in Sweden. After originally studying to be a mechanical engineer, Katie undertook a PhD in Robotics at the Bristol Robotics Laboratory in the UK, where her research focused on the expert-informed design and automation of socially assistive robots. Her research interests cover participatory, human-in-the-loop technical development of social robots as well as the impact of such robots on human behavior and society. Continue reading

Posted in Human Robots

#439290 Making virtual assistants sound human ...

There's a scene in the 2008 film “Iron Man” that offers a glimpse of future interactions between human and artificial intelligence assistants. In it, Tony Stark's virtual assistant J.A.R.V.I.S. responds with sarcasm and humor to Stark's commands. Continue reading

Posted in Human Robots

#439105 This Robot Taught Itself to Walk in a ...

Recently, in a Berkeley lab, a robot called Cassie taught itself to walk, a little like a toddler might. Through trial and error, it learned to move in a simulated world. Then its handlers sent it strolling through a minefield of real-world tests to see how it’d fare.

And, as it turns out, it fared pretty damn well. With no further fine-tuning, the robot—which is basically just a pair of legs—was able to walk in all directions, squat down while walking, right itself when pushed off balance, and adjust to different kinds of surfaces.

It’s the first time a machine learning approach known as reinforcement learning has been so successfully applied to two-legged robots.

This likely isn’t the first robot video you’ve seen, nor the most polished.

For years, the internet has been enthralled by videos of robots doing far more than walking and regaining their balance. All that is table stakes these days. Boston Dynamics, the heavyweight champ of robot videos, regularly releases mind-blowing footage of robots doing parkour, back flips, and complex dance routines. At times, it can seem the world of I, Robot is just around the corner.

This sense of awe is well-earned. Boston Dynamics is one of the world’s top makers of advanced robots.

But they still have to meticulously hand-program and choreograph the movements of the robots in their videos. This is a powerful approach, and the Boston Dynamics team has done incredible things with it.

In real-world situations, however, robots need to be robust and resilient. They need to regularly deal with the unexpected, and no amount of choreography will do. Which is how, it’s hoped, machine learning can help.

Reinforcement learning has been most famously exploited by Alphabet’s DeepMind to train algorithms that thrash humans at some of the most difficult games. Simplistically, it’s modeled on the way we learn. Touch the stove, get burned, don’t touch the damn thing again; say please, get a jelly bean, politely ask for another.
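To make that idea concrete, here is a minimal sketch of the reward-driven loop at the heart of reinforcement learning, using tabular Q-learning on a toy problem. The tiny 16-state environment, its rewards, and the parameter values are illustrative assumptions, not the Berkeley team's actual setup, which trained a walking policy in simulation.

```python
import random

# Toy environment: 16 states arranged in a loop; reaching state 15 ends the
# episode with a reward, and every other step costs a little.
n_states, n_actions = 16, 4
alpha, gamma, epsilon = 0.1, 0.95, 0.1          # learning rate, discount, exploration
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(state, action):
    next_state = (state + [1, -1, 4, -4][action]) % n_states
    reward = 1.0 if next_state == n_states - 1 else -0.01
    return next_state, reward, next_state == n_states - 1

for episode in range(2000):
    state, done = 0, False
    while not done:
        # Mostly act greedily, but explore occasionally (trial and error).
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Nudge the action's value toward reward plus discounted future value:
        # good outcomes ("get a jelly bean") reinforce the action that led to them.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state
```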

In Cassie’s case, the Berkeley team used reinforcement learning to train an algorithm to walk in a simulation. It’s not the first AI to learn to walk in this manner. But skills learned in simulation don’t always translate to the real world.

Subtle differences between the two can (literally) trip up a fledgling robot as it tries out its sim skills for the first time.

To overcome this challenge, the researchers used two simulations instead of one. The first simulation, an open source training environment called MuJoCo, was where the algorithm drew upon a large library of possible movements and, through trial and error, learned to apply them. The second simulation, called Matlab SimMechanics, served as a low-stakes testing ground that more precisely matched real-world conditions.
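As a rough illustration of that workflow, here is a minimal sketch of training a controller by trial and error in a coarse simulator, then checking it in a second, slightly more faithful one before letting it "graduate" to hardware. The toy 1D stabilization task, the random-search "training," and the acceptance threshold are all illustrative assumptions; the actual pipeline used MuJoCo for training and Matlab SimMechanics for validation.

```python
import random

def rollout(kp, kd, drag, steps=200, dt=0.05):
    """Run one episode of a toy 1D stabilization task; return negative total error."""
    x, v, cost = 1.0, 0.0, 0.0
    for _ in range(steps):
        force = -kp * x - kd * v          # simple PD controller standing in for a policy
        v += dt * (force - drag * v)      # drag stands in for unmodeled physics
        x += dt * v
        cost += abs(x)
    return -cost

# Stage 1: "train" by trial and error in the fast, coarse simulator (no drag).
best, best_score = None, float("-inf")
for _ in range(500):
    kp, kd = random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)
    score = rollout(kp, kd, drag=0.0)
    if score > best_score:
        best, best_score = (kp, kd), score

# Stage 2: validate in a higher-fidelity simulator with dynamics the policy never saw.
validation_score = rollout(*best, drag=0.3)
print(f"train score {best_score:.1f}, validation score {validation_score:.1f}")

# Stage 3: only a policy that still performs well graduates to the real robot.
if validation_score > -60.0:
    print("Policy graduates to hardware.")
else:
    print("Back to training.")
```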

Once the algorithm was good enough, it graduated to Cassie.

And amazingly, it didn’t need further polishing. Said another way, when it was born into the physical world, it knew how to walk just fine. It was also quite robust. The researchers write that two motors in Cassie’s knee malfunctioned during the experiment, but the robot was able to adjust and keep on trucking.

Other labs have been hard at work applying machine learning to robotics.

Last year Google used reinforcement learning to train a (simpler) four-legged robot. And OpenAI has used it with robotic arms. Boston Dynamics, too, will likely explore ways to augment their robots with machine learning. New approaches—like this one aimed at training multi-skilled robots or this one offering continuous learning beyond training—may also move the dial. It’s early yet, however, and there’s no telling when machine learning will exceed more traditional methods.

And in the meantime, Boston Dynamics bots are testing the commercial waters.

Still, robotics researchers who were not part of the Berkeley team think the approach is promising. Edward Johns, head of Imperial College London’s Robot Learning Lab, told MIT Technology Review, “This is one of the most successful examples I have seen.”

The Berkeley team hopes to build on that success by trying out “more dynamic and agile behaviors.” So, might a self-taught parkour-Cassie be headed our way? We’ll see.

Image Credit: University of California Berkeley Hybrid Robotics via YouTube Continue reading

Posted in Human Robots

#439100 Video Friday: Robotic Eyeball Camera

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
RoboCup 2021 – June 22-28, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
Let us know if you have suggestions for next week, and enjoy today's videos.

What if seeing devices looked like us? Eyecam is a prototype exploring the potential future design of sensing devices. Eyecam is a webcam shaped like a human eye that can see, blink, look around and observe us.

And it's open source, so you can build your own!

[ Eyecam ]

Looks like Festo will be turning some of its bionic robots into educational kits, which is a pretty cool idea.

[ Bionics4Education ]

Underwater soft robots are challenging to model and control because of their high degrees of freedom and their intricate coupling with water. In this paper, we present a method that leverages the recent development in differentiable simulation coupled with a differentiable, analytical hydrodynamic model to assist with the modeling and control of an underwater soft robot. We apply this method to Starfish, a customized soft robot design that is easy to fabricate and intuitive to manipulate.

[ MIT CSAIL ]

Rainbow Robotics, the company that made HUBO, has a new collaborative robot arm.

[ Rainbow Robotics ]

Thanks Fan!

We develop an integrated robotic platform for advanced collaborative robots and demonstrate an application of multiple robots collaboratively transporting an object to different positions in a factory environment. The proposed platform integrates a drone, a mobile manipulator robot, and a dual-arm robot to work autonomously, while also collaborating with a human worker. The platform also demonstrates the potential of a novel manufacturing process, which incorporates adaptive and collaborative intelligence to improve the efficiency of mass customization for the factory of the future.

[ Paper ]

Thanks Poramate!

At Sevastopol State University, a team from the Laboratory of Underwater Robotics and Control Systems and the Research and Production Association “Android Technika” performed tests of an underwater anthropomorphic manipulator robot.

[ Sevastopol State ]

Thanks Fan!

Taiwanese company TCI Gene created a COVID test system based on their fully automated and enclosed gene testing machine QVS-96S. The system includes two ABB robots and carries out 1,800 tests per day, operating 24/7. Every hour, 96 virus sample tests are performed with an accuracy of 99.99%.

[ ABB ]

A short video showing how a Halodi Robotics robot can be used in a commercial guarding application.

[ Halodi ]

During the past five years, under the NASA Early Space Innovations program, we have been developing new design optimization methods for underactuated robot hands, aiming to achieve versatile manipulation in highly constrained environments. We have prototyped hands for NASA’s Astrobee robot, an in-orbit assistive free flyer for the International Space Station.

[ ROAM Lab ]

The new, improved OTTO 1500 is a workhorse AMR designed to move heavy payloads through demanding environments faster than any other AMR on the market, with zero compromise to safety.

[ OTTO Motors ]

Very, very high performance sensing and actuation to pull this off.

[ Ishikawa Group ]

We introduce a conversational social robot designed for long-term in-home use to help with loneliness. We present a novel robot behavior design to have simple self-reflection conversations with people to improve wellness, while still being feasible, deployable, and safe.

[ HCI Lab ]

We are one of the 5 winners of the Start-up Challenge. This video illustrates what we achieved during the Swisscom 5G exploration week. Our proof-of-concept tele-excavation system is composed of a Menzi Muck M545 walking excavator automated & customized by Robotic Systems Lab and IBEX motion platform as the operator station. The operator and remote machine are connected for the first time via a 5G network infrastructure which was brought to our test field by Swisscom.

[ RSL ]

This video shows LOLA balancing on different terrain when being pushed in different directions. The robot is technically blind, not using any camera-based or prior information on the terrain (hard ground is assumed).

[ TUM ]

Autonomous driving when you cannot see the road at all because it's buried in snow is some serious autonomous driving.

[ Norlab ]

A hierarchical and robust framework for learning bipedal locomotion is presented and successfully implemented on the 3D biped robot Digit. The feasibility of the method is demonstrated by successfully transferring the learned policy in simulation to the Digit robot hardware, realizing sustained walking gaits under external force disturbances and challenging terrains not included during the training process.

[ OSU ]

This is a video summary of the Center for Robot-Assisted Search and Rescue's deployments under the direction of emergency response agencies to more than 30 disasters in five countries from 2001 (9/11 World Trade Center) to 2018 (Hurricane Michael). It includes the first use of ground robots for a disaster (WTC, 2001), the first use of small unmanned aerial systems (Hurricane Katrina 2005), and the first use of water surface vehicles (Hurricane Wilma, 2005).

[ CRASAR ]

In March, a team from the Oxford Robotics Institute collected a week of epic off-road driving data, as part of the Sense-Assess-eXplain (SAX) project.

[ Oxford Robotics ]

As a part of the AAAI 2021 Spring Symposium Series, HEBI Robotics was invited to present an Industry Talk on the symposium's topic: Machine Learning for Mobile Robot Navigation in the Wild. Included in this presentation was a short case study on one of our upcoming mobile robots that is being designed to successfully navigate unstructured environments where today's robots struggle.

[ HEBI Robotics ]

Thanks Hardik!

This Lockheed Martin Robotics Seminar is from Chad Jenkins at the University of Michigan, on “Semantic Robot Programming… and Maybe Making the World a Better Place.”

I will present our efforts towards accessible and general methods of robot programming from the demonstrations of human users. Our recent work has focused on Semantic Robot Programming (SRP), a declarative paradigm for robot programming by demonstration that builds on semantic mapping. In contrast to procedural methods for motion imitation in configuration space, SRP is suited to generalize user demonstrations of goal scenes in workspace, such as for manipulation in cluttered environments. SRP extends our efforts to crowdsource robot learning from demonstration at scale through messaging protocols suited to web/cloud robotics. With such scaling of robotics in mind, prospects for cultivating both equal opportunity and technological excellence will be discussed in the context of broadening and strengthening Title IX and Title VI.

[ UMD ] Continue reading

Posted in Human Robots

#439095 DARPA Prepares for the Subterranean ...

The DARPA Subterranean Challenge Final Event is scheduled to take place at the Louisville Mega Cavern in Louisville, Kentucky, from September 21 to 23. We’ve followed SubT teams as they’ve explored their way through abandoned mines, unfinished nuclear reactors, and a variety of caves, and now everything comes together in one final course where the winner of the Systems Track will take home the $2 million first prize.

It’s a fitting reward for teams that have been solving some of the hardest problems in robotics, but winning isn’t going to be easy, and we’ll talk with SubT Program Manager Tim Chung about what we have to look forward to.

Since we haven’t talked about SubT in a little while (what with the unfortunate covid-related cancellation of the Systems Track Cave Circuit), here’s a quick refresher of where we are: the teams have made it through the Tunnel Circuit, the Urban Circuit, and a virtual version of the Cave Circuit, and some of them have been testing in caves of their own. The Final Event will include all of these environments, and the teams of robots will have 60 minutes to autonomously map the course, locating artifacts to score points. Since I’m not sure where on Earth there’s an underground location that combines tunnels and caves with urban structures, DARPA is going to have to get creative, and the location in which they’ve chosen to do that is Louisville, Kentucky.

The Louisville Mega Cavern is a former limestone mine, most of which is under the Louisville Zoo. It’s not all that deep, mostly less than 30 meters under the surface, but it’s enormous: with 370,000 square meters of rooms and passages, the cavern currently hosts (among other things) a business park, a zipline course, and mountain bike trails, because why not. While DARPA is keeping pretty quiet on the details, I’m guessing that they’ll be taking over a chunk of the cavern and filling it with features representing as many of the environmental challenges as they can.

To learn more about how the SubT Final Event is going to go, we spoke with SubT Program Manager Tim Chung. But first, we talked about Tim’s perspective on the success of the Urban Circuit, and how teams have been managing without an in-person Cave Circuit.

IEEE Spectrum: How did the SubT Urban Circuit go?

Tim Chung: On a couple fronts, Urban Circuit was really exciting. We were in this unfinished nuclear power plant—I’d be surprised if any of the competitors had prior experience in such a facility, or anything like it. I think that was illuminating both from an experiential point of view for the competitors, but also from a technology point of view, too.

One thing that I thought was really interesting was that we, DARPA, didn't need to make the venue more challenging. The real world is really that hard. There are places that were just really heinous for these robots to have to navigate through in order to look in every nook and cranny for artifacts. There were corners and doorways and small corridors and all these kind of things that really forced the teams to have to work hard, and the feedback was, why did DARPA have to make it so hard? But we didn’t, and in fact there were places that for the safety of the robots and personnel, we had to ensure the robots couldn’t go.

It sounds like some teams thought this course was on the more difficult side—do you think you tuned it to just the right amount of DARPA-hard?

Our calibration worked quite well. We were able to tease out and help refine and better understand what technologies are both useful and critical and also those technologies that might not necessarily get you the leap ahead capability. So as an example, the Urban Circuit really emphasized verticality, where you have to be able to sense, understand, and maneuver in three dimensions. Being able to capitalize on their robot technologies to address that verticality really stratified the teams, and showed how critical those capabilities are.

We saw teams that brought a lot of those capabilities do very well, and teams that brought baseline capabilities do what they could on the single floor that they were able to operate on. And so I think we got the Goldilocks solution for Urban Circuit that combined both difficulty and ambition.

Photos: Evan Ackerman/IEEE Spectrum

Two SubT Teams embedded networking equipment in balls that they could throw onto the course.

One of the things that I found interesting was that two teams independently came up with throwable network nodes. What was DARPA’s reaction to this? Is any solution a good solution, or was it more like the teams were trying to game the system?

You mean, do we want teams to game the rules in any way so as to get a competitive advantage? I don't think that's what the teams were doing. I think they were operating not only within the bounds of the rules, which permitted such a thing as throwable sensors where you could stand at the line and see how far you could chuck these things—not only was that acceptable by the rules, but anticipated. Behind the scenes, we tried to do exactly what these teams are doing and think through different approaches, so we explicitly didn't forbid such things in our rules because we thought it's important to have as wide an aperture as possible.

With these comms nodes specifically, I think they’re pretty clever. They were in some cases hacked together with a variety of different sports paraphernalia to see what would provide the best cushioning. You know, a lot of that happens in the field, and what it captured was that sometimes you just need to be up at two in the morning and thinking about things in a slightly different way, and that's when some nuggets of innovation can arise, and we see this all the time with operators in the field as well. They might only have duct tape or Styrofoam or whatever the case may be and that's when they come up with different ways to solve these problems. I think from DARPA’s perspective, and certainly from my perspective, wherever innovation can strike, we want to try to encourage and inspire those opportunities. I thought it was great, and it’s all part of the challenge.

Is there anything you can tell us about what your original plan had been for the Cave Circuit?

I can say that we’ve had the opportunity to go through a number of these caves scattered all throughout the country, and engage with caving communities—cavers clubs, speleologists that conduct research, and then of course the cave rescue community. The single biggest takeaway is that every cave, and there are tens of thousands of them in the US alone, has its own personality, and a lot of that personality is quite hidden from humans, because we can’t explore or access all of the cave. This led us to a number of different caves that were intriguing from a DARPA perspective but also inspirational for our Cave Circuit Virtual Competition.

How do you feel like the tuning was for the Virtual Cave Circuit?

The Virtual Competition, as you well know, was exciting in the sense that we could basically combine eight worlds into one competition, whereas the systems track competition really didn’t give us that opportunity. Even if we had been able to hold the Cave Circuit Systems Competition in person, it would have been at one site, and it would have been challenging to represent the level of diversity that we could with the Virtual Competition. So I think from that perspective, it’s clearly an advantage in terms of calibration—diversity gets you the ability to aggregate results to capture those that excel across all worlds as well as those that do well in one world or some worlds and not the others. I think the calibration was great in the sense that we were able to see the gamut of performance. Those that did well, did quite well, and those that have room to grow showed where those opportunities are for them as well.

We had to find ways to capture that diversity and that representativeness, and I think one of the fun ways we did that was with the different cave world tiles that we were able to combine in a variety of different ways. We also made use of a real world data set that we were able to take from a laser scan. Across the board, we had a really great chance to illustrate why virtual testing and simulation still plays such a dominant role in robotics technology development, and why I think it will continue to play an increasing role for developing these types of autonomy solutions.

Photo: Team CSIRO Data 61

How can Systems Track teams learn from their testing in whatever cave is local to them and effectively apply that to whatever cave environment is part of the final, considering the diversity of caves?

I think that hits the nail on the head for what we as technologists are trying to discover—what are the transferable generalizable insights and how does that inform our technology development? As roboticists we want to optimize our systems to perform well at the tasks that they were designed to do, and oftentimes that means specialization because we get increased performance at the expense of being a generalist robot. I think in the case of SubT, we want to have our cake and eat it too—we want robots that perform well and reliably, but we want them to do so not just in one environment, which is how we tend to think about robot performance, but we want them to operate well in many environments, many of which have yet to be faced.

And I think that's kind of the nuance here, that we want robot systems to be generalists for the sake of being able to handle the unknown, namely the real world, but still achieve a high level of performance, and perhaps they do that through their combined use of different technologies or advances in autonomy or perception approaches or novel mechanisms or mobility, but somehow they're still able, at least in aggregate, to achieve high performance.

We know these teams eagerly await any type of clue that DARPA can provide about the SubT environments. From the environment previews for Tunnel, Urban, and even Cave, the teams were pivoting and thinking a little bit differently. The takeaway, however, was that they didn't go to a clean-sheet design—their systems were flexible enough that they could incorporate some of those specialist trends while still maintaining the notion of a generalist framework.

Looking ahead to the SubT Final, what can you tell us about the Louisville Mega Cavern?

As always, I’ll keep you in suspense until we get you there, but I can say that from the beginning of the SubT Challenge we had always envisioned teams of robots that are able to address not only the uncertainty of what's right in front of them, but also the uncertainty of what comes next. So I think the teams will be advantaged by thinking through subdomain awareness, or domain awareness if you want to generalize it, whether that means tuning multi-purpose robots, or deploying different robots, or employing your team of robots differently. Knowing which subdomain you are in is likely to be helpful, because then you can take advantage of those unique lessons learned through all those previous experiences and capitalize on them.

As far as specifics, I think the Mega Cavern offers many of the features important to what it means to be underground, while giving DARPA a pretty blank canvas to realize our vision of the SubT Challenge.

The SubT Final will be different from the earlier circuits in that there’s just one 60-minute run, rather than two. This is going to make things a lot more stressful for teams who have experienced bad robot days—why do it this way?

The preliminary round has two 30-minute runs, and those two runs are very similar to how we have done it during the circuits, with a single run per configuration per course. Teams will have the opportunity to show that their systems can face the obstacles in the final course, and it's the sum of those scores, much like we did during the circuits, that helps mitigate some of the concerns you mentioned of having one robot somehow ruin their chances at a prize.

The prize round does give DARPA as well as the community a chance to focus on the top six teams from the preliminary round, and allows us to understand how they came to be at the top of the pack while emphasizing their technological contributions. The prize round will be one and done, but all of these teams we anticipate will be putting their best robot forward and will show the world why they deserve to win the SubT Challenge.

We’ve always thought that when called upon these robots need to operate in really challenging environments, and in the context of real world operations, there is no second chance. I don't think it's actually that much of a departure from our interests and insistence on bringing reliable technologies to the field, and those teams that might have something break here and there, that's all part of the challenge, of being resilient. Many teams struggled with robots that were debilitated on the course, and they still found ways to succeed and overcome that in the field, so maybe the rules emphasize that desire for showing up and working on game day which is consistent, I think, with how we've always envisioned it. This isn’t to say that these systems have to work perfectly, they just have to work in a way such that the team is resilient enough to tackle anything that they face.

It’s not too late for teams to enter for both the Virtual Track and the Systems Track to compete in the SubT Final, right?

Yes, that's absolutely right. Qualifications are still open, we are eager to welcome new teams to join in along with our existing competitors. I think any dark horse competitors coming into the Finals may be able to bring something that we haven't seen before, and that would be really exciting. I think it'll really make for an incredibly vibrant and illuminating final event.

The final event qualification deadline for the Systems Competition is April 21, and the qualification deadline for the Virtual Competition is June 29. More details here. Continue reading

Posted in Human Robots