
#439798 This Week’s Awesome Tech Stories From ...

ROBOTICS
How DeepMind Is Reinventing the Robot
Tom Chivers | IEEE Spectrum
“Having conquered Go and protein folding, the company turns to a really hard problem. …To get to the next level, researchers are trying to fuse AI and robotics to create an intelligence that can make decisions and control a physical body in the messy, unpredictable, and unforgiving real world.”

NANOTECH
Microscopic Metavehicles Are Pushed and Steered by Light
Ben Coxworth | New Atlas
“Although solar-powered devices are now fairly common, Swedish scientists have created something a little different. They’ve built tiny ‘metavehicles’ that are mechanically propelled and guided via waves of light. …[It’s] hoped that the technology may someday be utilized in applications such as moving micro-particles through solutions inside of or adjacent to cells.”

3D PRINTING
How an 11-Foot-Tall 3D Printer Is Helping to Create a Community
Debra Kamin | The New York Times
“When New Story broke ground on the village in 2019, it was called the world’s first community of 3D printed homes. Two years and a pandemic later, 200 homes are either under construction or have been completed, 10 of which were printed on site by Icon’s Vulcan II printer. Plans for roads, a soccer field, a school, a market and a library are in the works.”

ARTIFICIAL INTELLIGENCE
Why OpenAI’s Codex Won’t Replace Coders
Thomas Smith | IEEE Spectrum
“If you’re a software developer yourself—or your company has spent tons of money hiring them—you can breathe easy. Codex won’t replace human developers any time soon, though it may make them far more powerful, efficient, and focused.”

FUTURE
Humans Can’t Be the Sole Keepers of Scientific Knowledge
Iulia Georgescu | Wired
“It’s clear that we do not really know what we know, because nobody can read the entire literature even in their own narrow field (which includes, in addition to journal articles, PhD theses, lab notes, slides, white papers, technical notes, and reports). …To solve this problem we need to make science papers not only machine-readable but machine-understandable, by (re)writing them in a special type of programming language. In other words: Teach science to machines in the language they understand.”

SCIENCE FICTION
Dune Foresaw—and Influenced—Half a Century of Global Conflict
Andy Greenberg | Wired
“…reading Dune a half century later, when many of Herbert’s environmental and psychological ideas have either blended into the mainstream or gone out of style—and in the wake of the disastrous fall of the US-backed government in Afghanistan after a 20-year war—it’s hard not to be struck, instead, by the book’s focus on human conflict: an intricate, deeply detailed world of factions relentlessly vying for power and advantage by exploiting every tool available to them.”

SPACE
Space Policy Is Finally Moving Into the 21st Century
Tatyana Woodall | MIT Technology Review
“This week, the United Nations Institute for Disarmament Research held its annual Outer Space Security Conference in Geneva, Switzerland (participants had the option to attend virtually or in person). For two days, diplomats, researchers, and military officials from around the world met to discuss threats and challenges, arms control, and space security. Their conversations provided a window into what new space policies might look like.”

Image Credit: 光曦 刘 / Unsplash


#439773 How the U.S. Army Is Turning Robots Into ...

This article is part of our special report on AI, “The Great AI Reckoning.”

“I should probably not be standing this close,” I think to myself, as the robot slowly approaches a large tree branch on the floor in front of me. It's not the size of the branch that makes me nervous—it's that the robot is operating autonomously, and that while I know what it's supposed to do, I'm not entirely sure what it will do. If everything works the way the roboticists at the U.S. Army Research Laboratory (ARL) in Adelphi, Md., expect, the robot will identify the branch, grasp it, and drag it out of the way. These folks know what they're doing, but I've spent enough time around robots that I take a small step backwards anyway.

The robot, named RoMan, for Robotic Manipulator, is about the size of a large lawn mower, with a tracked base that helps it handle most kinds of terrain. At the front, it has a squat torso equipped with cameras and depth sensors, as well as a pair of arms that were harvested from a prototype disaster-response robot originally developed at NASA's Jet Propulsion Laboratory for a DARPA robotics competition. RoMan's job today is roadway clearing, a multistep task that ARL wants the robot to complete as autonomously as possible. Instead of instructing the robot to grasp specific objects in specific ways and move them to specific places, the operators tell RoMan to “go clear a path.” It's then up to the robot to make all the decisions necessary to achieve that objective.

The ability to make decisions autonomously is not just what makes robots useful, it's what makes robots robots. We value robots for their ability to sense what's going on around them, make decisions based on that information, and then take useful actions without our input. In the past, robotic decision making followed highly structured rules—if you sense this, then do that. In structured environments like factories, this works well enough. But in chaotic, unfamiliar, or poorly defined settings, reliance on rules makes robots notoriously bad at dealing with anything that could not be precisely predicted and planned for in advance.

RoMan, along with many other robots including home vacuums, drones, and autonomous cars, handles the challenges of semistructured environments through artificial neural networks—a computing approach that loosely mimics the structure of neurons in biological brains. About a decade ago, artificial neural networks began to be applied to a wide variety of semistructured data that had previously been very difficult for computers running rules-based programming (generally referred to as symbolic reasoning) to interpret. Rather than recognizing specific data structures, an artificial neural network is able to recognize data patterns, identifying novel data that are similar (but not identical) to data that the network has encountered before. Indeed, part of the appeal of artificial neural networks is that they are trained by example, by letting the network ingest annotated data and learn its own system of pattern recognition. For neural networks with multiple layers of abstraction, this technique is called deep learning.
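To make that distinction concrete, here is a minimal, purely illustrative Python sketch. The rule-based classifier answers only for sensor readings it was explicitly programmed to expect, while the similarity-based classifier (a crude nearest-neighbor stand-in for a trained neural network, not actual deep learning) also handles novel inputs that resemble its training examples. All feature values and labels are invented.

# A toy contrast between rule-based (symbolic) classification and
# pattern-based classification that tolerates novel-but-similar inputs.
# Feature vectors are hypothetical: (length_m, diameter_m) of an object.

RULES = {
    (1.0, 0.1): "branch",  # symbolic rule: these exact sensed values -> label
    (0.3, 0.3): "rock",
}

def classify_by_rule(features):
    # Breaks down on anything not precisely predicted in advance.
    return RULES.get(features, "unknown")

TRAINING_EXAMPLES = [  # annotated data the learner has ingested
    ((1.0, 0.1), "branch"),
    ((1.2, 0.08), "branch"),
    ((0.3, 0.3), "rock"),
    ((0.25, 0.35), "rock"),
]

def classify_by_similarity(features):
    # Pattern recognition: take the label of the closest known example,
    # so data similar (but not identical) to the training data still match.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(TRAINING_EXAMPLES, key=lambda ex: dist(ex[0], features))
    return label

novel = (1.1, 0.09)  # a branch the system has never seen before
print(classify_by_rule(novel))        # -> "unknown"
print(classify_by_similarity(novel))  # -> "branch"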

Even though humans are typically involved in the training process, and even though artificial neural networks were inspired by the neural networks in human brains, the kind of pattern recognition a deep learning system does is fundamentally different from the way humans see the world. It's often nearly impossible to understand the relationship between the data input into the system and the interpretation of the data that the system outputs. And that difference—the “black box” opacity of deep learning—poses a potential problem for robots like RoMan and for the Army Research Lab.

In chaotic, unfamiliar, or poorly defined settings, reliance on rules makes robots notoriously bad at dealing with anything that could not be precisely predicted and planned for in advance.

This opacity means that robots that rely on deep learning have to be used carefully. A deep-learning system is good at recognizing patterns, but lacks the world understanding that a human typically uses to make decisions, which is why such systems do best when their applications are well defined and narrow in scope. “When you have well-structured inputs and outputs, and you can encapsulate your problem in that kind of relationship, I think deep learning does very well,” says Tom Howard, who directs the University of Rochester's Robotics and Artificial Intelligence Laboratory and has developed natural-language interaction algorithms for RoMan and other ground robots. “The question when programming an intelligent robot is, at what practical size do those deep-learning building blocks exist?” Howard explains that when you apply deep learning to higher-level problems, the number of possible inputs becomes very large, and solving problems at that scale can be challenging. And the potential consequences of unexpected or unexplainable behavior are much more significant when that behavior is manifested through a 170-kilogram two-armed military robot.

After a couple of minutes, RoMan hasn't moved—it's still sitting there, pondering the tree branch, arms poised like a praying mantis. For the last 10 years, the Army Research Lab's Robotics Collaborative Technology Alliance (RCTA) has been working with roboticists from Carnegie Mellon University, Florida State University, General Dynamics Land Systems, JPL, MIT, QinetiQ North America, University of Central Florida, the University of Pennsylvania, and other top research institutions to develop robot autonomy for use in future ground-combat vehicles. RoMan is one part of that process.

The “go clear a path” task that RoMan is slowly thinking through is difficult for a robot because the task is so abstract. RoMan needs to identify objects that might be blocking the path, reason about the physical properties of those objects, figure out how to grasp them and what kind of manipulation technique might be best to apply (like pushing, pulling, or lifting), and then make it happen. That's a lot of steps and a lot of unknowns for a robot with a limited understanding of the world.

This limited understanding is where the ARL robots begin to differ from other robots that rely on deep learning, says Ethan Stump, chief scientist of the AI for Maneuver and Mobility program at ARL. “The Army can be called upon to operate basically anywhere in the world. We do not have a mechanism for collecting data in all the different domains in which we might be operating. We may be deployed to some unknown forest on the other side of the world, but we'll be expected to perform just as well as we would in our own backyard,” he says. Most deep-learning systems function reliably only within the domains and environments in which they've been trained. Even if the domain is something like “every drivable road in San Francisco,” the robot will do fine, because that's a data set that has already been collected. But, Stump says, that's not an option for the military. If an Army deep-learning system doesn't perform well, they can't simply solve the problem by collecting more data.

ARL's robots also need to have a broad awareness of what they're doing. “In a standard operations order for a mission, you have goals, constraints, a paragraph on the commander's intent—basically a narrative of the purpose of the mission—which provides contextual info that humans can interpret and gives them the structure for when they need to make decisions and when they need to improvise,” Stump explains. In other words, RoMan may need to clear a path quickly, or it may need to clear a path quietly, depending on the mission's broader objectives. That's a big ask for even the most advanced robot. “I can't think of a deep-learning approach that can deal with this kind of information,” Stump says.
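As a loose, hypothetical illustration of the kind of structure an operations order provides, the sketch below represents a mission as data (goal, constraints, commander's intent) and uses a hand-written interpreter to vary how the same path-clearing task is carried out. Every field and behavior here is invented for illustration; it is not drawn from any Army system.

# Hypothetical mission context: structured goals and constraints plus a
# narrative statement of intent, loosely mirroring an operations order.
MISSION = {
    "goal": "clear_path",
    "constraints": {"noise": "low"},  # e.g., clear the path quietly
    "commanders_intent": "avoid detection while opening a supply route",
}

def select_behavior(mission):
    # A rule-based interpreter of context: the same task is executed
    # differently depending on the mission's broader objectives.
    if mission["constraints"].get("noise") == "low":
        return {"speed": "slow", "technique": "lift", "motors": "quiet_mode"}
    return {"speed": "fast", "technique": "drag", "motors": "normal"}

print(select_behavior(MISSION))  # -> the quiet variant of path clearing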

Robots at the Army Research Lab test autonomous navigation techniques in rough terrain [top, middle] with the goal of being able to keep up with their human teammates. ARL is also developing robots with manipulation capabilities [bottom] that can interact with objects so that humans don't have to. Photos: Evan Ackerman

While I watch, RoMan is reset for a second try at branch removal. ARL's approach to autonomy is modular, where deep learning is combined with other techniques, and the robot is helping ARL figure out which tasks are appropriate for which techniques. At the moment, RoMan is testing two different ways of identifying objects from 3D sensor data: UPenn's approach is deep-learning-based, while Carnegie Mellon is using a method called perception through search, which relies on a more traditional database of 3D models. Perception through search works only if you know exactly which objects you're looking for in advance, but training is much faster since you need only a single model per object. It can also be more accurate when perception of the object is difficult—if the object is partially hidden or upside-down, for example. ARL is testing these strategies to determine which is the most versatile and effective, letting them run simultaneously and compete against each other.
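Here is a minimal sketch of that compete-and-compare arrangement: two object-identification strategies run on the same sensor data, and an arbiter keeps whichever answer is more confident. Both recognizers are stubs, and every interface, label, and confidence value is invented for illustration.

def deep_learning_recognizer(point_cloud):
    # Stand-in for a learned model; returns (label, confidence).
    return ("tree_branch", 0.72)

def perception_through_search(point_cloud, model_database):
    # Stand-in for matching the scene against a database of known 3D
    # models; it only works for objects in the database, but can stay
    # confident when the object is partially hidden or upside down.
    best = max(model_database, key=lambda m: m["match_score"])
    return (best["label"], best["match_score"])

def identify_object(point_cloud, model_database):
    candidates = [
        deep_learning_recognizer(point_cloud),
        perception_through_search(point_cloud, model_database),
    ]
    return max(candidates, key=lambda c: c[1])  # most confident answer wins

database = [{"label": "tree_branch", "match_score": 0.88}]
print(identify_object(None, database))  # -> ('tree_branch', 0.88)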

Perception is one of the things that deep learning tends to excel at. “The computer vision community has made crazy progress using deep learning for this stuff,” says Maggie Wigness, a computer scientist at ARL. “We've had good success with some of these models that were trained in one environment generalizing to a new environment, and we intend to keep using deep learning for these sorts of tasks, because it's the state of the art.”

ARL's modular approach might combine several techniques in ways that leverage their particular strengths. For example, a perception system that uses deep-learning-based vision to classify terrain could work alongside an autonomous driving system based on an approach called inverse reinforcement learning, where the model can rapidly be created or refined by observations from human soldiers. Traditional reinforcement learning optimizes a solution based on established reward functions, and is often applied when you're not necessarily sure what optimal behavior looks like. This is less of a concern for the Army, which can generally assume that well-trained humans will be nearby to show a robot the right way to do things. “When we deploy these robots, things can change very quickly,” Wigness says. “So we wanted a technique where we could have a soldier intervene, and with just a few examples from a user in the field, we can update the system if we need a new behavior.” A deep-learning technique would require “a lot more data and time,” she says.
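For intuition, here is a toy sketch of the inverse-reinforcement-learning idea: instead of hand-coding a reward function, the system shifts terrain-preference weights toward what a human demonstrator actually drove over and away from what the robot's current plan prefers. The terrain features, update rule, and numbers are illustrative assumptions, not ARL's actual method.

# Toy inverse reinforcement learning: infer reward weights from a human
# demonstration rather than specifying them by hand.
FEATURES = ["grass", "gravel", "mud"]

def feature_counts(trajectory):
    # Fraction of the driven path spent on each terrain type.
    return [trajectory.count(f) / len(trajectory) for f in FEATURES]

def update_reward_weights(weights, human_demo, robot_plan, lr=0.5):
    # Move weights toward terrain the soldier demonstrated and away from
    # terrain the robot currently prefers (a gradient-style correction).
    human = feature_counts(human_demo)
    robot = feature_counts(robot_plan)
    return [w + lr * (h - r) for w, h, r in zip(weights, human, robot)]

weights = [0.0, 0.0, 0.0]
human_demo = ["grass", "grass", "gravel", "grass"]  # a few field examples
robot_plan = ["mud", "gravel", "mud", "mud"]
for _ in range(3):  # "just a few examples" can reshape the reward quickly
    weights = update_reward_weights(weights, human_demo, robot_plan)
print(dict(zip(FEATURES, [round(w, 2) for w in weights])))
# -> {'grass': 1.12, 'gravel': 0.0, 'mud': -1.12}: mud is now penalized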

It's not just data-sparse problems and fast adaptation that deep learning struggles with. There are also questions of robustness, explainability, and safety. “These questions aren't unique to the military,” says Stump, “but it's especially important when we're talking about systems that may incorporate lethality.” To be clear, ARL is not currently working on lethal autonomous weapons systems, but the lab is helping to lay the groundwork for autonomous systems in the U.S. military more broadly, which means considering ways in which such systems may be used in the future.

The requirements of a deep network are to a large extent misaligned with the requirements of an Army mission, and that's a problem.

Safety is an obvious priority, and yet there isn't a clear way of making a deep-learning system verifiably safe, according to Stump. “Doing deep learning with safety constraints is a major research effort. It's hard to add those constraints into the system, because you don't know where the constraints already in the system came from. So when the mission changes, or the context changes, it's hard to deal with that. It's not even a data question; it's an architecture question.” ARL's modular architecture, whether it's a perception module that uses deep learning or an autonomous driving module that uses inverse reinforcement learning or something else, can form parts of a broader autonomous system that incorporates the kinds of safety and adaptability that the military requires. Other modules in the system can operate at a higher level, using different techniques that are more verifiable or explainable and that can step in to protect the overall system from adverse unpredictable behaviors. “If other information comes in and changes what we need to do, there's a hierarchy there,” Stump says. “It all happens in a rational way.”
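One way to picture that hierarchy is the sketch below: a black-box learned module proposes actions, while a simpler, hand-written monitor whose constraints have a known provenance sits above it and swaps in a conservative fallback when a proposal violates an explicit limit. The module interfaces and numbers are hypothetical.

def learned_driving_module(state):
    # Black-box proposal, e.g. from a deep network; hard to verify.
    return {"speed_mps": 3.5, "heading_deg": 40.0}

def safety_monitor(state, action):
    # Explicit, inspectable constraint: unlike the network's internals,
    # we know exactly where this rule came from, so it can be audited.
    return action["speed_mps"] <= state["max_safe_speed_mps"]

def fallback_controller(state):
    # Conservative rule-based behavior used when a proposal is rejected.
    return {"speed_mps": min(1.0, state["max_safe_speed_mps"]),
            "heading_deg": 0.0}

def act(state):
    proposal = learned_driving_module(state)
    if safety_monitor(state, proposal):
        return proposal
    return fallback_controller(state)  # the higher level steps in

print(act({"max_safe_speed_mps": 2.0}))  # learned proposal gets vetoed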

Nicholas Roy, who leads the Robust Robotics Group at MIT and describes himself as “somewhat of a rabble-rouser” due to his skepticism of some of the claims made about the power of deep learning, agrees with the ARL roboticists that deep-learning approaches often can't handle the kinds of challenges that the Army has to be prepared for. “The Army is always entering new environments, and the adversary is always going to be trying to change the environment so that the training process the robots went through simply won't match what they're seeing,” Roy says. “So the requirements of a deep network are to a large extent misaligned with the requirements of an Army mission, and that's a problem.”

Roy, who has worked on abstract reasoning for ground robots as part of the RCTA, emphasizes that deep learning is a useful technology when applied to problems with clear functional relationships, but when you start looking at abstract concepts, it's not clear whether deep learning is a viable approach. “I'm very interested in finding how neural networks and deep learning could be assembled in a way that supports higher-level reasoning,” Roy says. “I think it comes down to the notion of combining multiple low-level neural networks to express higher level concepts, and I do not believe that we understand how to do that yet.” Roy gives the example of using two separate neural networks, one to detect objects that are cars and the other to detect objects that are red. It's harder to combine those two networks into one larger network that detects red cars than it would be if you were using a symbolic reasoning system based on structured rules with logical relationships. “Lots of people are working on this, but I haven't seen a real success that drives abstract reasoning of this kind.”
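Roy's example is easy to state in code: with a symbolic system, composing the two detectors is a one-line logical rule, whereas merging the two underlying networks into a single "red car" network is the hard, unsolved part he describes. Both detectors below are trivial stubs used only to show the composition.

def detects_car(image_region):
    # Stand-in for neural network #1 (car detector).
    return image_region.get("shape") == "car"

def detects_red(image_region):
    # Stand-in for neural network #2 (red detector).
    return image_region.get("color") == "red"

def detects_red_car(image_region):
    # Symbolic reasoning: compose the detectors with an explicit AND.
    return detects_car(image_region) and detects_red(image_region)

print(detects_red_car({"shape": "car", "color": "red"}))   # -> True
print(detects_red_car({"shape": "car", "color": "blue"}))  # -> False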

For the foreseeable future, ARL is making sure that its autonomous systems are safe and robust by keeping humans around for both higher-level reasoning and occasional low-level advice. Humans might not be directly in the loop at all times, but the idea is that humans and robots are more effective when working together as a team. When the most recent phase of the Robotics Collaborative Technology Alliance program began in 2009, Stump says, “we'd already had many years of being in Iraq and Afghanistan, where robots were often used as tools. We've been trying to figure out what we can do to transition robots from tools to acting more as teammates within the squad.”

RoMan gets a little bit of help when a human supervisor points out a region of the branch where grasping might be most effective. The robot doesn't have any fundamental knowledge about what a tree branch actually is, and this lack of world knowledge (what we think of as common sense) is a fundamental problem with autonomous systems of all kinds. Having a human leverage our vast experience into a small amount of guidance can make RoMan's job much easier. And indeed, this time RoMan manages to successfully grasp the branch and noisily haul it across the room.

Turning a robot into a good teammate can be difficult, because it can be tricky to find the right amount of autonomy. Too little and it would take most or all of the focus of one human to manage one robot, which may be appropriate in special situations like explosive-ordnance disposal but is otherwise not efficient. Too much autonomy and you'd start to have issues with trust, safety, and explainability.

“I think the level that we're looking for here is for robots to operate on the level of working dogs,” explains Stump. “They understand exactly what we need them to do in limited circumstances, they have a small amount of flexibility and creativity if they are faced with novel circumstances, but we don't expect them to do creative problem-solving. And if they need help, they fall back on us.”

RoMan is not likely to find itself out in the field on a mission anytime soon, even as part of a team with humans. It's very much a research platform. But the software being developed for RoMan and other robots at ARL, called Adaptive Planner Parameter Learning (APPL), will likely be used first in autonomous driving, and later in more complex robotic systems that could include mobile manipulators like RoMan. APPL combines different machine-learning techniques (including inverse reinforcement learning and deep learning) arranged hierarchically underneath classical autonomous navigation systems. That allows high-level goals and constraints to be applied on top of lower-level programming. Humans can use teleoperated demonstrations, corrective interventions, and evaluative feedback to help robots adjust to new environments, while the robots can use unsupervised reinforcement learning to adjust their behavior parameters on the fly. The result is an autonomy system that can enjoy many of the benefits of machine learning, while also providing the kind of safety and explainability that the Army needs. With APPL, a learning-based system like RoMan can operate in predictable ways even under uncertainty, falling back on human tuning or human demonstration if it ends up in an environment that's too different from what it trained on.
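As a rough sketch of that structure, with every name, parameter, and threshold invented rather than taken from APPL itself: learned components propose parameters for a classical planner, and the system reverts to safe defaults (the point at which it would ask for human tuning or demonstration) when the current environment looks too unlike the training data.

DEFAULT_PARAMS = {"max_speed": 0.5, "obstacle_margin": 0.8}  # safe defaults

def environment_familiarity(observation, training_summary):
    # Crude out-of-distribution score: distance from the training mean.
    return abs(observation["clutter"] - training_summary["mean_clutter"])

def learned_parameter_model(observation):
    # Stand-in for the learned layer that tunes planner parameters.
    return {"max_speed": 1.2, "obstacle_margin": 0.4}

def classical_planner(params):
    # The classical navigation stack sits underneath and always runs.
    return "planning with max_speed=%.1f" % params["max_speed"]

def navigate(observation, training_summary, threshold=0.3):
    if environment_familiarity(observation, training_summary) > threshold:
        # Too unlike training: behave predictably and fall back on
        # defaults (or human tuning) instead of trusting the learned layer.
        return classical_planner(DEFAULT_PARAMS)
    return classical_planner(learned_parameter_model(observation))

print(navigate({"clutter": 0.9}, {"mean_clutter": 0.2}))   # falls back
print(navigate({"clutter": 0.25}, {"mean_clutter": 0.2}))  # learned params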

It's tempting to look at the rapid progress of commercial and industrial autonomous systems (autonomous cars being just one example) and wonder why the Army seems to be somewhat behind the state of the art. But as Stump finds himself having to explain to Army generals, when it comes to autonomous systems, “there are lots of hard problems, but industry's hard problems are different from the Army's hard problems.” The Army doesn't have the luxury of operating its robots in structured environments with lots of data, which is why ARL has put so much effort into APPL, and into maintaining a place for humans. Going forward, humans are likely to remain a key part of the autonomous framework that ARL is developing. “That's what we're trying to build with our robotics systems,” Stump says. “That's our bumper sticker: 'From tools to teammates.' ”

This article appears in the October 2021 print issue as “Deep Learning Goes to Boot Camp.”

Special Report: The Great AI Reckoning

READ NEXT:
7 Revealing Ways AIs Fail

Or see the full report for more articles on the future of AI.


#439753 DARPA SubT Finals: Meet the Teams

This is it! This week, we're at the DARPA Subterranean Challenge Finals in Louisville, KY, where more than two dozen Systems Track and Virtual Track teams will compete for millions of dollars in prize money and the right to say “we won a DARPA challenge,” which is of course priceless.

We've been following SubT for years, from the Tunnel Circuit to the Urban Circuit to the Cave (non-)Circuit. For a recent recap, have a look at this post-cave, pre-final article that includes an interview with SubT program manager Tim Chung. If you don't have time for that, the TL;DR is that this week we're looking at both a Virtual Track and a Systems Track with physical robots on a real course. The Systems Track teams spent Monday checking in at the Louisville Mega Cavern competition site, and we asked each team to tell us how they've been preparing, what they think will be most challenging, and what makes them unique.

Team CERBERUS

Country: USA, Switzerland, United Kingdom, Norway

Members: University of Nevada, Reno; ETH Zurich, Switzerland; University of California, Berkeley; Sierra Nevada Corporation; Flyability, Switzerland; Oxford Robotics Institute, United Kingdom; Norwegian University of Science and Technology (NTNU), Norway

Robots: TBA

Follow Team: Website, @CerberusSubt

Q&A: Team Lead Kostas Alexis

How have you been preparing for the SubT Final?

First of all, this year's preparation was strongly influenced by Covid-19, as our team spans multiple countries, namely the US, Switzerland, Norway, and the UK. Despite the challenges, we leveled up our weekly shake-out events and ran a two-month, team-wide integration and testing activity in Switzerland during July and August, with multiple tests in diverse underground settings, including several mines. Note that we bring a brand-new set of four ANYmal C robots and a new generation of collision-tolerant flying robots, so during this period we also built new hardware.

What do you think the biggest challenge of the SubT Final will be?

We are excited to see how the vastly large spaces available in the Mega Cavern will be combined with the very narrow cross-sections and vertical structures that DARPA promises. We think that terrain with steep slopes and other obstacles, complex 3D geometries, and dynamic obstacles will be the core challenges.

What is one way in which your team is unique, and why will that be an advantage during the competition?

Our team coined early on the idea of combining legged and flying robots. We have remained focused on this core vision and bring fully self-developed hardware for both our legged and flying systems. This is both our advantage and, in a way, our limitation, as we spend a lot of time on hardware development. We are excited about the potential we see developing, and we are optimistic that it will be demonstrated at the Final Event!

Team Coordinated Robotics

Country: USA

Members: California State University Channel Islands; Oke Onwuka; Sequoia Middle School

Robots: TBA

Q&A: Team Lead Kevin Knoedler

How have you been preparing for the SubT Final?

Coordinated Robotics has been preparing for the SubT Final with lots of testing on our team of robots. We have been running them inside, outside, day, night, and in all of the circumstances we can come up with. In Kentucky we have been busy updating all of the robots to the same standard and repairing bits of shipping damage before the SubT Final.

What do you think the biggest challenge of the SubT Final will be?

The biggest challenge for us will be pulling all of the robots together to work as a team and making sure that everything is communicating together. We did not have lab access until late July, so we had robots at individuals' homes but were generally only testing one robot at a time.

What is one way in which your team is unique, and why will that be an advantage during the competition?

Coordinated Robotics is unique in a couple of different ways. We are one of only two unfunded teams, so we take a lower-budget approach to solving lots of the issues, and that helps us find some creative solutions. We are also unique in that we will be bringing a lot of robots (23), so problems with individual robots can be tolerated as the rest of the team continues to search.

Team CoSTAR

Country: USA, South Korea, Sweden

Members: Jet Propulsion Laboratory; California Institute of Technology; Massachusetts Institute of Technology; KAIST, South Korea; Lulea University of Technology, Sweden

Robots: TBA

Follow Team: Website

Q&A: Caltech Team Lead Joel Burdick

How have you been preparing for the SubT Final?

Since May, the team has made four trips to a limestone cave near Lexington, Kentucky (they just finished a week-long “game” there yesterday). Since February, parts or all of the team have been testing two to three days a week in a section of the abandoned subway system in downtown Los Angeles.

What do you think the biggest challenge of the SubT Final will be?

That will be a tough one to answer in advance. The expected CoSTAR-specific challenges are of course the complexity of the test site that DARPA has prepared, fatigue of the team, and the usual last-minute hardware failures: we had to have an entirely new set of batteries for all of our communication nodes FedExed to us yesterday. More generally, we expect the other teams to be well prepared. Speaking only for myself, I think there are 4-5 teams that could easily win this competition.

What is one way in which your team is unique, and why will that be an advantage during the competition?

Previously, our team was unique in its Boston Dynamics legged mobility. We've heard that other teams may be using Spot quadrupeds as well, so that may no longer be a point of uniqueness. We shall see! More importantly, we believe our team is unique in the breadth of its participants (university team members from the U.S., Europe, and Asia). Kind of like the old British Empire: the sun never sets on the geographic expanse of Team CoSTAR.

Team CSIRO Data61

Country: Australia, USA

Members: Commonwealth Scientific and Industrial Research Organisation, Australia; Emesent, Australia; Georgia Institute of Technology

Robots: TBA

Follow Team: Website, Twitter

Q&A: SubT Principal Investigator Navinda Kottege

How have you been preparing for the SubT Final?

Test, test, test. We've been testing as often as we can, simulating the competition conditions as best we can. We're very fortunate to have an extensive site here at our CSIRO lab in Brisbane that has enabled us to construct quite varied tests for our full fleet of robots. We have also done a number of offsite tests as well.

After going through the initial phases, we have converged on a good combination of platforms for our fleet. Our workhorse platform from the Tunnel Circuit has been the BIA5 ATR tracked robot. We have recently added Boston Dynamics Spot quadrupeds to our fleet, and we are quite happy with their performance and the level of integration with our perception and navigation stack. We also have custom-designed Subterra Navi drones from Emesent. Our fleet consists of two of each of these three platform types. We have also designed and built a new “smart node” for communication with the Rajant nodes; these are dropped from the tracked robots and automatically deploy after a delay by extending ground plates and antennae. As described above, we have been doing extensive integration testing with the full system to shake out bugs and make improvements.

What do you think the biggest challenge of the SubT Final will be?

The biggest challenge is the unknown. It is always a learning process to discover how the robots respond to new classes of obstacle; responding to this on the fly in a new environment is extremely challenging. Given the format of two preliminary runs and one prize run, there is little to no margin for error compared to previous circuit events where there were multiple runs that contributed to the final score. Any significant damage to robots during the preliminary runs would be difficult to recover from to perform in the final run.

What is one way in which your team is unique, and why will that be an advantage during the competition?

Our fleet uses a common sensing, mapping, and navigation system across all robots, built around our Wildcat SLAM technology. This is what enables coordination between robots, and it provides the accuracy required to locate detected objects. It has also allowed us to easily integrate different robot platforms into our fleet. We believe this “homogeneous sensing on heterogeneous platforms” paradigm gives us a unique advantage, reducing the overall complexity of the development effort for the fleet and allowing us to scale the fleet as needed. Having excellent partners in Emesent and Georgia Tech, and having their full commitment and support, is also a strong advantage for us.

Team CTU-CRAS-NORLAB

Country: Czech Republic, Canada

Members: Czech Technical University, Czech Republic; Université Laval, Canada

Robots: TBA

Follow Team: Website, Twitter

Q&A: Team Lead Tomas Svoboda

How have you been preparing for the SubT Final?

We spent most of the time preparing new platforms as we made a significant technology update. We tested the locomotion and autonomy of the new platforms in Bull Rock Cave, one of the largest caves in Czechia. We also deployed the robots in an old underground fortress to examine the system in an urban-like underground environment. The very last weeks were, however, dedicated to integration tests and system tuning.

What do you think the biggest challenge of the SubT Final will be?

Hard to say, but regarding the expected environment, the vertical shafts might be the most challenging, since they are not easy to access for testing and tuning the system experimentally. They would also add communication challenges.

What is one way in which your team is unique, and why will that be an advantage during the competition?

Not sure about the other teams, but we plan to deploy all kinds of ground vehicles (tracked, wheeled, and legged platforms), accompanied by several drones. We hope the diversity of platform types will be beneficial for adapting to the possible diversity of terrains and underground challenges. We also hope our tuned communication system will provide access to the robots at a greater range than last time. Optimistically, we might keep all robots connected to the communication infrastructure built during the mission; the bandwidth is very limited, but it should be sufficient for reporting artifacts and for high-level switching of the robots' goals and autonomous behavior.

Team Explorer

Country: USA

Members: Carnegie Mellon University; Oregon State University

Robots: TBA

Follow Team: Website, Facebook

Q&A: Team Co-Lead Sebastian Scherer

How have you been preparing for the SubT Final?

Since we expect DARPA to have some surprises on the course for us, we have been practicing in a wide range of different courses around Pittsburgh including an abandoned hospital complex, a cave and limestone and coal mines. As the finals approached, we were practicing at these locations nearly daily, with debrief and debugging sessions afterward. This has helped us find the advantages of each of the platforms, ways of controlling them, and the different sensor modalities.

What do you think the biggest challenge of the SubT Final will be?

For our team, the biggest challenges are steep slopes for the ground robots, thin loose obstacles that can get sucked into the drones' propellers, and narrow passages.

What is one way in which your team is unique, and why will that be an advantage during the competition?

We have developed a heterogeneous team for SubT exploration. This gives us an advantage, since there is not a single platform that is optimal for all SubT environments: tunnels are optimal for roving robots, urban environments for walking robots, and caves for flying robots. Our ground robots and drones are custom-designed for navigation in rough terrain and tight spaces, so we can get to places not reachable by off-the-shelf platforms.

Team MARBLE

Country: USA

Members: University of Colorado, Boulder; University of Colorado, Denver; Scientific Systems Company, Inc.; University of California, Santa Cruz

Robots: TBA

Follow Team: Twitter

Q&A: Project Engineer Gene Rush

How have you been preparing for the SubT Final?

Our team has worked tirelessly over the past several months as we prepare for the SubT Final. We have invested most of our time and energy in real-world field deployments, which help us in two major ways. First, it allows us to repeatedly test the performance of our full autonomy stack, and second, it provides us the opportunity to emphasize Pit Crew and Human Supervisor training. Our PI, Sean Humbert, has always said “practice, practice, practice.” In the month leading up to the event, we stayed true to this advice by holding 10 deployments across a variety of environments, including parking garages, campus buildings at the University of Colorado Boulder, and the Edgar Experimental Mine.

What do you think the biggest challenge of the SubT Final will be?

I expect the most difficult challenge will be centered around autonomous high-level decision making. Of course, mobility challenges, including treacherous terrain, stairs, and drop-offs, will certainly test the physical capabilities of our mobile robots. However, the scale of the environment is so great, and time so limited, that rapidly identifying the areas that are likely to hold human survivors is vitally important and a very difficult open challenge. I expect most teams, ours included, will utilize the intuition of the Human Supervisor to make these decisions.

What is one way in which your team is unique, and why will that be an advantage during the competition?

Our team has pushed on advancing hands-off autonomy, so our robotic fleet can operate independently in the worst case scenario: a communication-denied environment. The lack of wireless communication is relatively prevalent in subterranean search and rescue missions, and therefore we expect DARPA will be stressing this part of the challenge in the SubT Final. Our autonomy solution is designed in such a way that it can operate autonomously both with and without communication back to the Human Supervisor. When we are in communication with our robotic teammates, the Human Supervisor has the ability to provide several high level commands to assist the robots in making better decisions.

Team Robotika

Country: Czech Republic, USA, Switzerland

Members: Robotika International, Czech Republic and United States; Robotika.cz, Czech Republic; Czech University of Life Sciences, Czech Republic; Centre for Field Robotics, Czech Republic; Cogito Team, Switzerland

Robots: Two wheeled robots

Follow Team: Website, Twitter

Q&A: Team Lead Martin Dlouhy

How have you been preparing for the SubT Final?

Our team participates in both the Systems and Virtual tracks. We used the virtual environment to develop and test our ideas and techniques, and once they were sufficiently validated in the virtual world, we transferred the results to the Systems track as well. Then, to validate this transfer, we visited a few underground spaces (mostly caves) with our physical robots to see how they perform in the real world.

What do you think the biggest challenge of the SubT Final will be?

Besides the usual challenges inherent to underground spaces (mud, moisture, fog, condensation), we also noticed the unusual configuration of the starting point, which is a sharp downhill slope. Our solution is designed to be careful about going down slopes that are too steep, so our concern is that, as things stand, the robots may hesitate to even get started. We are making some adjustments in the remaining time to account for this. Also, unlike the environments in all the previous rounds, the Mega Cavern features some really large open spaces. Our solution expects to detect obstacles somewhere in the vicinity of the robot at any given point, so the concern is that a large open space may confuse its navigation system. We are looking into handling that situation better as well.

What is one way in which your team is unique, and why will that be an advantage during the competition?

It appears that we are unique in bringing only two robots into the Finals. We brought more into the earlier rounds to test different platforms and ultimately picked the two we are fielding this time as best suited for the expected environment. A potential benefit for us is that supervising only two robots could be easier and perhaps more efficient than managing larger numbers.


#439748 This Week’s Awesome Tech Stories From ...

BIOTECH
A New Company With a Wild Mission: Bring Back the Woolly Mammoth
Carl Zimmer | The New York Times
“A team of scientists and entrepreneurs announced on Monday that they have started a new company to genetically resurrect the woolly mammoth. The company, named Colossal, aims to place thousands of these magnificent beasts back on the Siberian tundra, thousands of years after they went extinct.”

TECH
Alphabet’s Project Taara Laser Tech Beamed 700TB of Data Across Nearly 5km
Richard Lawler | The Verge
“Sort of like fiber optic cables without the cable, FSOC can create a 20Gbps+ broadband link from two points that have a clear line of sight, and Alphabet’s moonshot lab X has built up Project Taara to give it a shot. They started by setting up links in India a few years ago as well as a few pilots in Kenya, and today X revealed what it has achieved by using its wireless optical link to connect service across the Congo River from Brazzaville in the Republic of Congo and Kinshasa in the Democratic Republic of Congo.”

TRANSPORTATION
EV Startup Lucid’s First Car Can Travel 520 Miles on a Full Battery—Beating Tesla by 115 Miles
Tim Levin | Business Insider
“When Lucid Motors’ hotly anticipated first cars reach customers later this year, they’ll become the longest-range electric vehicles on the road. …The startup’s debut sedan, the Air Dream Edition R, has earned a range rating of 520 miles from the Environmental Protection Agency. It’s the longest range rating the agency has ever awarded.”

ENERGY
Self-Sustaining Solar House on Wheels Wants To Soak up the Sun
Doug Johnson | Ars Technica
“The vehicle has the aerodynamic tear-drop shape of other solar-powered vehicles and sports a series of solar panels on its roof. However, it also has additional roofing that slides up when stationary, making it easier to stand inside to cook or sleep. …To showcase its creation, Solar Team Eindhoven will begin to drive the vehicle 3,000 kilometers from Eindhoven to the southern tip of Spain this Sunday.”

SYNTHETIC BIOLOGY
Biology Starts to Get a Technological Makeover
Steve Lohr | The New York Times
“Proponents of synthetic biology say the field could reprogram biology to increase food production, fight disease, generate energy and purify water. The realization of that potential lies decades in the future, if at all. But it is no longer the stuff of pure science fiction because of advances in recent years in biology, computing, automation and artificial intelligence.”

TRANSPORTATION
Michelin’s Airless Passenger Car Tires Get Their First Public Outing
Loz Blain | New Atlas
“GM will begin offering [Michelin’s airless] Uptis [tires] as an option on certain models ‘as early as 2024,’ and the partnership is working with US state governments on regulatory approvals for street use, as well as with the federal government. At IAA Munich recently, the Uptis airless tire got its first public outing, in which ‘certain lucky members of the public’ had a chance to ride in a Mini Electric kitted out with a set.”

SCIENCE
Biologists Rethink the Logic Behind Cells’ Molecular Signals
Philip Ball | Quanta
“Biologists often try to understand how life works by making analogies to electronic circuits, but that comparison misses the unique qualities of cellular signaling systems. …[The] signaling systems of complex cells are nothing like simple electronic circuits. The logic governing their operation is riotously complex—but it has advantages.”

Image Credit: SpaceX / Unsplash


#439743 Video Friday: Preparing for the SubT ...

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – [Online Event]
IROS 2021 – September 27 – October 1, 2021 – [Online Event]
Robo Boston – October 1-2, 2021 – Boston, MA, USA
WearRAcon Europe 2021 – October 5-7, 2021 – [Online Event]
ROSCon 2021 – October 20-21, 2021 – [Online Event]
Silicon Valley Robot Block Party – October 23, 2021 – Oakland, CA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.
Team Explorer, the SubT Challenge entry from CMU and Oregon State University, is in the last stage of preparation for the competition this month inside the Mega Caverns cave complex in Louisville, Kentucky.
[ Explorer ]
Team CERBERUS is looking good for the SubT Final next week, too.

Autonomous subterranean exploration with the ANYmal C Robot inside the Hagerbach underground mine

[ ARL ]
I'm still as skeptical as I ever was about a big and almost certainly expensive two-armed robot that can do whatever you can program it to do (have fun with that) and seems to rely on an app store for functionality.

[ Unlimited Robotics ]
Project Mineral is using breakthroughs in artificial intelligence, sensors, and robotics to find ways to grow more food, more sustainably.
[ Mineral ]
Not having a torso or anything presumably makes this easier.

Next up, Digit limbo!
[ Hybrid Robotics ]
Paric completed layout of a 500-unit apartment complex utilizing the Dusty FieldPrinter solution. Autonomous layout on the plywood deck saved weeks' worth of schedule, allowing the panelized walls to be placed sooner.
[ Dusty Robotics ]
Spot performs inspection in the Kidd Creek Mine, enabling operators to keep their distance from hazards.
[ Boston Dynamics ]
Digit is engineered to be a multipurpose machine, meaning it needs to be able to perform a collection of tasks in practically any environment. We do this by first ensuring the robot is physically capable. Then we help the robot perceive its surroundings, understand its surroundings, and then reason out the best course of action to navigate its environment and accomplish its task. This is where software comes into play. This is early AI in action.
[ Agility Robotics ]
This work proposes a compact robotic limb, AugLimb, that can augment body functions and support daily activities. The proposed device can be mounted on the user's upper arm and transforms into a compact state without obstructing the wearer.
[ AugLimb ]
Ahold Delhaize and AIRLab need the help of academics who have knowledge of human-robot interactions, mobility, manipulation, programming, and sensors to accelerate the introduction of robotics in retail. In the AIRLab Stacking challenge, teams will work on algorithms that focus on smart retail applications, for example, automated product stacking.
[ PAL Robotics ]
Leica, not at all well known for making robots, is getting into the robotic reality capture business with a payload for Spot and a new drone.

Introducing BLK2FLY: Autonomous Flying Laser Scanner

[ Leica BLK ]
As much as I like Soft Robotics, I'm maybe not quite as optimistic as they are about the potential for robots to take over quite this much from humans in the near term.

[ Soft Robotics ]
Over the course of this video, the robot gets longer and longer and longer.

[ Transcend Robotics ]
This is a good challenge: attach a spool of electrical tape to your drone, which can unpredictably unspool itself and make sure it doesn't totally screw you up.

[ UZH ]
Two interesting short seminars from NCCR Robotics: one on autonomous racing drones, and one on “neophobic” mobile robots.

Dario Mantegazza: Neophobic Mobile Robots Avoid Potential Hazards

[ NCCR ]
This panel on Synergies between Automation and Robotics comes from ICRA 2021, and once you see the participant list, I bet you'll agree that it's worth a watch.

[ ICRA 2021 ]
CMU RI Seminars are back! This week we hear from Andrew E. Johnson, a Principal Robotics Systems Engineer in the Guidance and Control Section of the NASA Jet Propulsion Laboratory, on “The Search for Ancient Life on Mars Began with a Safe Landing.”

Prior Mars rover missions have all landed in flat and smooth regions, but for the Mars 2020 mission, which is seeking signs of ancient life, this was no longer acceptable. Terrain relief that is ideal for the science obviously poses significant risks for landing, so a new landing capability called Terrain Relative Navigation (TRN) was added to the mission. This talk will describe the scientific goals of the mission, the design of the Terrain Relative Navigation system, and the successful results from the landing on February 18th, 2021.

[ CMU RI Seminar ]
