Category Archives: Human Robots

Everything about Humanoid Robots and Androids

#439798 This Week’s Awesome Tech Stories From ...

ROBOTICS
How DeepMind Is Reinventing the Robot
Tom Chivers | IEEE Spectrum
“Having conquered Go and protein folding, the company turns to a really hard problem. …To get to the next level, researchers are trying to fuse AI and robotics to create an intelligence that can make decisions and control a physical body in the messy, unpredictable, and unforgiving real world.”

NANOTECH
Microscopic Metavehicles Are Pushed and Steered by Light
Ben Coxworth | New Atlas
“Although solar-powered devices are now fairly common, Swedish scientists have created something a little different. They’ve built tiny ‘metavehicles’ that are mechanically propelled and guided via waves of light. …[It’s] hoped that the technology may someday be utilized in applications such as moving micro-particles through solutions inside of or adjacent to cells.”

3D PRINTING
How an 11-Foot-Tall 3D Printer Is Helping to Create a Community
Debra Kamin | The New York Times
“When New Story broke ground on the village in 2019, it was called the world’s first community of 3D printed homes. Two years and a pandemic later, 200 homes are either under construction or have been completed, 10 of which were printed on site by Icon’s Vulcan II printer. Plans for roads, a soccer field, a school, a market and a library are in the works.”

ARTIFICIAL INTELLIGENCE
Why OpenAI’s Codex Won’t Replace Coders
Thomas Smith | IEEE Spectrum
“If you’re a software developer yourself—or your company has spent tons of money hiring them—you can breathe easy. Codex won’t replace human developers any time soon, though it may make them far more powerful, efficient, and focused.”

FUTURE
Humans Can’t Be the Sole Keepers of Scientific Knowledge
Iulia Georgescu | Wired
“It’s clear that we do not really know what we know, because nobody can read the entire literature even in their own narrow field (which includes, in addition to journal articles, PhD theses, lab notes, slides, white papers, technical notes, and reports). …To solve this problem we need to make science papers not only machine-readable but machine-understandable, by (re)writing them in a special type of programming language. In other words: Teach science to machines in the language they understand.”

SCIENCE FICTION
Dune Foresaw—and Influenced—Half a Century of Global Conflict
Andy Greenberg | Wired
“…reading Dune a half century later, when many of Herbert’s environmental and psychological ideas have either blended into the mainstream or gone out of style—and in the wake of the disastrous fall of the US-backed government in Afghanistan after a 20-year war—it’s hard not to be struck, instead, by the book’s focus on human conflict: an intricate, deeply detailed world of factions relentlessly vying for power and advantage by exploiting every tool available to them.”

SPACE
Space Policy Is Finally Moving Into the 21st Century
Tatyana Woodall | MIT Technology Review
“This week, the United Nations Institute for Disarmament Research held its annual Outer Space Security Conference in Geneva, Switzerland (participants had the option to attend virtually or in person). For two days, diplomats, researchers, and military officials from around the world met to discuss threats and challenges, arms control, and space security. Their conversations provided a window into what new space policies might look like.”

Image Credit: 光曦 刘 / Unsplash


#439794 Video Friday: Mini Pupper

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2021 – September 27-October 1, 2021 – [Online Event]
Robo Boston – October 1-2, 2021 – Boston, MA, USA
WearRAcon Europe 2021 – October 5-7, 2021 – [Online Event]
ROSCon 2021 – October 20-21, 2021 – [Online Event]

Let us know if you have suggestions for next week, and enjoy today's videos!
Mini Pupper is now on Kickstarter!

The basic kit is $250, which includes just the custom parts, so you'll need to add your own 3D printed parts, some of the electronics, and the battery. A complete Mini Pupper kit is $500, or get it fully assembled for an extra $60.
Everything should (with all the usual Kickstarter caveats in mind) ship in November, which is plenty of time to get it to me for the holidays (for any of my family reading this).
[ Mini Pupper ]
An inflatable robotic hand design gives amputees real-time tactile control and enables a wide range of daily activities, such as zipping a suitcase, shaking hands, and petting a cat. The smart hand is soft and elastic, weighs about half a pound, and costs a fraction of comparable prosthetics.
[ MIT ]
Among the first electronic mobile robots were the experimental machines of neuroscientist W. Grey Walter. Walter studied the brain's electrical activity at the Burden Neurological Institute (BNI) near Bristol, England. His battery-powered robots were models to test his theory that a minimum number of brain cells can control complex behavior and choice.
[ NMAH ]
Autonomous Micro Aerial Vehicles (MAVs) have the potential to be employed for surveillance and monitoring tasks. By perching and staring at one or multiple locations, aerial robots can save energy while concurrently increasing their overall mission time without actively flying. In this paper, we address the estimation, planning, and control problems for autonomous perching on inclined surfaces with small quadrotors using visual and inertial sensing.
[ ARPL NYU ]
Human environments are filled with large open spaces that are separated by structures like walls, facades, glass windows, etc. Most often, these structures are largely passive, offering little to no interactivity. In this paper, we present Duco, a large-scale electronics fabrication robot that enables room-scale & building-scale circuitry to add interactivity to vertical everyday surfaces. Duco negates the need for any human intervention by leveraging a hanging robotic system that automatically sketches multi-layered circuitry to enable novel large-scale interfaces.

The key idea behind Duco is that it achieves single-layer or multi-layer circuit fabrication on 2D surfaces as well as 2D cutouts that can be assembled into 3D objects by loading various functional inks (e.g., conductive, dielectric, or cleaning) to the wall-hanging drawing robot, as well as employing an optional laser cutting head as a cutting tool.
[ Duco ]
Thanks Sai!
When you can't have robots fight each other in person because pandemic, you have to get creative.

[ ROBO-ONE ]
Baidu researchers have proposed a novel reinforcement-learning-based evolutionary foot trajectory generator that can continually optimize the shape of the output trajectory for a quadrupedal robot. The approach can solve a range of challenging tasks in simulation by learning from scratch, including walking on a balance beam, crawling through a cave, and climbing up and down slopes. To further verify its effectiveness, the researchers deployed the controller learned in simulation on a 12-DoF quadrupedal robot, which successfully traversed challenging scenarios with efficient gaits.
[ Paper ]
This is neat: a robot with just one depth camera can poke around a little bit where it can't see, and then use those contacts to give it a better idea of what's in front of it.

[ CLASP ]
Here's a robotics problem: objects that look very similar but aren't! How can you efficiently tell the difference between objects that look almost the same, and how do you know when you need to make that determination?

[ Paper ]
Hyundai Motor Group has introduced its first project with Boston Dynamics. Meet the new 'Factory Safety Service Robot,' based on Boston Dynamics' quadruped Spot and designed to support industrial site safety.
[ Boston Dynamics ]
I don't necessarily know how much credit to give DARPA for making this happen, but even small drones make constrained obstacle avoidance look so easy now.

[ ARL ]
Huh, maybe all in-home robots should have spiky wheels and articulated designs, since this seems very effective.

[ Transcend Robotics ]
Robotiq, who makes the grippers that everybody uses for everything, now has a screw driving solution.

[ Robotiq ]
Kodiak's latest autonomous truck design is interesting because of how they've structured their sensors: almost everything seems to be in two chonky pods that take the place of the wing mirrors.

[ Kodiak ]
Thanks Kylee!
An ICRA 2021 plenary talk from Robert Wood, on Soft Robotics for Delicate and Dexterous Manipulation.

[ ICRA 2021 ]
This week's Lockheed Martin Robotics Seminar features Henrik Christensen on “Deploying autonomous vehicles for micro-mobility on a university campus.”

[ UMD ]


#439787 An Inconvenient Truth About AI

We are well into the third wave of major investment in artificial intelligence. So it's a fine time to take a historical perspective on the current success of AI. In the 1960s, the early AI researchers often breathlessly predicted that human-level intelligent machines were only 10 years away. That form of AI was based on logical reasoning with symbols, and was carried out with what today seem like ludicrously slow digital computers. Those same researchers considered and rejected neural networks.

This article is part of our special report on AI, “The Great AI Reckoning.”

In the 1980s, AI's second age was based on two technologies: rule-based expert systems—a more heuristic form of symbol-based logical reasoning—and a resurgence in neural networks triggered by the emergence of new training algorithms. Again, there were breathless predictions about the end of human dominance in intelligence.

The third and current age of AI arose during the early 2000s with new symbolic-reasoning systems based on algorithms capable of solving a class of problems called 3SAT, and with another advance called simultaneous localization and mapping. SLAM is a technique for building maps incrementally as a robot moves around in the world.
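
SLAM is easy to describe and hard to do well. As a purely illustrative sketch (not any production system), here is the incremental map-building loop at its heart; real SLAM systems use probabilistic filters or graph optimization rather than the naive averaging shown here, and all numbers below are invented.

```python
# A minimal sketch of the SLAM idea: a robot dead-reckons its pose from
# odometry and incrementally builds a landmark map from range-bearing
# observations, refining each landmark by averaging repeated sightings.
import math

class TinySLAM:
    def __init__(self):
        self.x, self.y, self.theta = 0.0, 0.0, 0.0  # estimated pose
        self.map = {}     # landmark_id -> (x, y) estimate
        self.counts = {}  # landmark_id -> number of sightings

    def predict(self, distance, turn):
        """Dead-reckon the pose from odometry (motion model)."""
        self.theta += turn
        self.x += distance * math.cos(self.theta)
        self.y += distance * math.sin(self.theta)

    def update(self, landmark_id, rng, bearing):
        """Fold a range-bearing observation into the map."""
        lx = self.x + rng * math.cos(self.theta + bearing)
        ly = self.y + rng * math.sin(self.theta + bearing)
        if landmark_id not in self.map:
            self.map[landmark_id] = (lx, ly)
            self.counts[landmark_id] = 1
        else:  # running average: repeated sightings shrink the error
            n = self.counts[landmark_id] + 1
            ox, oy = self.map[landmark_id]
            self.map[landmark_id] = (ox + (lx - ox) / n, oy + (ly - oy) / n)
            self.counts[landmark_id] = n

slam = TinySLAM()
slam.predict(1.0, 0.0)         # move forward 1 m
slam.update("tree", 2.0, 0.5)  # see a landmark 2 m away, 0.5 rad left
slam.predict(1.0, 0.1)
slam.update("tree", 1.2, 0.8)  # see it again; the estimate is refined
print(slam.map["tree"])
```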

In the early 2010s, this wave gathered powerful new momentum with the rise of neural networks learning from massive data sets. It soon turned into a tsunami of promise, hype, and profitable applications.

Regardless of what you might think about AI, the reality is that just about every successful deployment has either one of two expedients: It has a person somewhere in the loop, or the cost of failure, should the system blunder, is very low. In 2002, iRobot, a company that I cofounded, introduced the first mass-market autonomous home-cleaning robot, the Roomba, at a price that severely constricted how much AI we could endow it with. The limited AI wasn't a problem, though. Our worst failure scenarios had the Roomba missing a patch of floor and failing to pick up a dustball.

That same year we started deploying the first of thousands of robots in Afghanistan and then Iraq to be used to help troops disable improvised explosive devices. Failures there could kill someone, so there was always a human in the loop giving supervisory commands to the AI systems on the robot.

These days AI systems autonomously decide what advertisements to show us on our Web pages. Stupidly chosen ads are not a big deal; in fact they are plentiful. Likewise search engines, also powered by AI, show us a list of choices so that we can skip over their mistakes with just a glance. On dating sites, AI systems choose who we see, but fortunately those sites are not arranging our marriages without us having a say in it.

So far the only self-driving systems deployed on production automobiles, no matter what the marketing people may say, are all Level 2. These systems require a human driver to keep their hands on the wheel and to stay attentive at all times so that they can take over immediately if the system is making a mistake. And there have already been fatal consequences when people were not paying attention.

Just about every successful deployment of AI has either one of two expedients: It has a person somewhere in the loop, or the cost of failure, should the system blunder, is very low.

These haven't been the only terrible failures of AI systems when no person was in the loop. For example, people have been wrongly arrested based on face-recognition technology that works poorly on racial minorities, making mistakes that no attentive human would make.

Sometimes we are in the loop even when the consequences of failure aren't dire. AI systems power the speech and language understanding of our smart speakers and the entertainment and navigation systems in our cars. We, the consumers, soon adapt our language to each such AI agent, quickly learning what they can and can't understand, in much the same way as we might with our children and elderly parents. The AI agents are cleverly designed to give us just enough feedback on what they've heard us say without getting too tedious, while letting us know about anything important that may need to be corrected. Here, we, the users, are the people in the loop. The ghost in the machine, if you will.

Ask not what your AI system can do for you, but instead what it has tricked you into doing for it.


This article appears in the October 2021 print issue as “A Human in the Loop.”

Special Report: The Great AI Reckoning

READ NEXT:
How Deep Learning Works

Or see the full report for more articles on the future of AI.


#439783 This Google-Funded Project Is Tracking ...

It’s crunch time on climate change. The IPCC’s latest report told the world just how bad it is, and…it’s bad. Companies, NGOs, and governments are scrambling for fixes, both short-term and long-term, from banning the sale of combustion-engine vehicles to pouring money into hydrogen to building direct air capture plants. And one initiative, launched last week, is taking an “if you can name it, you can tame it” approach by creating an independent database that measures and tracks emissions all over the world.

Climate TRACE, which stands for tracking real-time atmospheric carbon emissions, is a collaboration between nonprofits, tech companies, and universities, including CarbonPlan, Earthrise Alliance, Johns Hopkins Applied Physics Laboratory, former US Vice President Al Gore, and others. The organization started thanks to a grant from Google, which funded an effort to measure power plant emissions using satellites. A team of fellows from Google helped build algorithms to monitor the power plants (the Google.org Fellowship was created in 2019 to let Google employees do pro bono technical work for grant recipients).

Climate TRACE uses data from satellites and other remote sensing technologies to “see” emissions. Artificial intelligence algorithms combine this data with verifiable emissions measurements to produce estimates of the total emissions coming from various sources.
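
The article doesn’t spell out the algorithms, but the overall shape of such a pipeline can be sketched. The following is a hypothetical toy version, not Climate TRACE’s actual code: fit a model mapping remote-sensing features to verified emissions, then apply it to sources observed only from space. Every feature and number here is invented.

```python
# Hypothetical sketch: learn a mapping from satellite-derived features
# to verified emissions, then estimate emissions for unmonitored sources.
import numpy as np

# Satellite-derived features for monitored power plants, e.g.
# [plume intensity, thermal signature, capacity-factor proxy] (invented)
features = np.array([
    [0.8, 0.9, 0.7],
    [0.2, 0.3, 0.4],
    [0.6, 0.5, 0.9],
    [0.1, 0.2, 0.1],
])
# Verified emissions for those same plants (megatons CO2/yr, invented)
verified = np.array([8.1, 2.9, 6.8, 1.0])

# Fit a linear model by least squares (real systems use richer models)
X = np.hstack([features, np.ones((len(features), 1))])  # add intercept
coef, *_ = np.linalg.lstsq(X, verified, rcond=None)

# Estimate emissions for an unmonitored plant seen only from space
new_plant = np.array([0.5, 0.6, 0.5, 1.0])  # trailing 1.0 = intercept
print(f"estimated emissions: {new_plant @ coef:.2f} Mt CO2/yr")
```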

These sources are divided into ten sectors—like power, manufacturing, transportation, and agriculture—each with multiple subsectors (for example, two subsectors of agriculture are rice cultivation and manure management). The total carbon emitted from January 2015 to December 2020, by the project’s estimation, was 303.96 billion tons. The biggest offender? Electricity generation. It’s no wonder, then, that states, companies, and countries are rushing to make (occasionally unrealistic) carbon-neutral pledges, and that the renewable energy industry is booming.

The founders of the initiative hope that, by increasing transparency, the database will increase accountability, thereby spurring action. Younger consumers care about climate change, and are likely to push companies and brands to do something about it.

The BBC reported on a recent survey led by the UK’s Bath University in which almost 60 percent of respondents said they were “very worried” or “extremely worried” about climate change, while more than 45 percent said feelings about the climate affected their daily lives. The survey, which collected responses from 10,000 people aged 16 to 25, found that concern about climate change was highest among young people in the global south, while in the northern hemisphere those most worried were in Portugal, which has grappled with severe wildfires. Many respondents, independent of location, reportedly feel that “humanity is doomed.”

Once this demographic reaches working age, they’ll be able to throw their weight around, and it seems likely they’ll do so in a way that puts the planet and its future at center stage. For all its sanctimoniousness, “naming and shaming” of emitters not doing their part may end up being both necessary and helpful.

Until now, Climate TRACE’s website points out, emissions inventories have been largely self-reported (I mean, what’s even the point?), and they’ve used outdated information and opaque measurement methods. Besides being independent, which is huge in itself, TRACE is using 59 trillion bytes of data from more than 300 satellites, more than 11,100 sensors, and other sources of emissions information.

“We’ve established a shared, open monitoring system capable of detecting essentially all forms of humanity’s greenhouse gas emissions,” said Gavin McCormick, executive director of coalition convening member WattTime. “This is a transformative step forward that puts timely information at the fingertips of all those who seek to drive significant emissions reductions on our path to net zero.”

Given the scale of the project, the parties involved, and how quickly it has all come together—the grant from Google was in May 2019—it seems Climate TRACE is well-positioned to make a difference.

Image Credit: NASA


#439773 How the U.S. Army Is Turning Robots Into ...

This article is part of our special report on AI, “The Great AI Reckoning.”

“I should probably not be standing this close,” I think to myself, as the robot slowly approaches a large tree branch on the floor in front of me. It's not the size of the branch that makes me nervous—it's that the robot is operating autonomously, and that while I know what it's supposed to do, I'm not entirely sure what it will do. If everything works the way the roboticists at the U.S. Army Research Laboratory (ARL) in Adelphi, Md., expect, the robot will identify the branch, grasp it, and drag it out of the way. These folks know what they're doing, but I've spent enough time around robots that I take a small step backwards anyway.

The robot, named RoMan, for Robotic Manipulator, is about the size of a large lawn mower, with a tracked base that helps it handle most kinds of terrain. At the front, it has a squat torso equipped with cameras and depth sensors, as well as a pair of arms that were harvested from a prototype disaster-response robot originally developed at NASA's Jet Propulsion Laboratory for a DARPA robotics competition. RoMan's job today is roadway clearing, a multistep task that ARL wants the robot to complete as autonomously as possible. Instead of instructing the robot to grasp specific objects in specific ways and move them to specific places, the operators tell RoMan to “go clear a path.” It's then up to the robot to make all the decisions necessary to achieve that objective.

The ability to make decisions autonomously is not just what makes robots useful, it's what makes robots robots. We value robots for their ability to sense what's going on around them, make decisions based on that information, and then take useful actions without our input. In the past, robotic decision making followed highly structured rules—if you sense this, then do that. In structured environments like factories, this works well enough. But in chaotic, unfamiliar, or poorly defined settings, reliance on rules makes robots notoriously bad at dealing with anything that could not be precisely predicted and planned for in advance.

RoMan, along with many other robots including home vacuums, drones, and autonomous cars, handles the challenges of semistructured environments through artificial neural networks—a computing approach that loosely mimics the structure of neurons in biological brains. About a decade ago, artificial neural networks began to be applied to a wide variety of semistructured data that had previously been very difficult for computers running rules-based programming (generally referred to as symbolic reasoning) to interpret. Rather than recognizing specific data structures, an artificial neural network is able to recognize data patterns, identifying novel data that are similar (but not identical) to data that the network has encountered before. Indeed, part of the appeal of artificial neural networks is that they are trained by example, by letting the network ingest annotated data and learn its own system of pattern recognition. For neural networks with multiple layers of abstraction, this technique is called deep learning.
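
To make “trained by example” concrete, here is a toy network, in no way ARL's software, that learns XOR, a pattern no single linear rule captures, purely from labeled examples and gradient descent:

```python
# Toy illustration of training by example: a two-layer network learns
# the XOR pattern from four labeled examples via backpropagation.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR labels

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    p = sigmoid(h @ W2 + b2)
    grad_out = p - y                         # cross-entropy gradient
    grad_h = (grad_out @ W2.T) * (1 - h**2)  # backprop through tanh
    W2 -= 0.1 * h.T @ grad_out; b2 -= 0.1 * grad_out.sum(0)
    W1 -= 0.1 * X.T @ grad_h;   b1 -= 0.1 * grad_h.sum(0)

print(p.round(2).ravel())  # approaches [0, 1, 1, 0]
```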

Even though humans are typically involved in the training process, and even though artificial neural networks were inspired by the neural networks in human brains, the kind of pattern recognition a deep learning system does is fundamentally different from the way humans see the world. It's often nearly impossible to understand the relationship between the data input into the system and the interpretation of the data that the system outputs. And that difference—the “black box” opacity of deep learning—poses a potential problem for robots like RoMan and for the Army Research Lab.

In chaotic, unfamiliar, or poorly defined settings, reliance on rules makes robots notoriously bad at dealing with anything that could not be precisely predicted and planned for in advance.

This opacity means that robots that rely on deep learning have to be used carefully. A deep-learning system is good at recognizing patterns, but lacks the world understanding that a human typically uses to make decisions, which is why such systems do best when their applications are well defined and narrow in scope. “When you have well-structured inputs and outputs, and you can encapsulate your problem in that kind of relationship, I think deep learning does very well,” says Tom Howard, who directs the University of Rochester's Robotics and Artificial Intelligence Laboratory and has developed natural-language interaction algorithms for RoMan and other ground robots. “The question when programming an intelligent robot is, at what practical size do those deep-learning building blocks exist?” Howard explains that when you apply deep learning to higher-level problems, the number of possible inputs becomes very large, and solving problems at that scale can be challenging. And the potential consequences of unexpected or unexplainable behavior are much more significant when that behavior is manifested through a 170-kilogram two-armed military robot.

After a couple of minutes, RoMan hasn't moved—it's still sitting there, pondering the tree branch, arms poised like a praying mantis. For the last 10 years, the Army Research Lab's Robotics Collaborative Technology Alliance (RCTA) has been working with roboticists from Carnegie Mellon University, Florida State University, General Dynamics Land Systems, JPL, MIT, QinetiQ North America, University of Central Florida, the University of Pennsylvania, and other top research institutions to develop robot autonomy for use in future ground-combat vehicles. RoMan is one part of that process.

The “go clear a path” task that RoMan is slowly thinking through is difficult for a robot because the task is so abstract. RoMan needs to identify objects that might be blocking the path, reason about the physical properties of those objects, figure out how to grasp them and what kind of manipulation technique might be best to apply (like pushing, pulling, or lifting), and then make it happen. That's a lot of steps and a lot of unknowns for a robot with a limited understanding of the world.

This limited understanding is where the ARL robots begin to differ from other robots that rely on deep learning, says Ethan Stump, chief scientist of the AI for Maneuver and Mobility program at ARL. “The Army can be called upon to operate basically anywhere in the world. We do not have a mechanism for collecting data in all the different domains in which we might be operating. We may be deployed to some unknown forest on the other side of the world, but we'll be expected to perform just as well as we would in our own backyard,” he says. Most deep-learning systems function reliably only within the domains and environments in which they've been trained. Even if the domain is something like “every drivable road in San Francisco,” the robot will do fine, because that's a data set that has already been collected. But, Stump says, that's not an option for the military. If an Army deep-learning system doesn't perform well, they can't simply solve the problem by collecting more data.

ARL's robots also need to have a broad awareness of what they're doing. “In a standard operations order for a mission, you have goals, constraints, a paragraph on the commander's intent—basically a narrative of the purpose of the mission—which provides contextual info that humans can interpret and gives them the structure for when they need to make decisions and when they need to improvise,” Stump explains. In other words, RoMan may need to clear a path quickly, or it may need to clear a path quietly, depending on the mission's broader objectives. That's a big ask for even the most advanced robot. “I can't think of a deep-learning approach that can deal with this kind of information,” Stump says.

Robots at the Army Research Lab test autonomous navigation techniques in rough terrain [top, middle] with the goal of being able to keep up with their human teammates. ARL is also developing robots with manipulation capabilities [bottom] that can interact with objects so that humans don't have to. Photos: Evan Ackerman

While I watch, RoMan is reset for a second try at branch removal. ARL's approach to autonomy is modular, where deep learning is combined with other techniques, and the robot is helping ARL figure out which tasks are appropriate for which techniques. At the moment, RoMan is testing two different ways of identifying objects from 3D sensor data: UPenn's approach is deep-learning-based, while Carnegie Mellon is using a method called perception through search, which relies on a more traditional database of 3D models. Perception through search works only if you know exactly which objects you're looking for in advance, but training is much faster since you need only a single model per object. It can also be more accurate when perception of the object is difficult—if the object is partially hidden or upside-down, for example. ARL is testing these strategies to determine which is the most versatile and effective, letting them run simultaneously and compete against each other.
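
To make the contrast concrete, here is a bare-bones sketch of the search-against-a-database idea. It is a simplification invented for illustration, not Carnegie Mellon's actual method; among other things, it handles only translation, not rotation or clutter.

```python
# Sketch of "perception through search": match an observed point cloud
# against a small database of known 3D models, one model per object.
import numpy as np

def match_score(observed, template):
    """Mean distance from each observed point to its nearest template
    point; partial views still score well, because only the observed
    points need an explanation."""
    diffs = observed[:, None, :] - template[None, :, :]
    nearest = np.linalg.norm(diffs, axis=2).min(axis=1)
    return nearest.mean()

def recognize(observed, database):
    observed = observed - observed.mean(axis=0)  # crude centering only
    best, best_score = None, np.inf
    for name, model in database.items():
        centered = model - model.mean(axis=0)
        score = match_score(observed, centered)
        if score < best_score:
            best, best_score = name, score
    return best, best_score

# One known model per object, as in the article (shapes invented)
database = {
    "branch": np.array([[x, 0.0, 0.0] for x in np.linspace(0, 1, 20)]),
    "rock":   np.random.default_rng(1).normal(scale=0.1, size=(20, 3)),
}
# A partial, noisy view of a branch (only half of it is visible)
view = np.array([[x, 0.0, 0.01] for x in np.linspace(0, 0.5, 10)])
print(recognize(view, database))  # -> ('branch', small score)
```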

Perception is one of the things that deep learning tends to excel at. “The computer vision community has made crazy progress using deep learning for this stuff,” says Maggie Wigness, a computer scientist at ARL. “We've had good success with some of these models that were trained in one environment generalizing to a new environment, and we intend to keep using deep learning for these sorts of tasks, because it's the state of the art.”

ARL's modular approach might combine several techniques in ways that leverage their particular strengths. For example, a perception system that uses deep-learning-based vision to classify terrain could work alongside an autonomous driving system based on an approach called inverse reinforcement learning, where the model can rapidly be created or refined by observations from human soldiers. Traditional reinforcement learning optimizes a solution based on established reward functions, and is often applied when you're not necessarily sure what optimal behavior looks like. This is less of a concern for the Army, which can generally assume that well-trained humans will be nearby to show a robot the right way to do things. “When we deploy these robots, things can change very quickly,” Wigness says. “So we wanted a technique where we could have a soldier intervene, and with just a few examples from a user in the field, we can update the system if we need a new behavior.” A deep-learning technique would require “a lot more data and time,” she says.
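
The core update in this family of methods can be sketched in a few lines. This is an apprenticeship-learning-style illustration with invented features and numbers, not ARL's algorithm: the reward weights are nudged toward whatever the human demonstrations exhibit more of than the robot's own behavior does.

```python
# Schematic inverse-reinforcement-learning update: move the reward
# weights so human-demonstrated trajectories score higher than the
# trajectories the robot currently prefers.
import numpy as np

def features(trajectory):
    """Hand-picked trajectory features, e.g. [smoothness, clearance,
    speed], averaged over the path; invented for this example."""
    return np.mean(trajectory, axis=0)

def irl_step(weights, demo_trajs, robot_trajs, lr=0.1):
    # Gradient of a linear reward: expert feature expectations minus
    # the robot's, as in apprenticeship-learning-style IRL
    mu_expert = np.mean([features(t) for t in demo_trajs], axis=0)
    mu_robot = np.mean([features(t) for t in robot_trajs], axis=0)
    return weights + lr * (mu_expert - mu_robot)

rng = np.random.default_rng(0)
w = np.zeros(3)
demos = [rng.normal([0.9, 0.8, 0.3], 0.05, size=(10, 3))]     # human
rollouts = [rng.normal([0.4, 0.3, 0.9], 0.05, size=(10, 3))]  # robot
for _ in range(20):
    w = irl_step(w, demos, rollouts)
print(w.round(2))  # reward now favors smooth, high-clearance behavior
```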

It's not just data-sparse problems and fast adaptation that deep learning struggles with. There are also questions of robustness, explainability, and safety. “These questions aren't unique to the military,” says Stump, “but it's especially important when we're talking about systems that may incorporate lethality.” To be clear, ARL is not currently working on lethal autonomous weapons systems, but the lab is helping to lay the groundwork for autonomous systems in the U.S. military more broadly, which means considering ways in which such systems may be used in the future.

The requirements of a deep network are to a large extent misaligned with the requirements of an Army mission, and that's a problem.

Safety is an obvious priority, and yet there isn't a clear way of making a deep-learning system verifiably safe, according to Stump. “Doing deep learning with safety constraints is a major research effort. It's hard to add those constraints into the system, because you don't know where the constraints already in the system came from. So when the mission changes, or the context changes, it's hard to deal with that. It's not even a data question; it's an architecture question.” ARL's modular architecture, whether it's a perception module that uses deep learning or an autonomous driving module that uses inverse reinforcement learning or something else, can form parts of a broader autonomous system that incorporates the kinds of safety and adaptability that the military requires. Other modules in the system can operate at a higher level, using different techniques that are more verifiable or explainable and that can step in to protect the overall system from adverse unpredictable behaviors. “If other information comes in and changes what we need to do, there's a hierarchy there,” Stump says. “It all happens in a rational way.”

Nicholas Roy, who leads the Robust Robotics Group at MIT and describes himself as “somewhat of a rabble-rouser” due to his skepticism of some of the claims made about the power of deep learning, agrees with the ARL roboticists that deep-learning approaches often can't handle the kinds of challenges that the Army has to be prepared for. “The Army is always entering new environments, and the adversary is always going to be trying to change the environment so that the training process the robots went through simply won't match what they're seeing,” Roy says. “So the requirements of a deep network are to a large extent misaligned with the requirements of an Army mission, and that's a problem.”

Roy, who has worked on abstract reasoning for ground robots as part of the RCTA, emphasizes that deep learning is a useful technology when applied to problems with clear functional relationships, but when you start looking at abstract concepts, it's not clear whether deep learning is a viable approach. “I'm very interested in finding how neural networks and deep learning could be assembled in a way that supports higher-level reasoning,” Roy says. “I think it comes down to the notion of combining multiple low-level neural networks to express higher level concepts, and I do not believe that we understand how to do that yet.” Roy gives the example of using two separate neural networks, one to detect objects that are cars and the other to detect objects that are red. It's harder to combine those two networks into one larger network that detects red cars than it would be if you were using a symbolic reasoning system based on structured rules with logical relationships. “Lots of people are working on this, but I haven't seen a real success that drives abstract reasoning of this kind.”

For the foreseeable future, ARL is making sure that its autonomous systems are safe and robust by keeping humans around for both higher-level reasoning and occasional low-level advice. Humans might not be directly in the loop at all times, but the idea is that humans and robots are more effective when working together as a team. When the most recent phase of the Robotics Collaborative Technology Alliance program began in 2009, Stump says, “we'd already had many years of being in Iraq and Afghanistan, where robots were often used as tools. We've been trying to figure out what we can do to transition robots from tools to acting more as teammates within the squad.”

RoMan gets a little bit of help when a human supervisor points out a region of the branch where grasping might be most effective. The robot doesn't have any fundamental knowledge about what a tree branch actually is, and this lack of world knowledge (what we think of as common sense) is a fundamental problem with autonomous systems of all kinds. Having a human leverage our vast experience into a small amount of guidance can make RoMan's job much easier. And indeed, this time RoMan manages to successfully grasp the branch and noisily haul it across the room.

Turning a robot into a good teammate can be difficult, because it can be tricky to find the right amount of autonomy. Too little and it would take most or all of the focus of one human to manage one robot, which may be appropriate in special situations like explosive-ordnance disposal but is otherwise not efficient. Too much autonomy and you'd start to have issues with trust, safety, and explainability.

“I think the level that we're looking for here is for robots to operate on the level of working dogs,” explains Stump. “They understand exactly what we need them to do in limited circumstances, they have a small amount of flexibility and creativity if they are faced with novel circumstances, but we don't expect them to do creative problem-solving. And if they need help, they fall back on us.”

RoMan is not likely to find itself out in the field on a mission anytime soon, even as part of a team with humans. It's very much a research platform. But the software being developed for RoMan and other robots at ARL, called Adaptive Planner Parameter Learning (APPL), will likely be used first in autonomous driving, and later in more complex robotic systems that could include mobile manipulators like RoMan. APPL combines different machine-learning techniques (including inverse reinforcement learning and deep learning) arranged hierarchically underneath classical autonomous navigation systems. That allows high-level goals and constraints to be applied on top of lower-level programming. Humans can use teleoperated demonstrations, corrective interventions, and evaluative feedback to help robots adjust to new environments, while the robots can use unsupervised reinforcement learning to adjust their behavior parameters on the fly. The result is an autonomy system that can enjoy many of the benefits of machine learning, while also providing the kind of safety and explainability that the Army needs. With APPL, a learning-based system like RoMan can operate in predictable ways even under uncertainty, falling back on human tuning or human demonstration if it ends up in an environment that's too different from what it trained on.
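
The article's description suggests a general pattern that can be sketched as follows. This sketch is speculative, with every name and threshold invented rather than taken from APPL: a learned model proposes planner parameters from human demonstrations, and the system falls back to safe, human-tuned defaults when the current context is too far from anything it has seen.

```python
# Invented sketch of learning planner parameters from demonstrations,
# with a fallback to human-tuned defaults in novel contexts.
import numpy as np

class ParameterLearner:
    def __init__(self, human_defaults, max_novelty=1.0):
        self.human_defaults = human_defaults  # safe, hand-tuned values
        self.contexts, self.params = [], []   # from human demonstrations
        self.max_novelty = max_novelty

    def add_demonstration(self, context, params):
        """Record planner parameters a human used in a given context."""
        self.contexts.append(np.asarray(context))
        self.params.append(np.asarray(params))

    def suggest(self, context):
        context = np.asarray(context)
        if not self.contexts:
            return self.human_defaults, "fallback"
        dists = [np.linalg.norm(context - c) for c in self.contexts]
        if min(dists) > self.max_novelty:  # too unfamiliar: play it safe
            return self.human_defaults, "fallback"
        return self.params[int(np.argmin(dists))], "learned"

# Context = [terrain roughness, clutter]; params = [max speed, goal bias]
learner = ParameterLearner(human_defaults=np.array([0.3, 0.5]))
learner.add_demonstration([0.2, 0.1], [1.5, 0.7])  # open field: go fast
learner.add_demonstration([0.8, 0.9], [0.4, 0.9])  # dense woods: careful
print(learner.suggest([0.25, 0.15]))  # near a demo -> learned params
print(learner.suggest([5.0, 5.0]))    # novel -> human defaults
```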

It's tempting to look at the rapid progress of commercial and industrial autonomous systems (autonomous cars being just one example) and wonder why the Army seems to be somewhat behind the state of the art. But as Stump finds himself having to explain to Army generals, when it comes to autonomous systems, “there are lots of hard problems, but industry's hard problems are different from the Army's hard problems.” The Army doesn't have the luxury of operating its robots in structured environments with lots of data, which is why ARL has put so much effort into APPL, and into maintaining a place for humans. Going forward, humans are likely to remain a key part of the autonomous framework that ARL is developing. “That's what we're trying to build with our robotics systems,” Stump says. “That's our bumper sticker: 'From tools to teammates.' ”

This article appears in the October 2021 print issue as “Deep Learning Goes to Boot Camp.”

Special Report: The Great AI Reckoning

READ NEXT:
7 Revealing Ways AIs Fail

Or see the full report for more articles on the future of AI.
