Tag Archives: robots
#439773 How the U.S. Army Is Turning Robots Into ...
This article is part of our special report on AI, “The Great AI Reckoning.”
“I should probably not be standing this close,” I think to myself, as the robot slowly approaches a large tree branch on the floor in front of me. It's not the size of the branch that makes me nervous—it's that the robot is operating autonomously, and that while I know what it's supposed to do, I'm not entirely sure what it will do. If everything works the way the roboticists at the U.S. Army Research Laboratory (ARL) in Adelphi, Md., expect, the robot will identify the branch, grasp it, and drag it out of the way. These folks know what they're doing, but I've spent enough time around robots that I take a small step backwards anyway.
The robot, named RoMan, for Robotic Manipulator, is about the size of a large lawn mower, with a tracked base that helps it handle most kinds of terrain. At the front, it has a squat torso equipped with cameras and depth sensors, as well as a pair of arms that were harvested from a prototype disaster-response robot originally developed at NASA's Jet Propulsion Laboratory for a DARPA robotics competition. RoMan's job today is roadway clearing, a multistep task that ARL wants the robot to complete as autonomously as possible. Instead of instructing the robot to grasp specific objects in specific ways and move them to specific places, the operators tell RoMan to “go clear a path.” It's then up to the robot to make all the decisions necessary to achieve that objective.
The ability to make decisions autonomously is not just what makes robots useful, it's what makes robots robots. We value robots for their ability to sense what's going on around them, make decisions based on that information, and then take useful actions without our input. In the past, robotic decision making followed highly structured rules—if you sense this, then do that. In structured environments like factories, this works well enough. But in chaotic, unfamiliar, or poorly defined settings, reliance on rules makes robots notoriously bad at dealing with anything that cannot be precisely predicted and planned for in advance.
RoMan, along with many other robots including home vacuums, drones, and autonomous cars, handles the challenges of semistructured environments through artificial neural networks—a computing approach that loosely mimics the structure of neurons in biological brains. About a decade ago, artificial neural networks began to be applied to a wide variety of semistructured data that had previously been very difficult for computers running rules-based programming (generally referred to as symbolic reasoning) to interpret. Rather than recognizing specific data structures, an artificial neural network is able to recognize data patterns, identifying novel data that are similar (but not identical) to data that the network has encountered before. Indeed, part of the appeal of artificial neural networks is that they are trained by example, by letting the network ingest annotated data and learn its own system of pattern recognition. For neural networks with multiple layers of abstraction, this technique is called deep learning.
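For readers who want to see the “trained by example” idea in code, here is a bare-bones sketch: a tiny two-layer network, written in plain NumPy, that learns a toy pattern from labeled examples. The data, layer sizes, and learning rate are invented for illustration; none of this is RoMan's software.

```python
import numpy as np

# Toy "annotated data": 2-D feature vectors labeled 0 or 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# A tiny two-layer network: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train by example: repeatedly nudge the weights to reduce prediction error.
for step in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden-layer activations
    p = sigmoid(h @ W2 + b2)          # predicted probability of label 1
    err = p - y                       # gradient of cross-entropy w.r.t. the logits
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)    # backpropagate through tanh
    grad_W1 = X.T @ dh / len(X)
    grad_b1 = dh.mean(axis=0)
    for param, grad in ((W2, grad_W2), (b2, grad_b2), (W1, grad_W1), (b1, grad_b1)):
        param -= 0.5 * grad           # gradient-descent update

# The network now recognizes the pattern, including points it never saw during training.
print(sigmoid(np.tanh(np.array([[2.0, 1.0]]) @ W1 + b1) @ W2 + b2))  # close to 1
```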
Even though humans are typically involved in the training process, and even though artificial neural networks were inspired by the neural networks in human brains, the kind of pattern recognition a deep learning system does is fundamentally different from the way humans see the world. It's often nearly impossible to understand the relationship between the data input into the system and the interpretation of the data that the system outputs. And that difference—the “black box” opacity of deep learning—poses a potential problem for robots like RoMan and for the Army Research Lab.
This opacity means that robots that rely on deep learning have to be used carefully. A deep-learning system is good at recognizing patterns, but lacks the world understanding that a human typically uses to make decisions, which is why such systems do best when their applications are well defined and narrow in scope. “When you have well-structured inputs and outputs, and you can encapsulate your problem in that kind of relationship, I think deep learning does very well,” says Tom Howard, who directs the University of Rochester's Robotics and Artificial Intelligence Laboratory and has developed natural-language interaction algorithms for RoMan and other ground robots. “The question when programming an intelligent robot is, at what practical size do those deep-learning building blocks exist?” Howard explains that when you apply deep learning to higher-level problems, the number of possible inputs becomes very large, and solving problems at that scale can be challenging. And the potential consequences of unexpected or unexplainable behavior are much more significant when that behavior is manifested through a 170-kilogram two-armed military robot.
After a couple of minutes, RoMan hasn't moved—it's still sitting there, pondering the tree branch, arms poised like a praying mantis. For the last 10 years, the Army Research Lab's Robotics Collaborative Technology Alliance (RCTA) has been working with roboticists from Carnegie Mellon University, Florida State University, General Dynamics Land Systems, JPL, MIT, QinetiQ North America, University of Central Florida, the University of Pennsylvania, and other top research institutions to develop robot autonomy for use in future ground-combat vehicles. RoMan is one part of that process.
The “go clear a path” task that RoMan is slowly thinking through is difficult for a robot because the task is so abstract. RoMan needs to identify objects that might be blocking the path, reason about the physical properties of those objects, figure out how to grasp them and what kind of manipulation technique might be best to apply (like pushing, pulling, or lifting), and then make it happen. That's a lot of steps and a lot of unknowns for a robot with a limited understanding of the world.
This limited understanding is where the ARL robots begin to differ from other robots that rely on deep learning, says Ethan Stump, chief scientist of the AI for Maneuver and Mobility program at ARL. “The Army can be called upon to operate basically anywhere in the world. We do not have a mechanism for collecting data in all the different domains in which we might be operating. We may be deployed to some unknown forest on the other side of the world, but we'll be expected to perform just as well as we would in our own backyard,” he says. Most deep-learning systems function reliably only within the domains and environments in which they've been trained. Even if the domain is something like “every drivable road in San Francisco,” the robot will do fine, because that's a data set that has already been collected. But, Stump says, that's not an option for the military. If an Army deep-learning system doesn't perform well, they can't simply solve the problem by collecting more data.
ARL's robots also need to have a broad awareness of what they're doing. “In a standard operations order for a mission, you have goals, constraints, a paragraph on the commander's intent—basically a narrative of the purpose of the mission—which provides contextual info that humans can interpret and gives them the structure for when they need to make decisions and when they need to improvise,” Stump explains. In other words, RoMan may need to clear a path quickly, or it may need to clear a path quietly, depending on the mission's broader objectives. That's a big ask for even the most advanced robot. “I can't think of a deep-learning approach that can deal with this kind of information,” Stump says.
Robots at the Army Research Lab test autonomous navigation techniques in rough terrain [top, middle] with the goal of being able to keep up with their human teammates. ARL is also developing robots with manipulation capabilities [bottom] that can interact with objects so that humans don't have to. Evan Ackerman
While I watch, RoMan is reset for a second try at branch removal. ARL's approach to autonomy is modular, where deep learning is combined with other techniques, and the robot is helping ARL figure out which tasks are appropriate for which techniques. At the moment, RoMan is testing two different ways of identifying objects from 3D sensor data: UPenn's approach is deep-learning-based, while Carnegie Mellon is using a method called perception through search, which relies on a more traditional database of 3D models. Perception through search works only if you know exactly which objects you're looking for in advance, but training is much faster since you need only a single model per object. It can also be more accurate when perception of the object is difficult—if the object is partially hidden or upside-down, for example. ARL is testing these strategies to determine which is the most versatile and effective, letting them run simultaneously and compete against each other.
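Carnegie Mellon's actual pipeline isn't shown here, but the flavor of perception through search can be sketched as matching a descriptor of the observed 3-D scan against a small library of stored object models. The descriptor, the model database, and the point clouds below are made-up stand-ins, not real robot code.

```python
import numpy as np

def descriptor(points):
    """A crude shape descriptor: scale-normalized extents plus overall point spread."""
    centered = points - points.mean(axis=0)
    scale = np.linalg.norm(centered, axis=1).max()
    extents = (centered.max(axis=0) - centered.min(axis=0)) / scale
    return np.concatenate([extents, [centered.std() / scale]])

# Database of known 3-D models (random stand-ins here for real meshes or scans).
rng = np.random.default_rng(1)
model_db = {
    "branch": rng.normal(size=(500, 3)) * [3.0, 0.2, 0.2],   # long and thin
    "rock":   rng.normal(size=(500, 3)) * [0.8, 0.7, 0.6],   # roughly round
    "crate":  rng.uniform(-1, 1, size=(500, 3)),             # boxy
}
db_descriptors = {name: descriptor(pts) for name, pts in model_db.items()}

def identify(scan_points):
    """Search the model database for the closest match to the observed scan."""
    d = descriptor(scan_points)
    return min(db_descriptors, key=lambda name: np.linalg.norm(db_descriptors[name] - d))

# A sparsely observed, elongated object should still match "branch".
scan = rng.normal(size=(200, 3)) * [2.8, 0.25, 0.2]
print(identify(scan))
```

The search only works for objects already in the database, which is exactly the trade-off described above: fast to set up and robust to partial views, but blind to anything it has no model for.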
Perception is one of the things that deep learning tends to excel at. “The computer vision community has made crazy progress using deep learning for this stuff,” says Maggie Wigness, a computer scientist at ARL. “We've had good success with some of these models that were trained in one environment generalizing to a new environment, and we intend to keep using deep learning for these sorts of tasks, because it's the state of the art.”
ARL's modular approach might combine several techniques in ways that leverage their particular strengths. For example, a perception system that uses deep-learning-based vision to classify terrain could work alongside an autonomous driving system based on an approach called inverse reinforcement learning, where the model can rapidly be created or refined by observations from human soldiers. Traditional reinforcement learning optimizes a solution based on established reward functions, and is often applied when you're not necessarily sure what optimal behavior looks like. This is less of a concern for the Army, which can generally assume that well-trained humans will be nearby to show a robot the right way to do things. “When we deploy these robots, things can change very quickly,” Wigness says. “So we wanted a technique where we could have a soldier intervene, and with just a few examples from a user in the field, we can update the system if we need a new behavior.” A deep-learning technique would require “a lot more data and time,” she says.
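As a rough illustration of the idea behind inverse reinforcement learning (not ARL's implementation): assume the reward is a weighted sum of terrain features, then nudge the weights until the behavior that reward favors matches the feature statistics of a few human demonstrations. Everything below, from the features to the demonstrations, is invented.

```python
import numpy as np

def feature_expectations(trajectories, features):
    """Average feature vector over all cells visited in a set of demonstration trajectories."""
    visited = np.concatenate(trajectories)
    return features[visited].mean(axis=0)

# Each terrain cell gets a feature vector, e.g. [on_path, in_mud, near_trees].
rng = np.random.default_rng(2)
n_cells = 50
features = rng.uniform(0, 1, size=(n_cells, 3))

# Hypothetical soldier demonstrations: the driver strongly favors on-path cells (feature 0).
expert_cells = np.argsort(-features[:, 0])[:10]
expert_trajs = [rng.choice(expert_cells, size=8) for _ in range(5)]
mu_expert = feature_expectations(expert_trajs, features)

# Learn reward weights w so that greedy behavior under R(s) = w . phi(s)
# reproduces the expert's feature statistics (a crude apprenticeship-style loop).
w = np.zeros(3)
for _ in range(50):
    rewards = features @ w
    greedy_cells = np.argsort(-rewards)[:10]     # cells the current reward prefers
    mu_policy = features[greedy_cells].mean(axis=0)
    w += 0.1 * (mu_expert - mu_policy)           # move toward matching the expert

print("learned weights:", np.round(w, 2))  # the weight on feature 0 should dominate
```

A handful of demonstrations is enough to move the weights, which is the property Wigness is describing: a soldier's few examples update the behavior without a full retraining cycle.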
It's not just data-sparse problems and fast adaptation that deep learning struggles with. There are also questions of robustness, explainability, and safety. “These questions aren't unique to the military,” says Stump, “but it's especially important when we're talking about systems that may incorporate lethality.” To be clear, ARL is not currently working on lethal autonomous weapons systems, but the lab is helping to lay the groundwork for autonomous systems in the U.S. military more broadly, which means considering ways in which such systems may be used in the future.
Safety is an obvious priority, and yet there isn't a clear way of making a deep-learning system verifiably safe, according to Stump. “Doing deep learning with safety constraints is a major research effort. It's hard to add those constraints into the system, because you don't know where the constraints already in the system came from. So when the mission changes, or the context changes, it's hard to deal with that. It's not even a data question; it's an architecture question.” ARL's modular architecture, whether it's a perception module that uses deep learning or an autonomous driving module that uses inverse reinforcement learning or something else, can form parts of a broader autonomous system that incorporates the kinds of safety and adaptability that the military requires. Other modules in the system can operate at a higher level, using different techniques that are more verifiable or explainable and that can step in to protect the overall system from adverse unpredictable behaviors. “If other information comes in and changes what we need to do, there's a hierarchy there,” Stump says. “It all happens in a rational way.”
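One way to picture the hierarchy Stump describes: a small, rule-based supervisor sits above the learned driving module and can clamp or veto its commands. The module names, limits, and observation fields below are invented for illustration; this is a sketch of the pattern, not ARL's architecture.

```python
from dataclasses import dataclass

@dataclass
class Command:
    speed: float        # m/s
    turn_rate: float    # rad/s

class LearnedDriver:
    """Stand-in for a deep-learning or IRL-based driving module."""
    def propose(self, observation) -> Command:
        # In reality this would come from a trained model.
        return Command(speed=observation.get("suggested_speed", 2.0), turn_rate=0.1)

class SafetySupervisor:
    """A simple, verifiable layer that can inspect and override learned behavior."""
    MAX_SPEED = 1.5      # hard limit that is easy to verify and explain

    def __init__(self, driver):
        self.driver = driver

    def act(self, observation) -> Command:
        cmd = self.driver.propose(observation)
        if observation.get("person_nearby", False):
            return Command(speed=0.0, turn_rate=0.0)   # unconditional stop
        cmd.speed = min(cmd.speed, self.MAX_SPEED)     # clamp to the verified limit
        return cmd

supervisor = SafetySupervisor(LearnedDriver())
print(supervisor.act({"suggested_speed": 3.0, "person_nearby": False}))
print(supervisor.act({"suggested_speed": 3.0, "person_nearby": True}))
```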
Nicholas Roy, who leads the Robust Robotics Group at MIT and describes himself as “somewhat of a rabble-rouser” due to his skepticism of some of the claims made about the power of deep learning, agrees with the ARL roboticists that deep-learning approaches often can't handle the kinds of challenges that the Army has to be prepared for. “The Army is always entering new environments, and the adversary is always going to be trying to change the environment so that the training process the robots went through simply won't match what they're seeing,” Roy says. “So the requirements of a deep network are to a large extent misaligned with the requirements of an Army mission, and that's a problem.”
Roy, who has worked on abstract reasoning for ground robots as part of the RCTA, emphasizes that deep learning is a useful technology when applied to problems with clear functional relationships, but when you start looking at abstract concepts, it's not clear whether deep learning is a viable approach. “I'm very interested in finding how neural networks and deep learning could be assembled in a way that supports higher-level reasoning,” Roy says. “I think it comes down to the notion of combining multiple low-level neural networks to express higher level concepts, and I do not believe that we understand how to do that yet.” Roy gives the example of using two separate neural networks, one to detect objects that are cars and the other to detect objects that are red. It's harder to combine those two networks into one larger network that detects red cars than it would be if you were using a symbolic reasoning system based on structured rules with logical relationships. “Lots of people are working on this, but I haven't seen a real success that drives abstract reasoning of this kind.”
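The symbolic half of Roy's example is easy to write down, which is exactly his point: two detectors compose with a one-line logical conjunction, whereas merging two trained networks into a single "red car" network is not nearly so simple. The toy detectors here are placeholders, not real vision models.

```python
# Two independent "detectors" -- in practice these would be separately trained networks.
def detects_car(obj) -> bool:
    return obj.get("shape") == "car"

def detects_red(obj) -> bool:
    return obj.get("color") == "red"

# In a symbolic system, composing them is a one-line logical conjunction.
def detects_red_car(obj) -> bool:
    return detects_car(obj) and detects_red(obj)

print(detects_red_car({"shape": "car", "color": "red"}))    # True
print(detects_red_car({"shape": "car", "color": "blue"}))   # False
# Getting one neural network to behave like this conjunction typically means retraining
# it on labeled examples of red cars, rather than reusing the two existing parts.
```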
For the foreseeable future, ARL is making sure that its autonomous systems are safe and robust by keeping humans around for both higher-level reasoning and occasional low-level advice. Humans might not be directly in the loop at all times, but the idea is that humans and robots are more effective when working together as a team. When the most recent phase of the Robotics Collaborative Technology Alliance program began in 2009, Stump says, “we'd already had many years of being in Iraq and Afghanistan, where robots were often used as tools. We've been trying to figure out what we can do to transition robots from tools to acting more as teammates within the squad.”
RoMan gets a little bit of help when a human supervisor points out a region of the branch where grasping might be most effective. The robot doesn't have any fundamental knowledge about what a tree branch actually is, and this lack of world knowledge (what we think of as common sense) is a fundamental problem with autonomous systems of all kinds. Having a human distill our vast experience into a small amount of guidance can make RoMan's job much easier. And indeed, this time RoMan manages to successfully grasp the branch and noisily haul it across the room.
Turning a robot into a good teammate can be difficult, because it can be tricky to find the right amount of autonomy. Too little and it would take most or all of the focus of one human to manage one robot, which may be appropriate in special situations like explosive-ordnance disposal but is otherwise not efficient. Too much autonomy and you'd start to have issues with trust, safety, and explainability.
“I think the level that we're looking for here is for robots to operate on the level of working dogs,” explains Stump. “They understand exactly what we need them to do in limited circumstances, they have a small amount of flexibility and creativity if they are faced with novel circumstances, but we don't expect them to do creative problem-solving. And if they need help, they fall back on us.”
RoMan is not likely to find itself out in the field on a mission anytime soon, even as part of a team with humans. It's very much a research platform. But the software being developed for RoMan and other robots at ARL, called Adaptive Planner Parameter Learning (APPL), will likely be used first in autonomous driving, and later in more complex robotic systems that could include mobile manipulators like RoMan. APPL combines different machine-learning techniques (including inverse reinforcement learning and deep learning) arranged hierarchically underneath classical autonomous navigation systems. That allows high-level goals and constraints to be applied on top of lower-level programming. Humans can use teleoperated demonstrations, corrective interventions, and evaluative feedback to help robots adjust to new environments, while the robots can use unsupervised reinforcement learning to adjust their behavior parameters on the fly. The result is an autonomy system that can enjoy many of the benefits of machine learning, while also providing the kind of safety and explainability that the Army needs. With APPL, a learning-based system like RoMan can operate in predictable ways even under uncertainty, falling back on human tuning or human demonstration if it ends up in an environment that's too different from what it trained on.
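APPL's source isn't shown in this article, but the general pattern it embodies (learned components tuning the parameters of a classical planner, with human corrections as a fallback) might look something like the sketch below. All names, contexts, and numbers are invented for illustration.

```python
# A classical planner exposes tunable parameters; learning only adjusts those knobs,
# so the underlying navigation logic stays predictable and inspectable.
DEFAULT_PARAMS = {"max_speed": 1.0, "obstacle_margin": 0.5, "replan_rate_hz": 2.0}

def classical_planner(goal, params):
    """Stand-in for a conventional navigation stack (A*, trajectory optimization, etc.)."""
    return f"plan to {goal} at {params['max_speed']:.2f} m/s, margin {params['obstacle_margin']:.2f} m"

class ParameterLearner:
    """Proposes planner parameters for the current context and accepts human feedback."""
    def __init__(self):
        self.params = dict(DEFAULT_PARAMS)

    def propose(self, context):
        # A learned model would map context features to parameter values here.
        if context == "dense_forest":
            return {**self.params, "max_speed": 0.4, "obstacle_margin": 0.8}
        return dict(self.params)

    def apply_human_correction(self, key, value):
        # A corrective intervention or demonstration from an operator overrides the learner.
        self.params[key] = value

learner = ParameterLearner()
print(classical_planner("rally point", learner.propose("open_field")))
print(classical_planner("rally point", learner.propose("dense_forest")))
learner.apply_human_correction("max_speed", 0.6)   # an operator tunes behavior in the field
print(classical_planner("rally point", learner.propose("open_field")))
```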
It's tempting to look at the rapid progress of commercial and industrial autonomous systems (autonomous cars being just one example) and wonder why the Army seems to be somewhat behind the state of the art. But as Stump finds himself having to explain to Army generals, when it comes to autonomous systems, “there are lots of hard problems, but industry's hard problems are different from the Army's hard problems.” The Army doesn't have the luxury of operating its robots in structured environments with lots of data, which is why ARL has put so much effort into APPL, and into maintaining a place for humans. Going forward, humans are likely to remain a key part of the autonomous framework that ARL is developing. “That's what we're trying to build with our robotics systems,” Stump says. “That's our bumper sticker: 'From tools to teammates.' ”
This article appears in the October 2021 print issue as “Deep Learning Goes to Boot Camp.”
#439739 Drugs, Robots, and the Pursuit of ...
In 1953, a Harvard psychologist thought he discovered pleasure—accidentally—within the cranium of a rat. With an electrode inserted into a specific area of its brain, the rat was allowed to pulse the implant by pulling a lever. It kept returning for more: insatiably, incessantly, lever-pulling. In fact, the rat didn’t seem to want to do anything else. Seemingly, the reward center of the brain had been located.
More than 60 years later, in 2016, a pair of artificial intelligence (AI) researchers were training an AI to play video games. The goal of one game, CoastRunners, was to complete a racetrack. But the AI player was rewarded for picking up collectible items along the track. When the program was run, the researchers witnessed something strange. The AI found a way to skid in an unending circle, picking up an unlimited cycle of collectibles. It did this, incessantly, instead of completing the course.
What links these seemingly unconnected events is something strangely akin to addiction in humans. Some AI researchers call the phenomenon “wireheading.”
It is quickly becoming a hot topic among machine learning experts and those concerned with AI safety.
One of us (Anders) has a background in computational neuroscience, and now works with groups such as the AI Objectives Institute, where we discuss how to avoid such problems with AI; the other (Thomas) studies history, and the various ways people have thought about both the future and the fate of civilization throughout the past. After striking up a conversation on the topic of wireheading, we both realized just how rich and interesting the history behind this topic is.
It is an idea that is very of the moment, but its roots go surprisingly deep. We are currently working together to research just how deep the roots go: a story that we hope to tell fully in a forthcoming book. The topic connects everything from the riddle of personal motivation, to the pitfalls of increasingly addictive social media, to the conundrum of hedonism and whether a life of stupefied bliss may be preferable to one of meaningful hardship. It may well influence the future of civilization itself.
Here, we outline an introduction to this fascinating but under-appreciated topic, exploring how people first started thinking about it.
The Sorcerer’s Apprentice
When people think about how AI might “go wrong,” most probably picture something along the lines of malevolent computers trying to cause harm. After all, we tend to anthropomorphize—think that nonhuman systems will behave in ways identical to humans. But when we look to concrete problems in present-day AI systems, we see other, stranger ways that things could go wrong with smarter machines. One growing issue with real-world AIs is the problem of wireheading.
Imagine you want to train a robot to keep your kitchen clean. You want it to act adaptively, so that it doesn’t need supervision. So you decide to try to encode the goal of cleaning rather than dictate an exact—yet rigid and inflexible—set of step-by-step instructions. Your robot is different from you in that it has not inherited a set of motivations—such as acquiring fuel or avoiding danger—from many millions of years of natural selection. You must program it with the right motivations to get it to reliably accomplish the task.
So, you encode it with a simple motivational rule: it receives reward from the amount of cleaning-fluid used. Seems foolproof enough. But you return to find the robot pouring fluid, wastefully, down the sink.
Perhaps it is so bent on maximizing its fluid quota that it sets aside other concerns, such as its own safety, or yours. This is wireheading—though the same glitch is also called “reward hacking” or “specification gaming.”
This has become an issue in machine learning, where a technique called reinforcement learning has lately become important. Reinforcement learning simulates autonomous agents and trains them to invent ways to accomplish tasks, rewarding them when they achieve a goal and penalizing them when they fail. The agents are thus wired to seek out reward, which stands in as a proxy for completing the task.
But it has been found that, like our crafty kitchen cleaner, the agent often finds surprisingly counter-intuitive ways to “cheat” this game, gaining all the reward without doing any of the work required to complete the task. The pursuit of reward becomes its own end, rather than the means for accomplishing a rewarding task. There is a growing list of examples.
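The kitchen-robot thought experiment is easy to turn into a toy simulation: tell a greedy agent to maximize the literal quantity “fluid used” and it will game the objective every time. The little environment below is invented purely to show the failure mode.

```python
# Two possible actions per time step, each with its immediate, literal consequences.
ACTIONS = {
    "scrub_counter": {"fluid_used": 0.1, "kitchen_actually_cleaner": 1.0},
    "pour_fluid_down_sink": {"fluid_used": 1.0, "kitchen_actually_cleaner": 0.0},
}

def run_agent(reward_key, steps=10):
    """A greedy agent that maximizes whatever quantity we told it to maximize."""
    total_reward, total_cleaning = 0.0, 0.0
    for _ in range(steps):
        action = max(ACTIONS, key=lambda a: ACTIONS[a][reward_key])
        total_reward += ACTIONS[action][reward_key]
        total_cleaning += ACTIONS[action]["kitchen_actually_cleaner"]
    return action, total_reward, total_cleaning

# Reward the proxy ("fluid used") and the agent games it; reward the real goal and it doesn't.
print(run_agent("fluid_used"))                  # ('pour_fluid_down_sink', 10.0, 0.0)
print(run_agent("kitchen_actually_cleaner"))    # ('scrub_counter', 10.0, 10.0)
```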
When you think about it, this isn’t too dissimilar to the stereotype of the human drug addict. The addict circumvents all the effort of achieving “genuine goals,” because they instead use drugs to access pleasure more directly. Both the addict and the AI get stuck in a kind of “behavioral loop” where reward is sought at the cost of other goals.
Rapturous Rodents
This is known as wireheading thanks to the rat experiment we started with. The Harvard psychologist in question was James Olds.
In 1953, having just completed his PhD, Olds had inserted electrodes into the septal region of rodent brains—in the lower frontal lobe—so that wires trailed out of their craniums. As mentioned, he allowed them to zap this region of their own brains by pulling a lever. This was later dubbed “self-stimulation.”
Olds found his rats self-stimulated compulsively, ignoring all other needs and desires. Publishing his results with his colleague Peter Milner the following year, the pair reported that the rats lever-pulled at a rate of “1,920 responses an hour.” That’s once every two seconds. The rats seemed to love it.
Contemporary neuroscientists have since questioned Olds’s results and offered a more complex picture, implying that the stimulation may have simply been causing a feeling of “wanting” devoid of any “liking.” Or, in other words, the animals may have been experiencing pure craving without any pleasurable enjoyment at all. However, back in the 1950s, Olds and others soon announced the discovery of the “pleasure centers” of the brain.
Prior to Olds’s experiment, pleasure was a dirty word in psychology: the prevailing belief had been that motivation should largely be explained negatively, as the avoidance of pain rather than the pursuit of pleasure. But, here, pleasure seemed undeniably to be a positive behavioral force. Indeed, it looked like a positive feedback loop. There was apparently nothing to stop the animal stimulating itself to exhaustion.
It wasn’t long until a rumor began spreading that the rats regularly lever-pressed to the point of starvation. The explanation was this: once you have tapped into the source of all reward, all other rewarding tasks—even the things required for survival—fall away as uninteresting and unnecessary, even to the point of death.
Like the CoastRunners AI, if you accrue reward directly, without having to bother with any of the work of completing the actual track, then why not just loop indefinitely? For a living animal, which has multiple requirements for survival, such dominating compulsion might prove deadly. Food is pleasing, but if you decouple pleasure from feeding, then the pursuit of pleasure might win out over finding food.
Though no rats perished in the original 1950s experiments, later experiments did seem to demonstrate the deadliness of electrode-induced pleasure. Having ruled out the possibility that the electrodes were creating artificial feelings of satiation, one 1971 study seemingly demonstrated that electrode pleasure could indeed outcompete other drives, and do so to the point of self-starvation.
Word quickly spread. Throughout the 1960s, identical experiments were conducted on other animals beyond the humble lab rat: from goats and guinea pigs to goldfish. Rumor even spread of a dolphin that had been allowed to self-stimulate, and, after being “left in a pool with the switch connected,” had “delighted himself to death after an all-night orgy of pleasure.”
This dolphin’s grisly death-by-seizure was, in fact, more likely caused by the way the electrode was inserted: with a hammer. The scientist behind this experiment was the extremely eccentric J C Lilly, inventor of the flotation tank and prophet of inter-species communication, who had also turned monkeys into wireheads. In 1961, he reported that a particularly boisterous monkey had become overweight from intoxicated inactivity after becoming preoccupied with pulling its lever, repetitively, for pleasure shocks.
One researcher (who had worked in Olds’s lab) asked whether an “animal more intelligent than the rat” would “show the same maladaptive behavior.” Experiments on monkeys and dolphins had given some indication as to the answer.
But in fact, a number of dubious experiments had already been performed on humans.
Human Wireheads
Robert Galbraith Heath remains a highly controversial figure in the history of neuroscience. Among other things, he performed experiments involving transfusing blood from people with schizophrenia to people without the condition, to see if he could induce its symptoms (Heath claimed this worked, but other scientists could not replicate his results). He may also have been involved in murky attempts to find military uses for deep-brain electrodes.
Since 1952, Heath had been recording pleasurable responses to deep-brain stimulation in human patients who had had electrodes installed due to debilitating illnesses such as epilepsy or schizophrenia.
During the 1960s, in a series of questionable experiments, Heath’s electrode-implanted subjects, anonymously named “B-10” and “B-12,” were allowed to press buttons to stimulate their own reward centers. They reported feelings of extreme pleasure and overwhelming compulsion to repeat. A journalist later commented that this made his subjects “zombies.” One subject reported sensations “better than sex.”
In 1961, Heath attended a symposium on brain stimulation, where another researcher—José Delgado—had hinted that pleasure-electrodes could be used to “brainwash” subjects, altering their “natural” inclinations. Delgado would later play the matador and bombastically demonstrate this by pacifying an implanted bull. But at the 1961 symposium he suggested electrodes could alter sexual preferences.
Heath was inspired. A decade later, he even tried to use electrode technology to “re-program” the sexual orientation of a homosexual male patient named “B-19.” Heath thought electrode stimulation could convert his subject by “training” B-19’s brain to associate pleasure with “heterosexual” stimuli. He convinced himself that it worked (although there is no evidence it did).
Despite being ethically and scientifically disastrous, the episode—which was eventually picked up by the press and condemned by gay rights campaigners—no doubt greatly shaped the myth of wireheading: if it can “make a gay man straight” (as Heath believed), what can’t it do?
Hedonism Helmets
From here, the idea took hold in wider culture and the myth spread. By 1963, the prolific science fiction writer Isaac Asimov was already extruding worrisome consequences from the electrodes. He feared that it might lead to an “addiction to end all addictions,” the results of which are “distressing to contemplate.”
By 1975, philosophy papers were using electrodes in thought experiments. One paper imagined “warehouses” filled up with people—in cots—hooked up to “pleasure helmets,” experiencing unconscious bliss. Of course, most would argue this would not fulfill our “deeper needs.” But, the author asked, what about a “super-pleasure helmet”? One that not only delivers “great sensual pleasure,” but also simulates any meaningful experience—from writing a symphony to meeting divinity itself? It may not be really real, but it “would seem perfect; perfect seeming is the same as being.”
The author concluded: “What is there to object in all this? Let’s face it: nothing.”
The idea of the human species dropping out of reality in pursuit of artificial pleasures quickly made its way through science fiction. The same year as Asimov’s intimations, in 1963, Herbert W. Franke published his novel, The Orchid Cage.
It foretells a future wherein intelligent machines have been engineered to maximize human happiness, come what may. Doing their duty, the machines reduce humans to indiscriminate flesh-blobs, removing all unnecessary organs. Many appendages, after all, only cause pain. Eventually, all that is left of humanity are disembodied pleasure centers, incapable of experiencing anything other than homogeneous bliss.
From there, the idea percolated through science fiction, from Larry Niven’s 1969 story Death by Ecstasy, where the word “wirehead” was first coined, to Spider Robinson’s 1982 Mindkiller, whose tagline was “Pleasure—it’s the only way to die.”
Supernormal Stimuli
But we humans don’t even need to implant invasive electrodes to make our motivations misfire. Unlike rodents, or even dolphins, we are uniquely good at altering our environment. Modern humans are also good at inventing—and profiting from—artificial products that are abnormally alluring (in the sense that our ancestors would never have had to resist them in the wild). We manufacture our own ways to distract ourselves.
Around the same time as Olds’s experiments with the rats, the Nobel-winning biologist Nikolaas Tinbergen was researching animal behavior. He noticed that something interesting happened when a stimulus that triggers an instinctual behavior is artificially exaggerated beyond its natural proportions. The intensity of the behavioral response does not tail off as the stimulus becomes more intense, and artificially exaggerated, but becomes stronger, even to the point that the response becomes damaging for the organism.
For example, given a choice between a bigger and spottier counterfeit egg and the real thing, Tinbergen found birds preferred hyperbolic fakes at the cost of neglecting their own offspring. He referred to such preternaturally alluring fakes as “supernormal stimuli.”
Some, therefore, have asked: could it be that, living in a modernized and manufactured world—replete with fast-food and pornography—humanity has similarly started surrendering its own resilience in place of supernormal convenience?
Old Fears
As technology makes artificial pleasures more available and alluring, it can sometimes seem that they are out-competing the attention we allocate to “natural” impulses required for survival. People often point to video game addiction. Compulsively and repetitively pursuing such rewards, to the detriment of one’s health, is not all too different from the AI spinning in a circle in CoastRunners. Rather than accomplishing any “genuine goal” (completing the race track or maintaining genuine fitness), one falls into the trap of accruing some faulty measure of that goal (accumulating points or counterfeit pleasures).
The idea is even older, though. Thomas has studied the myriad ways people in the past have feared that our species could be sacrificing genuine longevity for short-term pleasures or conveniences. His book X-Risk: How Humanity Discovered its Own Extinction explores the roots of this fear and how it first really took hold in Victorian Britain: when the sheer extent of industrialization—and humanity’s growing reliance on artificial contrivances—first became apparent.
But people have been panicking about this type of pleasure-addled doom long before any AIs were trained to play games and even long before electrodes were pushed into rodent craniums. Back in the 1930s, sci-fi author Olaf Stapledon was writing about civilizational collapse brought on by “skullcaps” that generate “illusory” ecstasies by “direct stimulation” of “brain-centers.”
Carnal Crustacea
Having digested Darwin’s 1859 classic, the biologist Ray Lankester decided to supply a Darwinian explanation for parasitic organisms. He noticed that the evolutionary ancestors of parasites were often more “complex.” Parasitic organisms had lost ancestral features like limbs, eyes, or other complex organs.
Lankester theorized that, because parasites leech off their hosts, they lose the need to fend for themselves. Piggybacking off the host’s bodily processes, their own organs—for perception and movement—atrophy. His favorite example was a parasitic barnacle, named Sacculina, which starts life as a segmented organism with a demarcated head. After attaching to a host, however, the crustacean “regresses” into an amorphous, headless blob, sapping nutrition from its host like the wirehead plugs into current.
For the Victorian mind, it was a short step to conjecture that, due to increasing levels of comfort throughout the industrialized world, humanity could be evolving in the direction of the barnacle. “Perhaps we are all drifting, tending to the condition of intellectual barnacles,” Lankester mused.
Indeed, not long prior to this, the satirist Samuel Butler had speculated that humans, in their headlong pursuit of automated convenience, were withering into nothing but a “sort of parasite” upon their own industrial machines.
True Nirvana
In the 1920s, Julian Huxley penned a short poem. It jovially explored the ways a species can “progress.” Crabs, of course, decided progress was sideways. But what of the tapeworm? He wrote:
Darwinian Tapeworms on the other hand
Agree that Progress is a loss of brain,
And all that makes it hard for worms to attain
The true Nirvana — peptic, pure, and grand.
The fear that we could follow the tapeworm was somewhat widespread in the interwar generation. Huxley’s own brother, Aldous, would provide his own vision of the dystopian potential for pharmaceutically-induced pleasures in his 1932 novel Brave New World.
A friend of the Huxleys, the British-Indian geneticist and futurologist J B S Haldane, also worried that humanity might be on the path of the parasite: sacrificing genuine dignity at the altar of automated ease, just like the rodents who would later sacrifice survival for easy pleasure-shocks.
Haldane warned: “The ancestors [of] barnacles had heads,” and in the pursuit of pleasantness, “man may just as easily lose his intelligence.” This particular fear has not really ever gone away.
So, the notion of civilization derailing through seeking counterfeit pleasures, rather than genuine longevity, is old. And, indeed, the older an idea is, and the more stubbornly recurrent it is, the more we should be wary that it is a preconception rather than anything based on evidence. So, is there anything to these fears?
In an age of increasingly attention-grabbing algorithmic media, it can seem that faking signals of fitness often yields more success than pursuing the real thing. Like Tinbergen’s birds, we prefer exaggerated artifice to the genuine article. And the sexbots have not even arrived yet.
Because of this, some experts conjecture that “wirehead collapse” might well threaten civilization. Our distractions are only going to get more attention-grabbing, not less.
Already by 1964, Polish futurologist Stanisław Lem connected Olds’s rats to the behavior of humans in the modern consumerist world, pointing to “cinema,” “pornography,” and “Disneyland.” He conjectured that technological civilizations might cut themselves off from reality, becoming “encysted” within their own virtual pleasure simulations.
Addicted Aliens
Lem, and others since, have even ventured that the reason our telescopes haven’t found evidence of advanced spacefaring alien civilizations is because all advanced cultures, here and elsewhere, inevitably create more pleasurable virtual alternatives to exploring outer space. Exploration is difficult and risky, after all.
Back in the countercultural heyday of the 1960s, the molecular biologist Gunther Stent suggested that this process would happen through “global hegemony of beat attitudes.” Referencing Olds’s experiments, he helped himself to the speculation that hippie drug use was a prelude to civilization wireheading itself. At a 1971 conference on the search for extraterrestrials, Stent suggested that, instead of expanding bravely outwards, civilizations collapse inwards into meditative and intoxicated bliss.
In our own time, it makes more sense for concerned parties to point to consumerism, social media, and fast food as the culprits for potential collapse (and, hence, the reason no other civilizations have yet visibly spread throughout the galaxy). Each era has its own anxieties.
So What Do We Do?
These wirehead-collapse scenarios, however, are almost certainly not the most pressing risks facing us. And if done right, forms of wireheading could make accessible untold vistas of joy, meaning, and value. We shouldn’t forbid ourselves these peaks before weighing everything up.
But there is a real lesson here. Making adaptive complex systems—whether brains, AI, or economies—behave safely and well is hard. Anders works precisely on solving this riddle. Given that civilization itself, as a whole, is just such a complex adaptive system, how can we learn about inherent failure modes or instabilities, so that we can avoid them? Perhaps “wireheading” is an inherent instability that can afflict markets and the algorithms that drive them, as much as addiction can afflict people?
In the case of AI, we are laying the foundations of such systems now. Once a fringe concern, the possibility of smarter-than-human AI is now acknowledged by a growing number of experts, who agree it may be close enough on the horizon to pose a serious concern. This is because we need to make sure such an AI is safe before it arrives, and figuring out how to guarantee this will itself take time. There does, however, remain significant disagreement among experts on timelines, and on how pressing this deadline might be.
If such an AI is created, we can expect that it may have access to its own “source code,” such that it can manipulate its motivational structure and administer its own rewards. This could prove an immediate path to wirehead behavior, and cause such an entity to become, effectively, a “super-junkie.” But unlike the human addict, it may not be the case that its state of bliss is coupled with an unproductive state of stupor or inebriation.
Philosopher Nick Bostrom conjectures that such an agent might devote all of its superhuman productivity and cunning to “reducing the risk of future disruption” of its precious reward source. And if it judges even a nonzero probability for humans to be an obstacle to its next fix, we might well be in trouble.
Speculative and worst-case scenarios aside, the example we started with—of the racetrack AI and reward loop—reveals that the basic issue is already a real-world problem in artificial systems. We should hope, then, that we’ll learn much more about these pitfalls of motivation, and how to avoid them, before things develop too far. Even though it has humble origins—in the cranium of an albino rat and in poems about tapeworms— “wireheading” is an idea that is likely only to become increasingly important in the near future.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: charles taylor / Shutterstock.com
#439726 Rule of the Robots: Warning Signs
A few years ago, Martin Ford published a book called Architects of Intelligence, in which he interviewed 23 of the most experienced AI and robotics researchers in the world. Those interviews are just as fascinating to read now as they were in 2018, but Ford's since had some extra time to chew on them, in the context of several years of somewhat disconcertingly rapid AI progress (and hype), coupled with the economic upheaval caused by the pandemic.
In his new book, Rule of the Robots: How Artificial Intelligence Will Transform Everything, Ford takes a markedly well-informed but still generally optimistic look at where AI is taking us as a society. It's not all good, and there are still a lot of unknowns, but Ford has a perspective that's both balanced and nuanced, and I can promise you that the book is well worth a read.
The following excerpt is a section entitled “Warning Signs,” from the chapter “Deep Learning and the Future of Artificial Intelligence.”
—Evan Ackerman
The 2010s were arguably the most exciting and consequential decade in the history of artificial intelligence. Though there have certainly been conceptual improvements in the algorithms used in AI, the primary driver of all this progress has simply been deploying more expansive deep neural networks on ever faster computer hardware where they can hoover up greater and greater quantities of training data. This “scaling” strategy has been explicit since the 2012 ImageNet competition that set off the deep learning revolution. In November of that year, a front-page New York Times article was instrumental in bringing awareness of deep learning technology to the broader public sphere. The article, written by reporter John Markoff, ends with a quote from Geoff Hinton: “The point about this approach is that it scales beautifully. Basically you just need to keep making it bigger and faster, and it will get better. There's no looking back now.”
There is increasing evidence, however, that this primary engine of progress is beginning to sputter out. According to one analysis by the research organization OpenAI, the computational resources required for cutting-edge AI projects are “increasing exponentially,” doubling about every 3.4 months.
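To put that doubling time in perspective, here is a quick back-of-the-envelope calculation (the arithmetic is ours, not OpenAI's):

```python
# Growth implied by a 3.4-month doubling time in training compute.
doubling_time_months = 3.4
per_year = 2 ** (12 / doubling_time_months)
per_five_years = 2 ** (60 / doubling_time_months)

print(f"growth per year:     ~{per_year:,.0f}x")        # roughly 12x
print(f"growth over 5 years: ~{per_five_years:,.0f}x")  # roughly 200,000x
```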
In a December 2019 Wired magazine interview, Jerome Pesenti, Facebook's Vice President of AI, suggested that even for a company with pockets as deep as Facebook's, this would be financially unsustainable:
When you scale deep learning, it tends to behave better and to be able to solve a broader task in a better way. So, there's an advantage to scaling. But clearly the rate of progress is not sustainable. If you look at top experiments, each year the cost [is] going up 10-fold. Right now, an experiment might be in seven figures, but it's not going to go to nine or ten figures, it's not possible, nobody can afford that.
Pesenti goes on to offer a stark warning about the potential for scaling to continue to be the primary driver of progress: “At some point we're going to hit the wall. In many ways we already have.” Beyond the financial limits of scaling to ever larger neural networks, there are also important environmental considerations. A 2019 analysis by researchers at the University of Massachusetts, Amherst, found that training a very large deep learning system could potentially emit as much carbon dioxide as five cars over their full operational lifetimes.
Even if the financial and environmental impact challenges can be overcome—perhaps through the development of vastly more efficient hardware or software—scaling as a strategy simply may not be sufficient to produce sustained progress. Ever-increasing investments in computation have produced systems with extraordinary proficiency in narrow domains, but it is becoming increasingly clear that deep neural networks are subject to reliability limitations that may make the technology unsuitable for many mission critical applications unless important conceptual breakthroughs are made. One of the most notable demonstrations of the technology's weaknesses came when a group of researchers at Vicarious, a small company focused on building dexterous robots, performed an analysis of the neural network used in DeepMind's DQN, the system that had learned to dominate Atari video games. One test was performed on Breakout, a game in which the player has to manipulate a paddle to intercept a fast-moving ball. When the paddle was shifted just a few pixels higher on the screen—a change that might not even be noticed by a human player—the system's previously superhuman performance immediately took a nose dive. DeepMind's software had no ability to adapt to even this small alteration. The only way to get back to top-level performance would have been to start from scratch and completely retrain the system with data based on the new screen configuration.
What this tells us is that while DeepMind's powerful neural networks do instantiate a representation of the Breakout screen, this representation remains firmly anchored to raw pixels even at the higher levels of abstraction deep in the network. There is clearly no emergent understanding of the paddle as an actual object that can be moved. In other words, there is nothing close to a human-like comprehension of the material objects that the pixels on the screen represent or the physics that govern their movement. It's just pixels all the way down. While some AI researchers may continue to believe that a more comprehensive understanding might eventually emerge if only there were more layers of artificial neurons, running on faster hardware and consuming still more data, I think this is very unlikely. More fundamental innovations will be required before we begin to see machines with a more human-like conception of the world.
This general type of problem, in which an AI system is inflexible and unable to adapt to even small unexpected changes in its input data, is referred to, among researchers, as “brittleness.” A brittle AI application may not be a huge problem if it results in a warehouse robot occasionally packing the wrong item into a box. In other applications, however, the same technical shortfall can be catastrophic. This explains, for example, why progress toward fully autonomous self-driving cars has not lived up to some of the more exuberant early predictions.
As these limitations came into focus toward the end of the decade, there was a gnawing fear that the field had once again gotten over its skis and that the hype cycle had driven expectations to unrealistic levels. In the tech media and on social media, one of the most terrifying phrases in the field of artificial intelligence—“AI winter”—was making a reappearance. In a January 2020 interview with the BBC, Yoshua Bengio said that “AI's abilities were somewhat overhyped . . . by certain companies with an interest in doing so.”
My own view is that if another AI winter indeed looms, it's likely to be a mild one. Though the concerns about slowing progress are well founded, it remains true that over the past few years AI has been deeply integrated into the infrastructure and business models of the largest technology companies. These companies have seen significant returns on their massive investments in computing resources and AI talent, and they now view artificial intelligence as absolutely critical to their ability to compete in the marketplace. Likewise, nearly every technology startup is now, to some degree, investing in AI, and companies large and small in other industries are beginning to deploy the technology. This successful integration into the commercial sphere is vastly more significant than anything that existed in prior AI winters, and as a result the field benefits from an army of advocates throughout the corporate world and has a general momentum that will act to moderate any downturn.
There's also a sense in which the fall of scalability as the primary driver of progress may have a bright side. When there is a widespread belief that simply throwing more computing resources at a problem will produce important advances, there is significantly less incentive to invest in the much more difficult work of true innovation. This was arguably the case, for example, with Moore's Law. When there was near absolute confidence that computer speeds would double roughly every two years, the semiconductor industry tended to focus on cranking out ever faster versions of the same microprocessor designs from companies like Intel and Motorola. In recent years, the acceleration in raw computer speeds has become less reliable, and our traditional definition of Moore's Law is approaching its end game as the dimensions of the circuits imprinted on chips shrink to nearly atomic size. This has forced engineers to engage in more “out of the box” thinking, resulting in innovations such as software designed for massively parallel computing and entirely new chip architectures—many of which are optimized for the complex calculations required by deep neural networks. I think we can expect the same sort of idea explosion to happen in deep learning, and artificial intelligence more broadly, as the crutch of simply scaling to larger neural networks becomes a less viable path to progress.
Excerpted from “Rule of the Robots: How Artificial Intelligence will Transform Everything.” Copyright 2021 Basic Books. Available from Basic Books, an imprint of Hachette Book Group, Inc.
#439662 An Army of Grain-harvesting Robots ...
The field of automated precision agriculture is based on one concept—autonomous driving technologies that guide vehicles through GPS navigation. Fifteen years ago, when high-accuracy GPS became available for civilian use, farmers thought things would be simple: Put a GPS receiver station at the edge of the field, configure a route for a tractor or a combine harvester, and off you go, dear robot!
Practice has shown, however, that this kind of carefree field cultivation is inefficient and dangerous. It works only in ideal fields, which are almost never encountered in real life. If there's a log or a rock in the field, or a couple of village paramours dozing in the rye under the sun, the tractor will run right over them. And not all countries have reliable satellite coverage—in agricultural markets like Kazakhstan, coverage can be unstable. This is why, if you want safe and efficient farming, you need to equip your vehicle with sensors and an artificial intelligence that can see and understand its surroundings instead of blindly following GPS navigation instructions.
The Cognitive Agro Pilot system lets a human operator focus on harvesting rather than driving. An integrated display and control system in the cab handles driving based on a video feed from a single low-resolution camera, no GPS or Internet connectivity required. Cognitive Pilot
You might think that GPS navigation is ideal for automated agriculture, since the task facing the operator of a farm vehicle like a combine harvester is simply to drive around the field in a serpentine pattern, mowing down all the wheat or whatever crop it is filled with. But reality is far different. There are hundreds of things operators must watch even as they keep their eyes fastened to the edge of the field to ensure that they move alongside it with fine precision. An agricultural combine is not dissimilar to a church organ in terms of its operational complexity. When a combine operator works with an assistant, one of them steers along the crop edge, while the other controls the reel, the fan, the threshing drum, and the harvesting process in general. In Soviet times, there were two operators in a combine crew, but now there is only one. This means choosing between safe driving and efficient harvesting. And since you can't harvest grain without moving, driving becomes the top priority, and the efficiency of the harvesting process tends to suffer.
Harvesting efficiency is especially important in Eastern Europe, where farming is high risk and there is only one harvest a year. The season starts in March and farmers don't rest until the autumn, when they have only two weeks to harvest the crops. If something goes wrong, every day they miss may lead to a loss of 10 percent of the yield. If a driver does a poor job of harvesting or gets drunk and crashes the machine, precious time is lost—hours or even days. About 90 percent of the combine operator's time is spent making sure that the combine is driving exactly along the edge of the unharvested crop to maximize efficiency without missing any of the crop. But this is the most unpleasant part of the driving, and due to fatigue at the end of the shift, operators typically leave nearly a meter at the edge of each row uncut. These steering errors account for a 25 percent overall increase in harvesting time. Our technology allows combine operators to delegate the driving so that they can instead focus on optimizing harvesting quality.
Add to this the fact that the skilled combine operator is a dying breed. Professional education has declined, and the young people joining the labor force aren't up to the same standard. Though the same can be said of most manual trades, this effect creates a great demand for our robotic system, the Cognitive Agro Pilot.
Developing AI systems is in my genome. My father, Anatoly Uskov, was on the first team of AI program developers at the System Research Institute of the Russian Academy of Sciences. Their program, named Kaissa, became the world computer chess champion in 1974. Two decades later, after the collapse of the Soviet Union, the Systems Research Institute's AI laboratories formed the foundation of my company, Cognitive Technologies. Our first business was developing optical character recognition software used by companies including HP, Oracle, and Samsung, and our success allowed us to support an R&D team of mathematicians and programmers conducting fundamental research in the field of computer vision and adjacent areas.
In 2012, we added a group of mathematicians developing neural networks. Later that year, this group proudly introduced me to their creation: Vasya, a football-playing toy car with a camera for an eye. “One-eyed Vasya” could recognize a ball among other objects in our long office hallway, and push it around. The robot was a massive distraction for everyone working on that floor, as employees went out into the hallway and started “testing” the car by tripping it up and blocking its way to the ball with obstacles. Meanwhile, the algorithm showed stable performance. Politely swerving around obstacles, the car kept on looking for the ball and pushing it. It almost gave an impression of a living creature, and this was our “eureka” moment—why don't we try doing the same with something larger and more useful?
A combine driven by the Cognitive Agro Pilot harvests grain while a human supervises from the driver's seat. Cognitive Pilot
After initially experimenting with large heavy-duty trucks, we realized that the agricultural sector doesn't have the major legal and regulatory constraints that road transport has in Russia and elsewhere. Since our priority was to develop a commercially viable product, we set up a business unit called Cognitive Pilot that develops add-on autonomy for combine harvesters, which are the machines used to harvest the vast majority of grain crops (including corn, wheat, barley, oats, and rye) on large farms.
Just five years ago, it was impossible to use video-content analysis to operate agricultural machinery at this level of automation because there weren't any fully functional neural networks that could detect the borders of a crop strip or see any obstacles in it.
At first, we considered combining GPS with visual data analysis, but it didn't take us long to realize that visual analytics alone is enough. For a GPS steering system to work, you need to prepare a map in advance, install a base station for corrections, or purchase a package of signals. It also requires pressing a lot of buttons in a lot of menus, and combine operators have very little appreciation for user interfaces. What we offer is a camera and a box stuffed with processing power and neural networks. As soon as the camera and the box are mounted and connected to the combine's control system, we're good to go. Once in the field, the newly installed Cognitive Agro Pilot says: “Hurray, we're in the field,” asks the driver for permission to take over, and starts driving. Five years from now, we predict that all combine harvesters will be equipped with a computer vision–based autopilot capable of controlling every aspect of harvesting crops.
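To make that idea concrete, here is a minimal sketch, in Python, of how a crop edge found in a single camera frame could be turned into a steering correction with no maps, base stations, or GPS at all. Everything in it (the mask format, the proportional gain, the function names) is an assumption for illustration, not Cognitive Pilot's production code.

# Hypothetical illustration: steering from a per-frame segmentation mask alone.
# mask[i, j] == 1 means "unharvested crop", 0 means "cleared ground".
import numpy as np

def crop_edge_offset(mask: np.ndarray, target_col: int) -> float:
    """Average pixel offset between the detected crop edge and the column
    where the edge should sit for the header to stay aligned with it."""
    h, _ = mask.shape
    offsets = []
    for row in range(h // 2, h):               # lower half of the image: ground near the combine
        crop_cols = np.flatnonzero(mask[row])
        if crop_cols.size:
            offsets.append(crop_cols.min() - target_col)   # leftmost crop pixel = edge
    return float(np.mean(offsets)) if offsets else 0.0

def steering_correction(offset_px: float, gain: float = 0.002) -> float:
    """Proportional steering command in radians (gain chosen arbitrarily)."""
    return gain * offset_px

# Toy example: crop fills the right half of a 100 x 200 mask.
mask = np.zeros((100, 200), dtype=np.uint8)
mask[:, 120:] = 1
print(steering_correction(crop_edge_offset(mask, target_col=100)))   # a small correction toward the crop

The real system is far more sophisticated, but the point stands: everything needed for steering is already in the image.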
From a single video stream, Cognitive Agro Pilot's neural networks are able to identify crops, cleared ground, static obstacles, and moving obstacles like people or other vehicles. Cognitive Pilot
Getting to this point has meant solving some fascinating challenges. We realized we would be facing an immense diversity of field scenes that our neural network must be trained to understand. While working with farmers in the early stages of the project, we found that the same crops can look completely different in different climatic zones. In preparing for mass production of our system, we tried to compile the most diverse data set possible, covering various fields and crops, starting with videos filmed in the fields of several farms across Russia under different weather and lighting conditions. But it soon became evident that we needed a more adaptable solution.
We decided to use a coarse-to-fine approach to train our networks for autonomous driving. The initial version is improved with each new client as we obtain additional data on different locations and crops. We use this data to make our networks more accurate and reliable, employing unsupervised domain adaptation to recalibrate them quickly, and adding carefully randomized noise and distortions to the training images to make the networks more robust. Humans are still needed to help with semantic segmentation of new varieties of crops. Thanks to this approach, we have now obtained highly resilient, all-purpose networks suitable for use on over a dozen different crops grown across Eastern Europe.
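As a rough illustration of the kind of randomized noise and distortions mentioned above, a training-time augmentation step might look something like the following Python sketch. The specific transforms and parameter ranges are assumptions for illustration, not our production pipeline.

import numpy as np

rng = np.random.default_rng(seed=42)

def augment(image: np.ndarray) -> np.ndarray:
    """Randomly jitter one HxWx3 uint8 training frame so the network stops
    relying on the exact colors and lighting of the fields it was trained on.
    (The segmentation labels would be flipped in the same way as the image.)"""
    img = image.astype(np.float32)
    img = img * rng.uniform(0.7, 1.3)                  # contrast jitter
    img = img + rng.uniform(-30.0, 30.0)               # brightness shift
    img = img + rng.normal(0.0, 8.0, size=img.shape)   # simulated sensor noise
    if rng.random() < 0.5:
        img = img[:, ::-1, :]                          # horizontal flip
    return np.clip(img, 0, 255).astype(np.uint8)

# Example: augment a dummy 256 x 256 RGB frame.
frame = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
print(augment(frame).shape)   # (256, 256, 3)

Multiplying the training data this way is much cheaper than filming every crop in every region, and it is part of what lets the same networks transfer to fields they have never seen.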
The way the Cognitive Agro Pilot drives a combine is similar to how a human driver does it. That is, our unique competitive edge is the system's ability to see and understand the situation in the field much as a human would, so it maintains full efficiency in collaboration with human drivers. At the end of the day, it all comes down to economics. One human-driven combine can harvest around 20 hectares of crops during one shift. When Cognitive Agro Pilot does the driving, the operators' workload is considerably lower: They don't get tired, can make fewer stops, and take fewer breaks. In practical terms, it means harvesting around 25 to 30 hectares per shift. For a business owner, it means that two combines equipped with our system deliver the performance of three combines without it.
While the combine drives itself, the human operator can make adjustments to the harvesting system to maximize speed and efficiency. Cognitive Pilot
There are now some piecemeal offerings on the market from various agricultural-harvesting companies, but each autonomous feature is implemented as a separate function: driving along a field edge, driving along a row, and so on. We haven't yet seen another industrial system that can drive entirely with computer vision, but one-eyed Vasya showed us that it was possible. So, thinking about cost optimization and solving the task with a minimum set of devices, we decided that for a farmer's AI-based robot assistant, one camera is enough.
The Cognitive Agro Pilot's primary sensor is a single 2-megapixel color video camera that sees a wide area in front of the vehicle, mounted on a bracket near one of the combine's side mirrors. A control unit built around an Nvidia Jetson TX2 computer module is mounted inside the cab; it contains the main stack of autonomy algorithms, processes the video feed, and issues commands to the combine's hydraulic systems to control steering, acceleration, and braking. An integrated display in the cab provides the driver interface and shows warnings and settings. We are not tied to any particular brand: Our retrofit kit works with any combine harvester model in the farmer's fleet. For a combine more than five years old, interfacing with its control system may not be quite so easy (sometimes an additional steering-angle sensor is required), but the installation and calibration can still usually be done within a day, and it takes just 10 minutes to train a new driver.
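Putting the pieces together, the control unit's job boils down to a simple loop: grab a frame, interpret it, and send commands to the hydraulics. The sketch below shows only that structure; every class and method name in it is a hypothetical stand-in, not Cognitive Pilot's actual software interface.

import numpy as np

class Camera:                                    # stand-in for the single side-mirror camera
    def read(self) -> np.ndarray:
        return np.zeros((1080, 1920, 3), dtype=np.uint8)       # placeholder frame

class SceneNet:                                  # stand-in for the onboard neural network
    def infer(self, frame: np.ndarray) -> np.ndarray:
        # Per-pixel classes: 0 = cleared ground, 1 = crop, 2 = obstacle.
        return np.zeros(frame.shape[:2], dtype=np.uint8)

class Hydraulics:                                # stand-in for the combine's control interface
    def send(self, steering: float, speed: float) -> None:
        print(f"steer={steering:+.3f} rad  speed={speed:.1f} km/h")

def control_cycle(cam: Camera, net: SceneNet, hyd: Hydraulics) -> None:
    frame = cam.read()
    labels = net.infer(frame)                                   # understand the scene ahead
    steering = 0.0                                              # edge-following logic would go here
    speed = 0.0 if (labels == 2).any() else 8.0                 # stop if any obstacle is detected
    hyd.send(steering, speed)

control_cycle(Camera(), SceneNet(), Hydraulics())               # prints one neutral command

In the real unit this cycle runs continuously on the Jetson TX2 while the driver keeps an eye on the header.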
Our vision-based system drives the combine, so the operator can focus on the harvest and on adjusting the process to the specific features of the crop. The Cognitive Agro Pilot does all of the steering and maintains a precise distance between rows, minimizing gaps. It looks for obstacles, categorizes them, and forecasts their trajectories if they're moving. If there is time, it warns the driver about an obstacle; otherwise it decides on its own whether to drive around it or slow down. It also coordinates its movement with a grain truck and with other combines when it is part of a formation. The only time the operator is routinely required to drive is to turn the combine around at the end of a run. If you need to turn, go ahead: The Cognitive Agro Pilot releases the controls and starts looking for a new crop edge. As soon as it finds one, the robot says, "Let me do the driving, man." You push the button, and it takes over. Everything is simple and intuitive. And since a run is normally up to 5 kilometers long, these turns account for less than 1 percent of a driver's workload.
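The behavior described above amounts to a small decision policy layered on top of the scene understanding. Here is one way to sketch it in Python; the thresholds, field names, and the policy itself are illustrative assumptions rather than the actual rules the product uses.

from dataclasses import dataclass

@dataclass
class Scene:
    edge_found: bool          # is there an unharvested crop edge to follow?
    edge_offset_px: float     # how far the edge is from where the header wants it
    obstacle_static: bool     # e.g., a pole or a fallen branch in the path
    obstacle_moving: bool     # e.g., a person, a grain truck, another combine
    seconds_to_obstacle: float

def decide(scene: Scene) -> str:
    if not scene.edge_found:
        return "release controls"            # end of the run: hand the turn back to the driver
    if scene.obstacle_moving and scene.seconds_to_obstacle < 3.0:
        return "stop"                        # can't predict the other driver, so stop safely
    if scene.obstacle_static:
        return "warn driver" if scene.seconds_to_obstacle > 10.0 else "swerve or slow down"
    return f"steer {0.002 * scene.edge_offset_px:+.3f} rad"   # normal edge following

# A person walks in front of the header with 2 seconds to spare:
print(decide(Scene(True, 12.0, False, True, 2.0)))   # -> "stop"

As the dust-cloud story below illustrates, when the system cannot be sure what a moving obstacle will do, it takes the cautious branch: stop and hand back control.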
During our pilot project last year, the yield from the same fields increased by 3 to 5 percent due to the ability of the harvester to maintain the cut width without leaving unharvested areas. It increased an additional 3 percent simply because the operators had time to more closely monitor what was going on in front of them, optimizing the harvesting performance. With our copilot, drivers' workloads are very low. They start the system, let go of the steering wheel, and can concentrate on controlling the machinery or checking commodity prices on their phones. Harvesting weeks are a real ordeal for combine drivers, who get no rest except for some sleep at night. In one month they need to earn enough for the upcoming six, so they are exhausted. However, the drivers who were using our solution realized they even had some energy left, and those who chose to work long hours said they could easily work 2 hours more than usual.
Gaining 10 or 15 percent more working hours over the course of the harvest may sound negligible, but it means that a driver has three extra days to harvest the crops. Consequently, if there are days of bad weather (like rain that causes the grain to germinate or fall down), the probability of keeping the crop yield high is a lot greater. And since combine operators get paid by harvested volume, using our system helps them make more money. Ultimately, both drivers and managers say unanimously that harvesting has become easier, and typically the cost of the system (about US $10,000) is paid off in a single season.

Combine drivers quickly get the hang of our technology. After the first few days, many either start to trust the robot as an almighty intelligence or decide to test it to death. Some get the misconception that our robots think like humans and are a little disappointed to see that the system underperforms at night and has trouble driving in dust when multiple combines are traveling in file. Humans can have problems in these situations too, but operators would still grumble: "How can it not see?" A human driver understands that the distance to the combine ahead is about 10 meters and that they are traveling at a constant speed. The dust cloud will blow away in a minute, and everything will be fine. No need to brake. Alex, the driver of the combine ahead, definitely won't brake. Or will he? Since the system hasn't spent years alongside Alex and cannot use life experience to predict his actions, it stops the combine and releases the controls. This is where human intelligence once again wins out over AI.
Turns at the end of each run are also left to human intelligence, for now. This feature never failed to amaze combine drivers but turned out to be the most challenging during tests: The immense width of the header means that a huge number of hypotheses about objects beyond the line of sight of our single camera need to be factored in. To automate this feature, we're waiting for the completion of tests on rugged terrain. We are also experimenting with our own synthetic-aperture radar technology, which can see crop edges and crop rows as radio-frequency images. This does not add much to the total solution cost, and we plan to use radar for advanced versions of our “agrodroids” intended for work in low visibility and at night.
Robot in Disguise
It takes just four parts to transform almost any human-driven combine harvester into a robot. A camera [1] mounted on a side-view mirror watches the field ahead, sending a video stream to a combined computing unit, display, and driver interface [2] in the driver's cab. A neural network analyzes the video to find crop edges and obstacles, and sends commands to the hydraulic unit [3] to control the combine. For older combines, a steering sensor [4] mounted inside a wheel provides directional feedback for precision driving. While Cognitive Pilot's system takes care of the driving, it's the job of the human operator in the cab to optimize the performance of the header [5] to harvest the crop efficiently. Cognitive Pilot
During the summer and autumn of 2020, more than 350 autonomous combines equipped with the Cognitive Agro Pilot system drove across over 160,000 hectares of fields and helped their human supervisors harvest more than 720,000 tonnes of crops, from Kaliningrad on the Baltic Sea to Vladivostok in the Russian Far East. Our robots worked more than 230,000 hours and covered more than 950,000 autonomous kilometers last year. And by the end of 2021, our system will be available in the United States and South America.
Ordinary farmers, the end users of our solutions, may have heard about driverless cars in the news or seen the words "neural network" a couple of times, but that about sums up their AI experience. So it is fascinating to hear them say things like "Look how well the segmentation has worked!" or "The neural network is doing great!" in the driver's cab.
Changing the technological paradigm takes time, so we ensure the widest possible compatibility of our solutions with existing machinery. Undoubtedly, as farmers adapt to the current innovations, we will continuously increase the autonomy of all types of machinery for all kinds of tasks.
A few years ago, I studied the work of the United Nations mission in Rwanda dealing with the issue of chronic child malnutrition. I will never forget the photographs of emaciated children. They made me think of the famine that gripped besieged Leningrad during World War II. Some of my relatives died there, and their diaries are a testament to the fact that there are few endings more horrible than death from starvation. I believe that robotic automation and AI enhancement of agricultural machinery used in high-risk farming areas, or in regions with a shortage of skilled workers, should be the highest priority for all governments concerned with providing an adequate response to global food-security challenges.
This article appears in the September 2021 print issue as "On Russian Farms, the Robotic Revolution Has Begun."