Tag Archives: Team

#439006 Low-Cost Drones Learn Precise Control ...

I’ll admit to having been somewhat skeptical about the strategy of dangling payloads on long tethers for drone delivery. I mean, I get why Wing does it—it keeps the drone and all of its spinny bits well away from untrained users while preserving the capability of making deliveries to very specific areas that may have nearby obstacles. But it also seems like you’re adding some risk, because once your payload is out on that long tether, it’s more or less out of your control in at least two axes. And you can forget about your drone doing anything while this is going on, because who the heck knows what’s going to happen to your payload if the drone starts moving around?

NYU roboticists, that’s who.

This research is by Guanrui Li, Alex Tunchez, and Giuseppe Loianno at the Agile Robotics and Perception Lab (ARPL) at NYU. As you can see from the video, the drone makes keeping rock-solid control over that suspended payload look easy, but it’s very much not, especially considering that everything you see is running onboard the drone itself at 500 Hz—all it takes is an IMU and a downward-facing monocular camera, along with the drone’s Snapdragon processor.

To get this to work, the drone has to be thinking about two things. First, there’s state estimation: figuring out what the drone itself and the payload at the end of the tether are doing at any given moment. The drone works this out by watching how the payload moves using its camera and tracking its own movement with its IMU. Second, there’s predicting what the payload is going to do next, and how that jibes (or not) with what the drone wants to do next. The researchers developed a model predictive control (MPC) system for this, with some added perception constraints to make sure that the behavior of the drone keeps the payload in view of the camera.
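To make that concrete, here is a minimal Python sketch of the kind of receding-horizon control loop involved, boiled down to a planar drone with a pendulum payload. This is not the ARPL controller: the real PCMPC solves a constrained optimization onboard and estimates the payload state from camera images, whereas this toy assumes the state is perfectly known and uses simple random-shooting MPC. The tether length, field-of-view limit, cost weights, and goal are invented values for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy planar model: the drone moves horizontally, the payload hangs on a tether.
# State: [drone position x, drone velocity v, swing angle theta, swing rate omega].
G, TETHER, DT = 9.81, 1.0, 0.02              # gravity, tether length (m), timestep (s)
FOV_LIMIT = np.radians(30)                   # assumed half-angle of the downward camera's view

def step(state, accel):
    """One Euler step of the drone plus its hanging-pendulum payload."""
    x, v, th, om = state
    om_dot = -(G / TETHER) * np.sin(th) - (accel / TETHER) * np.cos(th)
    return np.array([x + v * DT, v + accel * DT, th + om * DT, om + om_dot * DT])

def rollout_cost(state, accels, goal):
    """Cost of a candidate acceleration sequence; infinite if the payload leaves the camera view."""
    cost = 0.0
    for a in accels:
        state = step(state, a)
        if abs(state[2]) > FOV_LIMIT:        # perception constraint: keep the payload visible
            return np.inf
        cost += (state[0] - goal) ** 2 + 0.1 * state[1] ** 2 + 0.5 * state[2] ** 2
    return cost

def mpc_action(state, goal, horizon=20, samples=200, max_accel=3.0):
    """Random-shooting MPC: sample action sequences, keep the cheapest feasible one."""
    candidates = rng.uniform(-max_accel, max_accel, size=(samples, horizon))
    costs = [rollout_cost(state, seq, goal) for seq in candidates]
    return candidates[int(np.argmin(costs))][0]   # receding horizon: apply only the first action

state, goal = np.array([0.0, 0.0, 0.0, 0.0]), 2.0
for _ in range(300):                              # fly toward the goal while damping the swing
    state = step(state, mpc_action(state, goal))
print(f"drone at {state[0]:.2f} m (goal {goal} m), payload swing {np.degrees(state[2]):.1f} deg")
```

The perception constraint shows up as the hard rejection of any candidate action sequence that would swing the payload outside the camera's assumed field of view, which mirrors the visibility guarantee described above.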

At the moment, the top speed of the system is 4 m/s, but it sounds like rather than increasing the speed of a single payload-swinging drone, the next steps will be to make the overall system more complicated by somehow using multiple drones to cooperatively manage tethered payloads that are too big or heavy for one drone to handle alone.

For more on this, we spoke with Giuseppe Loianno, head of the ARPL.

IEEE Spectrum: We've seen some examples of delivery drones delivering suspended loads. How will this work improve their capabilities?

Giuseppe Loianno: For the first time, we jointly designed perception-constrained model predictive control and state estimation approaches to enable the autonomy of a quadrotor with a cable-suspended payload using onboard sensing and computation. The proposed control method guarantees the visibility of the payload in the robot’s camera while respecting the system dynamics and actuator constraints. These are critical design aspects to guarantee safety and resilience for such a complex and delicate task involving the transportation of objects.

The additional challenge involves the fact that we aim to solve the aforementioned problem using a minimal sensor suite for autonomous navigation, made up of a single camera and an IMU. This is an ambitious goal since it concurrently involves estimating the load and the vehicle states. Previous approaches leverage GPS or motion capture systems for state estimation and do not consider the perception and physical constraints when solving the problem. We are confident that our solution will contribute to making autonomous delivery a reality in warehouses or in dense urban areas where the GPS signal is absent or shadowed.

Will it make a difference to delivery systems that use an actuated cable and only leave the load suspended for the delivery itself?

This is certainly an interesting question. We believe that adding an actuated cable will introduce more disadvantages than benefits. Certainly, an actuated cable can be leveraged to compensate for the cable’s swinging motion in windy conditions and/or to increase delivery precision. However, the introduction of additional actuated mechanisms and components comes at the price of increased system mass and inertia. This will reduce the overall flight time and the vehicle’s agility, as well as the system’s resilience during the transportation task. Finally, active mechanisms are also more difficult to design compared to passive ones.

What's challenging about doing all of this on-vehicle?

There are several challenges in solving this problem on board. First, it is very difficult to concurrently run perception and action on such computationally constrained platforms in real time. Second, the first aspect becomes even more challenging if we consider, as in our case, a perception-constrained receding-horizon control problem that aims to guarantee the visibility of the payload during the motion, while concurrently respecting all of the system’s physical and sensing limitations. Finally, it has been challenging to run the entire system at a high rate to fully unleash the system’s agility. We are currently able to reach rates of 500 Hz.

Can your method adapt to loads of varying shapes, sizes, and masses? What about aerodynamics or flying in wind?

Technically, our approach can easily be adapted to varying object sizes and masses. Our previous contributions have already shown the ability to estimate online changes in the vehicle/load configuration and can potentially be used to operate the proposed system in dynamic conditions, where the load’s characteristics are unknown and/or may vary across consecutive flights. This can be useful for both package delivery and warehouse operations, where different types of objects need to be transported or manipulated.

The aerodynamics problem is a great point. Overall, our past work has investigated the aerodynamics of wind disturbances for a single robot without a load. Formulating these problems for the proposed system is challenging and is still an open research question. We have some ideas to approach this problem combining Bayesian estimation techniques with more recent machine learning approaches and we will tackle it in the near future.

What are the limitations on the performance of the system? How fast and agile can it be with a suspended payload?

The limits of performance are set by the actuation and sensing systems. Our approach intrinsically considers both the physical and sensing limitations of our system. From a sensing and computation perspective, we believe we are close to the limits, with speeds of up to 4 m/s. Faster speeds can potentially introduce motion blur while decreasing the load-tracking precision. Moreover, faster motions will also increase the aerodynamic disturbances we just mentioned. In the future, modeling these phenomena and incorporating them into the proposed solution could further push the system’s agility.

Your paper talks about extending this approach to multiple vehicles cooperatively transporting a payload. Can you tell us more about that?

We are currently working on a distributed perception and control approach for cooperative transportation. We already have some very exciting results that we will share with you very soon! Overall, we can employ a team of aerial robots to cooperatively transport a payload to increase the payload capacity and endow the system with additional resilience in case of vehicle failures. A cooperative cable-suspended transportation system also allows the load’s position and orientation to be controlled concurrently and independently, which is not possible using rigid connections alone. We believe that our approach will have a strong impact in real-world settings for delivery and construction in warehouses and in GPS-denied environments such as dense urban areas. Moreover, in post-disaster scenarios, a team of physically interconnected aerial robots can deliver supplies and establish communication in areas where the GPS signal is intermittent or unavailable.

PCMPC: Perception-Constrained Model Predictive Control for Quadrotors with Suspended Loads using a Single Camera and IMU, by Guanrui Li, Alex Tunchez, and Giuseppe Loianno from NYU, will be presented (virtually) at ICRA 2021.


Posted in Human Robots

#439000 Can AI Stop People From Believing Fake ...

Machine learning algorithms provide a way to detect misinformation based on writing style and how articles are shared.

On topics as varied as climate change and the safety of vaccines, you will find a wave of misinformation all over social media. Trust in conventional news sources may seem lower than ever, but researchers are working on ways to give people more insight into whether they can believe what they read. Researchers have been testing artificial intelligence (AI) tools that could help filter legitimate news. But how trustworthy is AI when it comes to stopping the spread of misinformation?

Researchers at the Rensselaer Polytechnic Institute (RPI) and the University of Tennessee collaborated to study the role of AI in helping people identify whether the news they’re reading is legitimate or not.

The research paper, “Tailoring Heuristics and Timing AI Interventions for Supporting News Veracity Assessments,” was published in Computers in Human Behavior Reports. It discussed how crowdsourcing marketplace Amazon Mechanical Turk (AMT) can be used to identify misinformation for fresh news and specific heuristics, which are rules of thumb used to process information and consider its veracity. In other words, heuristics are essentially “shortcuts for decisions,” explained Dorit Nevo, an associate professor at RPI’s Lally School of Management and a lead author for the paper.

The study found that AI would be successful in flagging false stories only if the reader did not already have an opinion on the topic, Nevo said. When study subjects were set in their beliefs, confirmation bias kept them from reassessing their views.

Nevo said the first part of the project focused on whether subjects could detect misinformation around climate change and vaccines like the one designed to prevent chicken pox. Then, beginning in April 2020, her team studied how people responded to news related to COVID-19.

“With COVID-19, there was a significant difference,” Nevo said. They found that about 72 percent of respondents could identify misinformation about the coronavirus without heuristic clues, and roughly 93 percent could be convinced by the researchers’ heuristics that the content was fake.

Examples of heuristic clues include text with too many capital letters or the use of strong language, Nevo said.

There were two types of heuristics mentioned in the team’s paper: objective heuristics and source heuristics. They put a statement at the top of each article the subjects read; it instructed them to read the article and indicate whether they believed its central thesis.

“We either put a statement that says the AI finds this article reliable and accurate based on the objective heuristics, or we said the AI finds the source reliable,” Nevo said. “So that's the source heuristic.”

In her research on heuristics, Nevo found that people’s thinking takes one of two paths: The first path is to read the article, think about it and decide if they believe it; the second is to consider the source and what others think about the news, and decide whether to believe it before reading it.

Image: Dorit Nevo/RPI/IEEE Spectrum

Researchers at RPI studied the role of heuristics and AI in determining whether people thought news was credible.

Another research paper, “Timing Matters When Correcting Fake News,” published in the Proceedings of the National Academy of Sciences by researchers at Harvard University, differed from the RPI researchers in its findings. While Nevo and her collaborators found that it’s easier to convince people that a story is fake news before they read it, the Harvard researchers, led by Nadia M. Brashier, a psychologist and neuroscientist, discovered that a fact-check can still debunk misinformation even after people have read the headlines. When study subjects read true or false labels after reading a headline, that resulted in a 25.3 percent reduction in “subsequent misclassification,” when compared to headlines with no tag, Brashier and her team found.

In the end, fighting misinformation will require both computing and human efforts such as policy changes, says Benjamin D. Horne, an assistant professor of Information Sciences at the University of Tennessee and one of Nevo’s co-authors. He says the RPI-Tennessee work was inspired by AI tools he designed previously. Horne was previously a research assistant at RPI, where he developed machine learning (ML) algorithms that can detect partial truths as well as decontextualized truths and out-of-date information.

“Our algorithms are trained on source-level behavior, both when using the textual content of an article and the network of other news sources that it draws news from,” Horne said. “We have found that these two types of features together are quite good at distinguishing between sources labeled as reliable or unreliable by external news source ratings.”

The machine learning algorithms analyze the writing style and the content-sharing behavior of news outlets, Horne said. Researchers trained a supervised ML algorithm called Random Forest, an ensemble classifier that combines many decision trees.
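As a rough illustration of that kind of pipeline, here is a minimal scikit-learn sketch that trains a Random Forest on a few hand-rolled writing-style features. The headlines, labels, and features below are invented stand-ins; Horne's actual system works on full articles plus the network of sources an outlet draws from, neither of which is reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical labeled headlines standing in for real articles (1 = unreliable style, 0 = reliable).
texts = [
    "SHOCKING!!! Miracle cure THEY don't want you to know about",
    "Scientists report incremental progress in vaccine trial",
    "You WON'T BELIEVE what this politician just did!!!",
    "Central bank holds interest rates steady amid slow growth",
    "EXPOSED: the REAL truth behind climate data!!!",
    "Study finds modest link between diet and heart health",
] * 20
labels = [1, 0, 1, 0, 1, 0] * 20

def style_features(text):
    """Simple writing-style signals: all-caps ratio, exclamation density, mean word length."""
    words = text.split()
    return [
        sum(w.isupper() and len(w) > 1 for w in words) / len(words),
        text.count("!") / len(text),
        float(np.mean([len(w) for w in words])),
    ]

X = np.array([style_features(t) for t in texts])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)

# Random Forest: an ensemble of decision trees voting on each example.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))  # trivially high on this duplicated toy data
```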

AI for Detecting Fake News

So, what’s the potential for AI to be successful in detecting misinformation?

“The tools we have developed, and other tools developed in this area, have fairly high accuracy in lab settings,” says Horne. “For example, our most recent technical work showed around 83% accuracy in predicting when the source of a news article is reliable or unreliable.”

Despite the effectiveness of algorithms, old-fashioned fact-checking by journalists will still be required to combat fake news. AI could filter the information for fact-checkers to verify, according to Horne.

“AI tools are great at dealing with high quantities of information at fast speeds but lack the nuanced analysis that a journalist or fact-checker can provide,” Horne said. “I see a future where the two work together.”

Posted in Human Robots

#438982 Quantum Computing and Reinforcement ...

Deep reinforcement learning is having a superstar moment.

Powering smarter robots. Simulating human neural networks. Trouncing physicians at medical diagnoses and crushing humanity’s best gamers at Go and Atari. While far from achieving the flexible, quick thinking that comes naturally to humans, this powerful machine learning idea seems unstoppable as a harbinger of better thinking machines.

Except there’s a massive roadblock: these algorithms take forever to run. Because the concept behind them is based on trial and error, a reinforcement learning AI “agent” only learns after being rewarded for its correct decisions. For complex problems, the time it takes an AI agent to try and fail to learn a solution can quickly become untenable.

But what if you could try multiple solutions at once?

This week, an international collaboration led by Dr. Philip Walther at the University of Vienna took the “classic” concept of reinforcement learning and gave it a quantum spin. They designed a hybrid AI that relies on both quantum and run-of-the-mill classic computing, and showed that—thanks to quantum quirkiness—it could simultaneously screen a handful of different ways to solve a problem.

The result is a reinforcement learning AI that learned over 60 percent faster than its non-quantum-enabled peers. This is one of the first tests that shows adding quantum computing can speed up the actual learning process of an AI agent, the authors explained.

Although only challenged with a “toy problem” in the study, the hybrid AI, once scaled, could impact real-world problems such as building an efficient quantum internet. The setup “could readily be integrated within future large-scale quantum communication networks,” the authors wrote.

The Bottleneck
Learning from trial and error comes intuitively to our brains.

Say you’re trying to navigate a new convoluted campground without a map. The goal is to get from the communal bathroom back to your campsite. Dead ends and confusing loops abound. We tackle the problem by deciding to turn either left or right at every branch in the road. One will get us closer to the goal; the other leads to a half hour of walking in circles. Eventually, our brain chemistry rewards correct decisions, so we gradually learn the correct route. (If you’re wondering…yeah, true story.)

Reinforcement learning AI agents operate in a similar trial-and-error way. As a problem becomes more complex, the number—and time—of each trial also skyrockets.
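A back-of-the-envelope sketch shows why. Suppose the campground has a handful of branches, the reward arrives only if the entire route is correct, and the agent can do nothing smarter than guess. The number of trials needed roughly doubles with every extra branch (the setup and numbers below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def trials_until_reward(branches):
    """Pure trial and error: guess left/right at every branch of the campground,
    and get feedback only when an entire route reaches the campsite."""
    correct_route = rng.integers(2, size=branches)      # the single rewarded route
    trials = 0
    while True:
        trials += 1
        guess = rng.integers(2, size=branches)
        if np.array_equal(guess, correct_route):         # the reward arrives only at the very end
            return trials

for branches in (4, 8, 12):
    runs = [trials_until_reward(branches) for _ in range(10)]
    print(f"{branches:>2} branches: ~{int(np.mean(runs)):>5} trials to stumble on the reward")
```

Real reinforcement learning agents are smarter than blind guessing, but with sparse rewards this exponential flavor is exactly what makes complex problems so slow.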

“Even in a moderately realistic environment, it may simply take too long to rationally respond to a given situation,” explained study author Dr. Hans Briegel at the Universität Innsbruck in Austria, who previously led efforts to speed up AI decision-making using quantum mechanics. If there’s pressure that allows “only a certain time for a response, an agent may then be unable to cope with the situation and to learn at all,” he wrote.

Many attempts have tried speeding up reinforcement learning. Giving the AI agent a short-term “memory.” Tapping into neuromorphic computing, which better resembles the brain. In 2014, Briegel and colleagues showed that a “quantum brain” of sorts can help propel an AI agent’s decision-making process after learning. But speeding up the learning process itself has eluded our best attempts.

The Hybrid AI
The new study went straight for that previously untenable jugular.

The team’s key insight was to tap into the best of both worlds—quantum and classical computing. Rather than building an entire reinforcement learning system using quantum mechanics, they turned to a hybrid approach that could prove to be more practical. Here, the AI agent uses quantum weirdness as it’s trying out new approaches—the “trial” in trial and error. The system then passes the baton to a classical computer to give the AI its reward—or not—based on its performance.

At the heart of the quantum “trial” process is a quirk called superposition. Stay with me. Our computers are powered by electrons, which can represent only two states—0 or 1. Quantum mechanics is far weirder, in that photons (particles of light) can simultaneously be both 0 and 1, with a slightly different probability of “leaning towards” one or the other.

This noncommittal oddity is part of what makes quantum computing so powerful. Take our reinforcement learning example of navigating a new campsite. In our classic world, we—and our AI—need to decide between turning left or right at an intersection. In a quantum setup, however, the AI can (in a sense) turn left and right at the same time. So when searching for the correct path back to home base, the quantum system has a leg up in that it can simultaneously explore multiple routes, making it far faster than conventional, consecutive trial and error.
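The team's photonic processor obviously can't be reproduced in a few lines of code, but the flavor of the speedup can be sketched with a plain state-vector simulation of amplitude amplification, the Grover-style search that boosts the probability of sampling a marked (here, rewarded) option. Treat this as an illustration of the principle rather than the paper's actual protocol; the number of routes and the marked index are arbitrary.

```python
import numpy as np

# Plain state-vector simulation of amplitude amplification over 8 candidate "routes",
# one of which is rewarded. This is not the paper's photonic hardware, just the math.
N, rewarded = 8, 5
amps = np.full(N, 1 / np.sqrt(N))            # uniform superposition over all routes

def amplification_step(amps, marked):
    amps = amps.copy()
    amps[marked] *= -1                        # oracle: flip the sign of the rewarded route
    return 2 * amps.mean() - amps             # diffusion: reflect every amplitude about the mean

print(f"classical random guess: P(correct) = {1 / N:.3f}")
for k in range(1, 4):
    amps = amplification_step(amps, rewarded)
    print(f"after {k} amplification step(s): P(correct) = {amps[rewarded] ** 2:.3f}")
```

A couple of amplification steps push the success probability from 1 in 8 to above 90 percent, and over-applying them starts to undo the gain, which is why the number of steps has to be tuned.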

“As a consequence, an agent that can explore its environment in superposition will learn significantly faster than its classical counterpart,” said Briegel.

It’s not all theory. To test out their idea, the team turned to a programmable chip called a nanophotonic processor. Think of it as a CPU-like computer chip, but it processes particles of light—photons—rather than electricity. These light-powered chips have been a long time in the making. Back in 2017, for example, a team from MIT built a fully optical neural network into an optical chip to bolster deep learning.

The chips aren’t all that exotic. Nanophotonic processors act kind of like our eyeglasses, which can carry out complex calculations that transform light that passes through them. In the glasses case, they let people see better. For a light-based computer chip, it allows computation. Rather than using electrical cables, the chips use “wave guides” to shuttle photons and perform calculations based on their interactions.

The “error” or “reward” part of the new hardware comes from a classical computer. The nanophotonic processor is coupled to a traditional computer, where the latter provides the quantum circuit with feedback—that is, whether to reward a solution or not. This setup, the team explains, allows them to more objectively judge any speed-ups in learning in real time.

In this way, a hybrid reinforcement learning agent alternates between quantum and classical computing, trying out ideas in wibbly-wobbly “multiverse” land while obtaining feedback in grounded, classic physics “normality.”

A Quantum Boost
In simulations using 10,000 AI agents and actual experimental data from 165 trials, the hybrid approach, when challenged with a more complex problem, showed a clear leg up.

The key word is “complex.” The team found that if an AI agent has a high chance of figuring out the solution anyway—as for a simple problem—then classical computing works pretty well. The quantum advantage blossoms when the task becomes more complex or difficult, allowing quantum mechanics to fully flex its superposition muscles. For these problems, the hybrid AI was 63 percent faster at learning a solution compared to traditional reinforcement learning, decreasing its learning effort from 270 guesses to 100.

Now that scientists have shown a quantum boost for reinforcement learning speeds, the race for next-generation computing is even more lit. Photonics hardware required for long-range light-based communications is rapidly shrinking, while improving signal quality. The partial-quantum setup could “aid specifically in problems where frequent search is needed, for example, network routing problems” that are prevalent in keeping the internet running smoothly, the authors wrote. With a quantum boost, reinforcement learning may be able to tackle far more complex problems—those in the real world—than currently possible.

“We are just at the beginning of understanding the possibilities of quantum artificial intelligence,” said lead author Walther.

Image Credit: Oleg Gamulinskiy from Pixabay

Posted in Human Robots

#438886 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
This Chip for AI Works Using Light, Not Electrons
Will Knight | Wired
“As demand for artificial intelligence grows, so does hunger for the computer power needed to keep AI running. Lightmatter, a startup born at MIT, is betting that AI’s voracious hunger will spawn demand for a fundamentally different kind of computer chip—one that uses light to perform key calculations. ‘Either we invent new kinds of computers to continue,’ says Lightmatter CEO Nick Harris, ‘or AI slows down.’”

BIOTECH
With This CAD for Genomes, You Can Design New Organisms
Eliza Strickland | IEEE Spectrum
“Imagine being able to design a new organism as easily as you can design a new integrated circuit. That’s the ultimate vision behind the computer-aided design (CAD) program being developed by the GP-write consortium. ‘We’re taking the same things we’d do for design automation in electronics, and applying them to biology,’ says Doug Densmore, an associate professor of electrical and computer engineering at Boston University.”

BIOLOGY
Hey, So These Sea Slugs Decapitate Themselves and Grow New Bodies
Matt Simon | Wired
“That’s right: It pulled a Deadpool. Just a few hours after its self-decapitation, the head began dragging itself around to feed. After a day, the neck wound had closed. After a week, it started to regenerate a heart. In less than a month, the whole body had grown back, and the disembodied slug was embodied once more.”

INTERNET
Move Over, Deep Nostalgia, This AI App Can Make Kim Jong-un Sing ‘I Will Survive’
Helen Sullivan | The Guardian
“If you’ve ever wanted to know what it might be like to see Kim Jong-un let loose at karaoke, your wish has been granted, thanks to an app that lets users turn photographs of anyone—or anything remotely resembling a face—into uncanny AI-powered videos of them lip syncing famous songs.”

ENERGY
GM Unveils Plans for Lithium-Metal Batteries That Could Boost EV Range
Steve Dent | Engadget
“GM has released more details about its next-generation Ultium batteries, including plans for lithium-metal (Li-metal) technology to boost performance and energy density. The automaker announced that it has signed an agreement to work with SolidEnergy Systems (SES), an MIT spinoff developing prototype Li-metal batteries with nearly double the capacity of current lithium-ion cells.”

TECHNOLOGY
Xi’s Gambit: China Plans for a World Without American Technology
Paul Mozur and Steven Lee Myers | The New York Times
“China is freeing up tens of billions of dollars for its tech industry to borrow. It is cataloging the sectors where the United States or others could cut off access to crucial technologies. And when its leaders released their most important economic plans last week, they laid out their ambitions to become an innovation superpower beholden to none.”

SCIENCE
Imaginary Numbers May Be Essential for Describing Reality
Charlie Wood | Wired
“…physicists may have just shown for the first time that imaginary numbers are, in a sense, real. A group of quantum theorists designed an experiment whose outcome depends on whether nature has an imaginary side. Provided that quantum mechanics is correct—an assumption few would quibble with—the team’s argument essentially guarantees that complex numbers are an unavoidable part of our description of the physical universe.”

PHILOSOPHY
What Is Life? Its Vast Diversity Defies Easy Definition
Carl Zimmer | Quanta
“‘It is commonly said,’ the scientists Frances Westall and André Brack wrote in 2018, ‘that there are as many definitions of life as there are people trying to define it.’ …As an observer of science and of scientists, I find this behavior strange. It is as if astronomers kept coming up with new ways to define stars. …With scientists adrift in an ocean of definitions, philosophers rowed out to offer lifelines.”

Image Credit: Kir Simakov / Unsplash

Posted in Human Robots

#438801 This AI Thrashes the Hardest Atari Games ...

Learning from rewards seems like the simplest thing. I make coffee, I sip coffee, I’m happy. My brain registers “brewing coffee” as an action that leads to a reward.

That’s the guiding insight behind deep reinforcement learning, a family of algorithms that famously smashed most of Atari’s gaming catalog and triumphed over humans in strategy games like Go. Here, an AI “agent” explores the game, trying out different actions and registering ones that let it win.

Except it’s not that simple. “Brewing coffee” isn’t one action; it’s a series of actions spanning several minutes, where you’re only rewarded at the very end. By just tasting the final product, how do you learn to fine-tune grind coarseness, water to coffee ratio, brewing temperature, and a gazillion other factors that result in the reward—tasty, perk-me-up coffee?

That’s the problem with “sparse rewards,” which are ironically very abundant in our messy, complex world. We don’t immediately get feedback from our actions—no video-game-style dings or points for just grinding coffee beans—yet somehow we’re able to learn and perform an entire sequence of arm and hand movements while half-asleep.

This week, researchers from Uber AI and OpenAI teamed up to bestow this talent on AI.

The trick is to encourage AI agents to “return” to a previous step, one that’s promising for a winning solution. The agent then keeps a record of that state, reloads it, and branches out again to intentionally explore other solutions that may have been left behind on the first go-around. Video gamers are likely familiar with this idea: live, die, reload a saved point, try something else, repeat for a perfect run-through.

The new family of algorithms, appropriately dubbed “Go-Explore,” smashed notoriously difficult Atari games like Montezuma’s Revenge that were previously unsolvable by its AI predecessors, while trouncing human performance along the way.

It’s not just games and digital fun. In a computer simulation of a robotic arm, the team found that installing Go-Explore as its “brain” allowed it to solve a challenging series of actions when given very sparse rewards. Because the overarching idea is so simple, the authors say, it can be adapted and expanded to other real-world problems, such as drug design or language learning.

Growing Pains
How do you reward an algorithm?

Rewards are very hard to craft, the authors say. Take the problem of asking a robot to go to a fridge. A sparse reward will only give the robot “happy points” if it reaches its destination, which is similar to asking a baby, with no concept of space and danger, to crawl through a potential minefield of toys and other obstacles towards a fridge.

“In practice, reinforcement learning works very well, if you have very rich feedback, if you can tell, ‘hey, this move is good, that move is bad, this move is good, that move is bad,’” said study author Joost Huizinga. However, in situations that offer very little feedback, “rewards can intentionally lead to a dead end. Randomly exploring the space just doesn’t cut it.”

The other extreme is providing denser rewards. In the same robot-to-fridge example, you could frequently reward the bot as it goes along its journey, essentially helping “map out” the exact recipe to success. But that’s troubling as well. Over-holding an AI’s hand could result in an extremely rigid robot that ignores new additions to its path—a pet, for example—leading to dangerous situations. It’s a deceptive AI solution that seems effective in a simple environment, but crashes in the real world.
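The difference between the two reward styles is easy to state in code. The sketch below is a hypothetical grid-world version of the robot-to-fridge example, with a sparse reward that only pays out on arrival and a dense, hand-shaped alternative that pays out for every step of progress; the grid and goal cell are made up.

```python
import numpy as np

FRIDGE = np.array([5, 5])   # goal cell in a made-up 2-D kitchen grid

def sparse_reward(position):
    """'Happy points' only on arrival; silence everywhere else."""
    return 1.0 if np.array_equal(position, FRIDGE) else 0.0

def dense_reward(position):
    """Hand-holding: reward every step for being closer to the fridge.
    Easier to learn from, but it bakes in one fixed notion of the 'right' path."""
    return -float(np.abs(FRIDGE - position).sum())

for pos in ([0, 0], [4, 5], [5, 5]):
    p = np.array(pos)
    print(pos, "sparse:", sparse_reward(p), "dense:", dense_reward(p))
```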

What we need are AI agents that can tackle both problems, the team said.

Intelligent Exploration
The key is to return to the past.

For AI, motivation usually comes from “exploring new or unusual situations,” said Huizinga. It’s efficient, but comes with significant downsides. For one, the AI agent could prematurely stop going back to promising areas because it thinks it had already found a good solution. For another, it could simply forget a previous decision point because of the mechanics of how it probes the next step in a problem.

For a complex task, the end result is an AI that randomly stumbles around towards a solution while ignoring potentially better ones.

“Detaching from a place that was previously visited after collecting a reward doesn’t work in difficult games, because you might leave out important clues,” Huizinga explained.

Go-Explore solves these problems with a simple principle: first return, then explore. In essence, the algorithm saves different approaches it previously tried and loads promising save points—ones more likely to lead to victory—to explore further.

Digging a bit deeper, the AI stores screen caps from a game. It then analyzes the saved screens and groups images that look alike into a single promising “save point” to return to. Rinse and repeat. The AI tries to maximize its final score in the game, and updates its save points when it achieves a new record score. Because Atari doesn’t usually allow people to revisit any random point, the team used an emulator, which is a kind of software that mimics the Atari system but with custom abilities such as saving and reloading at any time.
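Stripped to its bones, that loop looks something like the toy sketch below: a small deterministic maze stands in for the Atari emulator, grid positions stand in for the grouped screen caps, and "restoring" a save point is just teleporting back, since the toy world is deterministic. The maze layout, burst length, and selection rule are all invented for illustration; the real Go-Explore uses domain-specific cell representations and smarter selection heuristics.

```python
import random

# Toy "first return, then explore" on a small maze with a single far-away goal.
# Cells are just grid positions here; the real Go-Explore groups downscaled Atari
# frames into cells and restores them through emulator save-states.
WALLS = {(1, 1), (1, 2), (1, 3), (3, 1), (3, 2), (3, 3)}
SIZE, START, GOAL = 5, (0, 0), (4, 4)
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(pos, move):
    nxt = (pos[0] + move[0], pos[1] + move[1])
    if nxt in WALLS or not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE):
        return pos                                     # bump into a wall or the edge: stay put
    return nxt

random.seed(0)
archive = {START: {"trajectory": [], "visits": 0}}     # cell -> how to get back, plus stats
for iteration in range(2000):
    # 1) select a promising cell, favoring ones we have returned to least often
    cell = min(archive, key=lambda c: archive[c]["visits"])
    archive[cell]["visits"] += 1
    # 2) "first return": restore the saved state (trivial in a deterministic toy world)
    pos, trajectory = cell, list(archive[cell]["trajectory"])
    # 3) "then explore": branch out with a short burst of random actions
    for _ in range(10):
        move = random.choice(MOVES)
        pos = step(pos, move)
        trajectory = trajectory + [move]
        if pos not in archive or len(trajectory) < len(archive[pos]["trajectory"]):
            archive[pos] = {"trajectory": trajectory, "visits": 0}
    if GOAL in archive:
        print(f"reached the goal after {iteration + 1} exploration bursts,"
              f" via a {len(archive[GOAL]['trajectory'])}-move route")
        break
```

The essential ingredients are there: an archive of reachable states, a bias toward returning to under-explored ones, and exploration that always starts from somewhere promising rather than from scratch.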

The trick worked like magic. When pitted against 55 Atari games in the OpenAI gym, now commonly used to benchmark reinforcement learning algorithms, Go-Explore knocked out state-of-the-art AI competitors over 85 percent of the time.

It also crushed games previously unbeatable by AI. Montezuma’s Revenge, for example, requires you to move Pedro, the blocky protagonist, through a labyrinth of underground temples while evading obstacles such as traps and enemies and gathering jewels. One bad jump could derail the path to the next level. It’s a perfect example of sparse rewards: you need a series of good actions to get to the reward—advancing onward.

Go-Explore didn’t just beat all levels of the game, a first for AI. It also scored higher than any previous record for reinforcement learning algorithms at lower levels while toppling the human world record.

Outside a gaming environment, Go-Explore was also able to boost the performance of a simulated robot arm. While it’s easy for humans to follow high-level guidance like “put the cup on this shelf in a cupboard,” robots often need explicit training—from grasping the cup to recognizing a cupboard, moving towards it while avoiding obstacles, and learning motions to not smash the cup when putting it down.

Here, similar to the real world, the digital robot arm was only rewarded when it placed the cup onto the correct shelf, out of four possible shelves. When pitted against another algorithm, Go-Explore quickly figured out the movements needed to place the cup, while its competitor struggled with even reliably picking the cup up.

Combining Forces
By itself, the “first return, then explore” idea behind Go-Explore is already powerful. The team thinks it can do even better.

One idea is to change the mechanics of save points. Rather than reloading saved states through the emulator, it’s possible to train a neural network to do the same, without needing to relaunch a saved state. It’s a potential way to make the AI even smarter, the team said, because it can “learn” to overcome one obstacle once, instead of solving the same problem again and again. The downside? It’s much more computationally intensive.

Another idea is to combine Go-Explore with an alternative form of learning, called “imitation learning.” Here, an AI observes human behavior and mimics it through a series of actions. Combined with Go-Explore, said study author Adrien Ecoffet, this could make more robust robots capable of handling all the complexity and messiness in the real world.

To the team, the implications go far beyond Go-Explore. The concept of “first return, then explore” seems to be especially powerful, suggesting “it may be a fundamental feature of learning in general.” The team said, “Harnessing these insights…may be essential…to create generally intelligent agents.”

Image Credit: Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O. Stanley, and Jeff Clune

Posted in Human Robots