#429972 How to Build a Mind? This Theory May ...
From time to time, the Singularity Hub editorial team unearths a gem from the archives and wants to share it all over again. It's usually a piece that was popular back then and we think is still relevant now. This is one of those articles. It was originally published June 19, 2016. We hope you enjoy it!
How do intelligent minds learn?
Consider a toddler navigating her day, bombarded by a kaleidoscope of experiences. How does her mind discover what’s normal happenstance and begin building a model of the world? How does she recognize unusual events and incorporate them into her worldview? How does she understand new concepts, often from just a single example?
These are the same questions machine learning scientists ask as they inch closer to AI that matches — or even beats — human performance. Many of AI’s recent victories — IBM Watson against Ken Jennings, Google’s AlphaGo versus Lee Sedol — are rooted in network architectures inspired by multi-layered processing in the human brain.
In a review paper, published in Trends in Cognitive Sciences, scientists from Google DeepMind and Stanford University penned a long-overdue update on a prominent theory of how humans and other intelligent animals learn.
In broad strokes, the Complementary Learning Systems (CLS) theory states that the brain relies on two systems that allow it to rapidly soak in new information, while maintaining a structured model of the world that’s resilient to noise.
“The core principles of CLS have broad relevance … in understanding the organization of memory in biological systems,” wrote the authors in the paper.
What’s more, the theory’s core principles — already implemented in recent themes in machine learning — will no doubt guide us towards designing agents with artificial intelligence, they wrote.
Dynamic Duo
In 1995, a team of prominent psychologists sought to explain a memory phenomenon: patients with damage to their hippocampus could no longer form new memories but had full access to remote memories and concepts from their past.
Given the discrepancy, the team reasoned that new learning and old knowledge likely relied on two separate learning systems. Empirical evidence soon pointed to the hippocampus as the site of new learning, and the cortex — the outermost layer of the brain — as the seat of remote memories.
In a landmark paper, they formalized their ideas into the CLS theory.
According to CLS, the cortex is the memory warehouse of the brain. Rather than storing single experiences or fragmented knowledge, it serves as a well-organized scaffold that gradually accumulates general concepts about the world.
This idea, wrote the authors, was inspired by evidence from early AI research.
Experiments with multi-layer neural nets, the precursors to today’s powerful deep neural networks, showed that, with training, the artificial learning systems gradually learned to extract structure from the training data by adjusting connection weights — the computer equivalent to neural connections in the brain.
Put simply, the layered structure of the networks allows them to gradually distill individual experiences (or examples) into high-level concepts.
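To make the weight-adjustment idea concrete, here is a minimal sketch (our toy example, not from the paper) of a two-layer network learning the classic XOR rule by gradient descent, gradually shaping its connection weights until the hidden layer captures the structure in the data:

```python
import numpy as np

# Toy illustration: a two-layer network extracts structure (the XOR rule,
# which no single-layer network can represent) purely by nudging weights.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden connections
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output connections
b2 = np.zeros(1)

lr, losses = 0.3, []
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)              # hidden-layer activations
    Y = H @ W2 + b2                       # linear output
    E = Y - T
    losses.append(float(np.mean(E ** 2)))
    # Backpropagate the error and adjust every connection weight slightly.
    dY = 2 * E / len(X)
    dW2, db2 = H.T @ dY, dY.sum(axis=0)
    dZ = (dY @ W2.T) * (1 - H ** 2)
    dW1, db1 = X.T @ dZ, dZ.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

After training, the loss falls far below the 0.25 a structure-blind guesser would achieve, which is the "distilling experiences into concepts" step in miniature.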
Similar to deep neural nets, the cortex is made up of multiple layers of neurons interconnected with each other, with several input and output layers. It readily receives data from other brain regions through input layers and distills them into databases (“prior knowledge”) to draw upon when needed.
“According to the theory, such networks underlie acquired cognitive abilities of all types in domains as diverse as perception, language, semantic knowledge representation and skilled action,” wrote the authors.
Perhaps unsurprisingly, the cortex is often touted as the basis of human intelligence.
Yet this system isn’t without fault. For one, it’s painfully slow. Since a single experience is considered a single “sample” in statistics, the cortex needs to aggregate over years of experience in order to build an accurate model of the world.
Another issue arises after the network matures. Information stored in the cortex is relatively faithful and stable. It’s a blessing and a curse. Consider when you need to dramatically change your perception of something after a single traumatic incident. It pays to be able to update your cortical database without having to go through multiple similar events.
But even the update process itself could radically disrupt the existing network. Jamming new knowledge into a multi-layer network, without regard for existing connections, results in intolerable changes to the network. The consequences are so dire that scientists call the phenomenon “catastrophic interference.”
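Catastrophic interference is easy to demonstrate with even the simplest learner. In this toy sketch (our illustration, not from the paper), a model is fit to task A, then trained only on task B; because the new updates ignore existing structure, the task-A knowledge is obliterated:

```python
import numpy as np

# Minimal sketch of catastrophic interference: sequential training on a
# second task overwrites the very weights that encoded the first task.
x = np.linspace(0.1, 1.0, 50)

def train(w, targets, lr=0.1, steps=500):
    for _ in range(steps):
        w -= lr * np.mean(2 * (w * x - targets) * x)  # gradient of MSE
    return w

def error(w, targets):
    return float(np.mean((w * x - targets) ** 2))

w = 0.0
w = train(w, 2 * x)             # task A: learn y = 2x
err_A_before = error(w, 2 * x)  # essentially zero

w = train(w, -1 * x)            # task B only, no interleaving with A
err_A_after = error(w, 2 * x)   # task A performance collapses
```

The remedy, as the next sections explain, is interleaving: mixing replayed old experience in with the new, rather than overwriting.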
Thankfully, we have a second learning system that complements the cortex.
Unlike the slow-learning cortex, the hippocampus concerns itself with breaking news. Not only does it encode a specific event (for example, drinking your morning coffee), it also jots down the context in which the event occurred (you were in your bed checking email while drinking coffee). This lets you easily distinguish between similar events that happened at different times.
The hippocampus can encode and delineate detailed memories, even remarkably similar ones, because of its peculiar connection pattern. When information flows into the structure, it activates a different neural activity pattern for each experience in the downstream pathway. Different network pattern; different memory.
In a way, the hippocampus learning system is the antithesis of its cortical counterpart: it’s fast, very specific and tailored to each individual experience. Yet the two are inextricably linked: new experiences, temporarily stored in the hippocampus, are gradually integrated into the cortical knowledge scaffold so that new learning becomes part of the databank.
But how do connections from one neural network “jump” to another?
System to System
The original CLS theory didn’t yet have an answer. In the new paper, the authors synthesized findings from recent experiments and pointed out one way system transfer could work.
Scientists don’t yet have all the answers, but the process seems to happen during rest, including sleep. By recording brain activity of sleeping rats that had been trained on a certain task the day before, scientists repeatedly found that their hippocampi produced a type of electrical activity called sharp-wave ripples (SWR) that propagate to the cortex.
When examined closely, the ripples were actually “replays” of the same neural pattern that the animal had generated during learning, but sped up by a factor of about 20. Picture fast-forwarding through a recording — that’s essentially what the hippocampus does during downtime. This speeding up process compresses peaks of neural activity into tighter time windows, which in turn boosts plasticity between the hippocampus and the cortex.
In this way, changes in the hippocampal network can correspondingly tweak neural connections in the cortex.
Compared with the wholesale rewiring that produces catastrophic interference, SWRs offer a much gentler way to integrate new information into the cortical database.
Replay also has some other perks. You may remember that the cortex requires a lot of training data to build its concepts. Since a single event is often replayed many times during a sleep episode, SWRs offer a deluge of training data to the cortex.
SWRs also offer a way for the brain to “hack reality” in a way that benefits the person. The hippocampus doesn’t faithfully replay all recent activation patterns. Instead, it picks rewarding events and selectively replays them to the cortex.
This means that rare but meaningful events might be given privileged status, allowing them to preferentially reshape cortical learning.
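In code, this kind of reward-weighted replay can be sketched in a few lines (a toy illustration; the event names and reward values are invented):

```python
import random

# Toy sketch of selective replay: events are chosen for replay with
# probability proportional to the reward attached to them, so rare but
# rewarding episodes dominate what gets consolidated.
random.seed(42)

events = [
    ("found_food", 5.0),
    ("heard_noise", 0.5),
    ("nothing_happened", 0.0),  # zero reward: never replayed
]
names = [name for name, _ in events]
weights = [reward for _, reward in events]

# Draw 1,000 replay events, weighted by reward.
replayed = random.choices(names, weights=weights, k=1000)
```

With these weights, the rewarding event is replayed roughly ten times as often as the mildly interesting one, and the unrewarding one not at all — a crude version of memory "optimized to the goals of an organism."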
“These ideas…view memory systems as being optimized to the goals of an organism rather than simply mirroring the structure of the environment,” explained the authors in the paper.
This reweighting process is particularly important in enriching the memories of biological agents, something important to consider for artificial intelligence, they wrote.
Biological to Artificial
The two-system set-up is nature’s solution to efficient learning.
“By initially storing information about the new experience in the hippocampus, we make it available for immediate use and we also keep it around so that it can be replayed back to the cortex, interleaving it with ongoing experience and stored information from other relevant experiences,” says Stanford psychologist and article author Dr. James McClelland in a press interview.
According to DeepMind neuroscientists Dharshan Kumaran and Demis Hassabis, both authors of the paper, CLS has been instrumental in recent breakthroughs in machine learning.
Convolutional neural networks (CNNs), for example, are a type of deep network modeled after the slow-learning neocortical system. Similar to their biological muse, CNNs gradually learn through repeated, interleaved exposure to a large amount of training data. The approach has been particularly successful in achieving state-of-the-art performance on challenging object-recognition benchmarks, including ImageNet.
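For readers curious what the “convolution” in CNN actually does, here is a minimal sketch (our illustration, not DeepMind’s code): the same small filter is slid across the whole input, so one set of weights is reused at every position:

```python
import numpy as np

# Minimal 2D convolution (technically cross-correlation, as in most
# deep-learning libraries): slide a small filter over the input and
# record its response at every position.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge filter responds strongly where bright meets dark.
image = np.array([[1., 1., 0., 0.],
                  [1., 1., 0., 0.],
                  [1., 1., 0., 0.]])
edge_filter = np.array([[1., -1.],
                        [1., -1.]])
response = conv2d(image, edge_filter)
```

The filter fires only at the boundary between the bright and dark halves of the image; stacking many such learned filters in layers is what lets CNNs distill raw pixels into concepts.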
Other aspects of CLS theory, such as hippocampal replay, have also been successfully implemented in systems such as DeepMind’s Deep Q-Network. Last year, the company reported that the system was capable of learning and playing dozens of Atari 2600 games at a level comparable to professional human gamers.
“As in the theory, these neural networks exploit a memory buffer akin to the hippocampus that stores recent episodes of gameplay and replays them in interleaved fashion. This greatly amplifies the use of actual gameplay experience and avoids the tendency for a particular local run of experience to dominate learning in the system,” explains Kumaran.
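A simplified version of such a buffer (our sketch, not DeepMind’s actual implementation) looks like this: recent transitions are stored in a fixed-capacity queue, and learning draws random interleaved minibatches from it:

```python
import random
from collections import deque

# Toy replay buffer in the spirit of the one Kumaran describes: recent
# experience is stored and replayed in interleaved random minibatches,
# so no single local run of experience dominates learning.
class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted first

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform sampling mixes old and new experience in every batch.
        return random.sample(list(self.buffer), batch_size)

random.seed(0)
buf = ReplayBuffer(capacity=50)
for t in range(80):                  # push more transitions than capacity
    buf.push(t, "noop", 0.0, t + 1)
batch = buf.sample(8)
```

The fixed capacity plays the role of the hippocampus’s temporary storage: old episodes eventually fall out of the buffer once their lessons have been absorbed.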
Hassabis agrees.
“We believe that the updated CLS theory will likely continue to provide a framework for future research, for both neuroscience and the quest for artificial general intelligence,” he says.
Image Credit: Shutterstock
#429966 These Robots Can Teach Other Robots How ...
One advantage humans have over robots is that we’re good at quickly passing on our knowledge to each other. A new system developed at MIT now allows anyone to coach robots through simple tasks and even lets them teach each other.
Typically, robots learn tasks through demonstrations by humans, or through hand-coded motion planning systems where a programmer specifies each of the required movements. But the former approach is not good at translating skills to new situations, and the latter is very time-consuming.
Humans, on the other hand, can typically demonstrate a simple task, like how to stack logs, to someone else just once before they pick it up, and that person can easily adapt that knowledge to new situations, say if they come across an odd-shaped log or the pile collapses.
In an attempt to mimic this kind of adaptable, one-shot learning, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) combined motion planning and learning through demonstration in an approach they’ve dubbed C-LEARN.
First, a human teaches the robot a series of basic motions using an interactive 3D model on a computer. Using the mouse to show it how to reach and grasp various objects in different positions helps the machine build up a library of possible actions.
The operator then shows the robot a single demonstration of a multistep task, and using its database of potential moves, it devises a motion plan to carry out the job at hand.
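A toy sketch of that matching step (our own simplification, not the CSAIL code; the primitive names and poses are invented): each pose in the single demonstration is matched to the nearest motion in the previously taught library, and the matches are chained into a plan:

```python
# Toy illustration of the C-LEARN idea: a single demonstration is
# interpreted against a library of known motion primitives.
library = {
    # primitive name -> canonical end-effector pose (x, y, z), assumed values
    "reach_shelf": (0.6, 0.2, 1.0),
    "grasp":       (0.6, 0.2, 0.8),
    "place_table": (0.3, -0.4, 0.5),
}

def nearest_primitive(pose):
    # Pick the library motion whose canonical pose is closest to the
    # demonstrated pose (squared Euclidean distance).
    def dist(name):
        canon = library[name]
        return sum((a - b) ** 2 for a, b in zip(pose, canon))
    return min(library, key=dist)

# One demonstration: a sequence of observed poses, slightly noisy.
demo = [(0.62, 0.18, 1.02), (0.59, 0.21, 0.79), (0.28, -0.41, 0.52)]
plan = [nearest_primitive(p) for p in demo]
```

This is the "connect new information to what we already know" step in miniature: the demonstration is never learned from scratch, only interpreted against prior knowledge.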
“This approach is actually very similar to how humans learn in terms of seeing how something’s done and connecting it to what we already know about the world,” says Claudia Pérez-D’Arpino, a PhD student who wrote a paper on C-LEARN with MIT Professor Julie Shah, in a press release.
“We can’t magically learn from a single demonstration, so we take new information and match it to previous knowledge about our environment.”
The robot successfully carried out tasks 87.5 percent of the time on its own, but when a human operator was allowed to correct minor errors in the interactive model before the robot carried out the task, the accuracy rose to 100 percent.
Most importantly, the robot could teach the skills it learned to another machine with a completely different configuration. The researchers tested C-LEARN on a new two-armed robot called Optimus that sits on a wheeled base and is designed for bomb disposal.
But in simulations, they were able to seamlessly transfer Optimus’ learned skills to CSAIL’s 6-foot-tall Atlas humanoid robot. They haven’t yet tested Atlas’ new skills in the real world, and they had to give Atlas some extra information on how to carry out tasks without falling over, but the demonstration shows that the approach can allow very different robots to learn from each other.
The research, which will be presented at the IEEE International Conference on Robotics and Automation in Singapore later this month, could have important implications for the large-scale roll-out of robot workers.
“Traditional programming of robots in real-world scenarios is difficult, tedious, and requires a lot of domain knowledge,” says Shah in the press release.
“It would be much more effective if we could train them more like how we train people: by giving them some basic knowledge and a single demonstration. This is an exciting step toward teaching robots to perform complex multi-arm and multi-step tasks necessary for assembly manufacturing and ship or aircraft maintenance.”
The MIT researchers aren’t the only people investigating the field of so-called transfer learning. The RoboEarth project and its spin-off RoboHow were both aimed at creating a shared language for robots and an online repository that would allow them to share their knowledge of how to carry out tasks over the web.
Google DeepMind has also been experimenting with ways to transfer knowledge from one machine to another, though in their case the aim is to help skills learned in simulations to be carried over into the real world.
A lot of their research involves deep reinforcement learning, in which robots learn how to carry out tasks in virtual environments through trial and error. But transferring this knowledge from highly-engineered simulations into the messy real world is not so simple.
So they have found a way for a model that has learned how to carry out a task in a simulation using deep reinforcement learning to transfer that knowledge to a so-called progressive neural network that controls a real-world robotic arm. This allows the system to take advantage of the accelerated learning possible in a simulation while still learning effectively in the real world.
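The lateral-connection idea behind progressive neural networks can be sketched in a few lines (our illustration, with assumed shapes and random weights): the simulation-trained column is frozen, and the new column’s hidden layer receives both its own input and the frozen column’s features:

```python
import numpy as np

# Minimal sketch of a progressive network forward pass: column 1 was
# trained in simulation and is frozen; column 2 learns for the real
# robot but reuses column 1's features via lateral connections, so the
# simulated knowledge is borrowed rather than overwritten.
rng = np.random.default_rng(1)

W1 = rng.normal(size=(4, 6))   # column 1 weights (frozen, "simulation")
W2 = rng.normal(size=(4, 6))   # column 2 own input weights (trainable)
U = rng.normal(size=(6, 6))    # lateral: column 1 hidden -> column 2 hidden
V = rng.normal(size=(6, 2))    # column 2 output weights (trainable)

def forward(x):
    h1 = np.tanh(x @ W1)            # frozen features from simulation
    h2 = np.tanh(x @ W2 + h1 @ U)   # new column: own input plus laterals
    return h2 @ V                   # real-world control output

x = rng.normal(size=(3, 4))         # a batch of 3 observations
y = forward(x)
```

Only W2, U, and V would be updated during real-world training; because W1 never changes, the sim-trained skills cannot be catastrophically overwritten.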
These kinds of approaches make life easier for data scientists trying to build new models for AI and robots. As James Kobielus notes in InfoWorld, the approach “stands at the forefront of the data science community’s efforts to invent 'master learning algorithms' that automatically gain and apply fresh contextual knowledge through deep neural networks and other forms of AI.”
If you believe those who say we’re headed towards a technological singularity, you can bet transfer learning will be an important part of that process.
Image Credit: MIT CSAIL/YouTube
#429964 French designer shows off DIY robot in ...
A French designer has shown his humanoid DIY robot to the public for the first time.
#429951 Watch: Where AI Is Today, and Where ...
2016 was a year of headlines in artificial intelligence. A top-selling holiday gift was the AI-powered Amazon Echo; IBM Watson was used to diagnose cancer; and Google DeepMind’s system AlphaGo cracked the ancient and complex Chinese game Go sooner than expected.
And progress continues in 2017.
Neil Jacobstein, faculty chair of Artificial Intelligence and Robotics at Singularity University, hit the audience at Singularity University’s Exponential Manufacturing Summit with some of the more significant updates in AI so far this year.
DeepMind, for example, recently outlined a new method called Elastic Weight Consolidation (EWC) to tackle “catastrophic forgetting” in machine learning. The method helps neural networks retain previously learned tasks.
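The heart of EWC is a quadratic penalty that anchors each weight to its old value in proportion to how important that weight was for the previous task. A minimal sketch (our illustration, with made-up numbers; not DeepMind’s code):

```python
import numpy as np

# Sketch of the EWC penalty: when learning task B, each weight theta_i
# is pulled toward its task-A value theta_A_i with stiffness given by
# that weight's Fisher importance F_i:
#   total_loss = task_B_loss + (lam / 2) * sum_i F_i * (theta_i - theta_A_i)^2
def ewc_penalty(theta, theta_A, fisher, lam=1.0):
    return 0.5 * lam * np.sum(fisher * (theta - theta_A) ** 2)

theta_A = np.array([1.0, 2.0, -0.5])   # weights after task A (assumed values)
fisher = np.array([2.0, 1.0, 0.0])     # how much task A cares about each weight
theta = np.array([2.0, 2.0, 3.0])      # candidate weights while learning task B

penalty = ewc_penalty(theta, theta_A, fisher)
```

Weights with zero Fisher importance (like the third one here) are free to change for task B, while important weights are protected — which is how the network retains previously learned tasks.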
And a project out of Newcastle University is taking object recognition to the next level. The researchers have created a system that’s hooked up to a robotic hand, which is learning how to uniquely approach and pick up different objects. (Think about the impact this technology may have on assembly lines.)
These are just two of a number of developments and advances moving AI ahead in 2017.
For those worried AI has become overhyped, we sat down with Jacobstein after his talk to hear firsthand about progress in the field of AI, the practical applications of the technology that he’s most excited about, and how we can prepare society for a future of AI.
Image Source: Shutterstock
#429950 Veo Gives Robots ‘Eyes and a Brain’ ...
The robots are coming.
Actually, they’re already here. Machines are learning to do tasks they’ve never done before, from locating and retrieving goods from a shelf to driving cars to performing surgery. In manufacturing environments, robots can place an object with millimeter precision over and over, lift hundreds of pounds without getting tired, and repeat the same action constantly for hundreds of hours.
But let’s not give robots all the glory just yet. A lot of things that are easy for humans are still hard or impossible for robots. A three-year-old child, for example, can differentiate between a dog and a cat, or intuitively scoot over when said dog or cat jumps into its play space. A computer can’t do either of these simple actions.
So how do we take the best robots have to offer and the best humans have to offer and combine them to reach new levels of output and performance?
That’s the question engineers at Veo Robotics are working to answer. At Singularity University’s Exponential Manufacturing Summit last week, Clara Vu, Veo’s cofounder and VP of Engineering, shared some of her company’s initiatives and why they’re becoming essential to today’s manufacturing world.
"Our system…essentially gives a robot arm 'eyes and a brain,'" Vu said. "Our system can understand the space, see what's around the robot, reason about it, and then control the robot so [it] can safely interact with people."
Why we’re awesome
If you think about it, we humans are pretty amazing creatures. Vu pointed out that the human visual system has wide range, precise focus, depth-rich color, and three dimensions. Our hands have five independently articulated fingers, 29 joints, 34 muscles, and 123 tendons—and they're all covered in skin, a fine-grained material sensitive to force, temperature and touch.
Not only do we have all these tools, we have millions of years of evolution behind us that have taught us the best ways to use them. We use them for a huge variety of tasks, and we can adapt them to quickly-changing environments.
Most robots, on the other hand, know how to do one task, the same way, over and over again. Move the assembly line six inches to the right or make the load two pounds lighter, and a robot won’t be able to adapt and carry on.
Like oil and water
In today’s manufacturing environment, humans and robots don’t mix—they’re so different that it’s hard for them to work together. This leaves manufacturing engineers designing processes either entirely for robots, or entirely without them. But what if the best way to, say, attach a door to a refrigerator is to have a robot lift it, a human guide it into place, the robot put it down, and the human tighten its hinges?
Sounds simple enough, but with the big, dumb robots we have today, that’s close to impossible—and the manufacturing environment is evolving in a direction that will make it harder, not easier. “As the number of different things we want to make increases and the time between design and production decreases, we’ll want more flexibility in our processes, and it will be more difficult to use automation effectively,” Vu said.
Smaller, lighter, smarter
For people and robots to work together safely and efficiently, robots need to get smaller, lighter, and most importantly, smarter. “Autonomy is exactly what we need here,” Vu said. “At its core, autonomy is about the ability to perceive, decide and act independently.” An autonomous robot, she explained, needs to be able to answer questions like ‘where am I?’, ‘what's going on around me?’, ‘what actions are safe?’, and ‘what actions will bring me closer to my goal?’
Veo engineers are working on a responsive system to bring spatial awareness to robots. Depth-sensing cameras give the robot visual coverage, and its software learns to differentiate between the objects around it, to the point that it can be aware of the size and location of everything in its area. It can then be programmed to adjust its behavior to changes in its environment—if a human shows up where a human isn’t supposed to be, the robot can stop what it’s doing to make sure the human doesn’t get hurt.
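The stop-on-intrusion behavior might look something like this in code (a toy sketch, not Veo’s system; the zone bounds and points are invented):

```python
# Toy illustration of spatial awareness: depth sensors yield a cloud of
# 3D points; if any point intrudes into the robot's protective zone,
# the controller halts the robot before anyone can get hurt.
SAFE_ZONE = ((0.0, 1.0), (0.0, 1.0), (0.0, 2.0))  # (x, y, z) bounds, metres

def point_in_zone(point, zone=SAFE_ZONE):
    return all(lo <= coord <= hi for coord, (lo, hi) in zip(point, zone))

def robot_command(point_cloud):
    # Stop as soon as anything unexpected enters the protective zone.
    if any(point_in_zone(p) for p in point_cloud):
        return "STOP"
    return "CONTINUE"

clear_scene = [(2.0, 2.0, 1.0), (-1.0, 0.5, 0.5)]
intrusion = clear_scene + [(0.5, 0.5, 1.0)]   # e.g., a person's arm
```

A real system would reason about object identity and motion rather than a fixed bounding box, but the core loop — perceive, decide, act — is the same.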
3D sensors will also play a key part in the system, and Vu mentioned the importance of their declining costs. “Ten years ago, the only 3D sensors that were available were 3D lidars that cost tens of thousands of dollars. Today, because of advances in consumer applications like gaming and gesture recognition, it's possible to get 3D time-of-flight chipsets for well under a hundred dollars, in quantity. These sensors give us exactly the kind of data we need to solve this problem,” she said.
3D sensors wouldn’t be very helpful without computers that can do something useful with all the data they collect. “Multiple sensors monitoring a large 3D area means millions of points that have to be processed in real time,” Vu noted. “Today's CPUs, and in particular GPUs, which can perform thousands of computations in parallel, are up to the task.”
A seamless future
Veo’s technology can be integrated with pre-existing robots of various sizes, types, and functionalities. The company is currently testing its prototypes with manufacturing partners, and is aiming to deploy in 2019.
Vu told the audience that industrial robots have a projected compound annual growth rate of 13 percent through 2019, and though collaborative robots account for just a small fraction of the installed base, their projected growth rate through 2019 is 67 percent.
Vu concluded with her vision of a future of seamless robot-human interaction. “We want to allow manufacturers to combine the creativity, flexibility, judgment and dexterity of humans with the strength, speed and precision of industrial robots,” she said. “We believe this will give manufacturers new tools to meet the growing needs of the modern economy.”
Image Credit: Shutterstock