Tag Archives: robotics
#429672 ‘Ghost in the Shell’: ...
With the new sci-fi flick "Ghost in the Shell" hitting theaters this week, Scientific American asks artificial intelligence experts which movies, if any, have gotten AI right.
#429666 Google Chases General Intelligence With ...
For a mind to be capable of tackling anything, it has to have a memory.
Humans are exceptionally good at transferring old skills to new problems. Machines, despite all their recent wins against humans, aren’t. This is partly due to how they’re trained: artificial neural networks, like those built by Google’s DeepMind, learn to master a single task and call it quits. To learn a new task, a network has to reset, wiping out its previous memories and starting again from scratch.
This phenomenon, quite aptly dubbed “catastrophic forgetting,” condemns our AIs to be one-trick ponies.
Now, taking inspiration from the hippocampus, our brain’s memory storage system, researchers at DeepMind and Imperial College London developed an algorithm that allows a program to learn one task after another, using the knowledge it gained along the way.
When challenged with a slew of Atari games, the neural network flexibly adapted its strategy and mastered each game, while conventional, memory-less algorithms faltered.
“The ability to learn tasks in succession without forgetting is a core component of biological and artificial intelligence,” writes the team in their paper, which was published in the journal Proceedings of the National Academy of Sciences.
“If we’re going to have computer programs that are more intelligent and more useful, then they will have to have this ability to learn sequentially,” says study lead author Dr. James Kirkpatrick, adding that the study overcame a “significant shortcoming” in artificial neural networks and AI.
Making Memories
This isn’t the first time DeepMind has tried to give their AIs some memory power.
Last year, the team set its sights on a kind of external memory module, somewhat similar to human working memory—the ability to keep things in mind while using them to reason or solve problems.
Combining a neural network with an external random-access memory (RAM), the researchers showed that their hybrid system could perform multi-step reasoning, a type of task that has long stumped conventional AI systems.
But it had a flaw: the hybrid, although powerful, required constant communication between the two components—not an elegant solution, and a total energy sink.
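The article doesn’t spell out the hybrid’s internals, but the core trick in memory-augmented networks of this kind is content-based addressing: the network emits a query key, and the read it gets back is a similarity-weighted blend of every memory slot. Here is a minimal NumPy sketch of that idea (the sizes and names are illustrative, not DeepMind’s actual code):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def content_read(memory, key, sharpness=5.0):
    """Read from an external memory by similarity to a query key,
    as memory-augmented networks do; every step is differentiable."""
    # Cosine similarity between the key and every memory row
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    weights = softmax(sharpness * sims)  # focus mostly on the best match
    return weights @ memory              # blended read vector

memory = np.random.randn(8, 4)               # 8 slots holding 4 numbers each
key = memory[3] + 0.05 * np.random.randn(4)  # a noisy query for slot 3
print(content_read(memory, key))             # close to memory[3]
```

Note that every read touches every slot, which hints at why the hybrid demanded the constant traffic between network and memory described above.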
In this new study, DeepMind backed away from computer storage ideas, instead zooming deep into the human memory machine—the hippocampus—for inspiration.
And for good reason. Artificial neural networks, true to their name, are loosely modeled after their biological counterparts. Made up of layers of interconnecting neurons, the algorithm takes in millions of examples and learns by adjusting the connection between the neurons—somewhat like fine-tuning a guitar.
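To make that guitar-tuning analogy concrete, here is a minimal sketch, in plain NumPy with made-up values, of the kind of weight adjustment involved; real networks repeat this across millions of weights and examples:

```python
import numpy as np

def train_step(w, x, target, lr=0.5):
    """Nudge the connection weights to shrink the error on one example."""
    prediction = np.tanh(x @ w)              # a single artificial neuron
    error = prediction - target
    grad = x * error * (1 - prediction**2)   # gradient of squared error w.r.t. w
    return w - lr * grad                     # re-tune the connections slightly

w = np.zeros(3)                              # three connection weights
x, target = np.array([1.0, 0.5, -0.2]), 0.8  # one training example
for _ in range(200):
    w = train_step(w, x, target)
print(np.tanh(x @ w))                        # now close to 0.8
```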
A very similar process occurs in the hippocampus. What’s different is how the connections change when learning a new task. In a machine, the weights are reset, and anything learned is forgotten.
In a human, memories undergo a kind of selection: if they help with subsequent learning, they become protected; otherwise, they’re erased. In this way, not only are memories stored within the neuronal connections themselves (without needing an external module), they also stick around if they’re proven useful.
This theory, called “synaptic consolidation,” is considered a fundamental aspect of learning and memory in the brain. So of course, DeepMind borrowed the idea and ran with it.
Crafting an Algorithm
The new algorithm mimics synaptic consolidation in a simple way.
After learning a game, the algorithm pauses and figures out how helpful each connection was to the task. It then keeps the most useful parts and makes those connections harder to change as it learns a new skill.
"[This] way there is room to learn the new task but the changes we've applied do not override what we've learned before,” says Kirkpatrick.
Think of it like this: visualize every connection as a spring with a different stiffness. The more important a connection is for successfully tackling a task, the stiffer it becomes, and thus the harder it is to change.
“For this reason, we called our algorithm Elastic Weight Consolidation (EWC),” the authors explained in a blog post introducing the algorithm.
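In the paper itself, that stiffness is measured by the Fisher information of each weight: EWC adds a quadratic “spring” penalty that pulls important weights back toward the values they held after earlier tasks. A compact sketch of the objective follows (the variable names are ours, not the paper’s code):

```python
import numpy as np

def ewc_loss(new_task_loss, weights, old_weights, fisher, lam=1000.0):
    """EWC objective: loss on the new task plus a spring penalty.
    Each weight is anchored to its post-old-task value, with stiffness
    proportional to its importance (Fisher information) on past tasks."""
    penalty = 0.5 * lam * np.sum(fisher * (weights - old_weights) ** 2)
    return new_task_loss + penalty

weights     = np.array([0.9, 0.1])
old_weights = np.array([1.0, 0.0])    # values after the previous game
fisher      = np.array([50.0, 0.01])  # first weight mattered a lot before
print(ewc_loss(0.3, weights, old_weights, fisher))
```

The second weight, with near-zero stiffness, remains free to learn the new game, while moving the first one is heavily penalized, which is exactly the spring picture above.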
Game On
To test their new algorithm, the team turned to DeepMind’s favorite AI training ground: Atari games.
Previously, the company unveiled a neural network-based AI called Deep Q-Network (DQN) that could teach itself to play Atari games as well as any human player. From Space Invaders to Pong, the AI mastered our nostalgic favorites, but only one game at a time.
"After 20 million plays with each game, the team found that their new AI mastered seven out of the ten games with a performance as good as any human player."
The team then pitted their memory-enhanced DQN against its classical version, putting both agents through a random selection of ten Atari games. After 20 million plays of each game, the team found that their new AI mastered seven of the ten games with a performance as good as any human player.
In stark contrast, without the memory boost, the classical algorithm could barely play a single game by the end of training, largely because it forgot everything it had learned each time it moved on to a new game.
“Today, computer programs cannot learn from data adaptively and in real time. We have shown that catastrophic forgetting is not an insurmountable challenge for neural networks,” the authors say.
Machine Brain
That’s not to say EWC is perfect.
One issue is the possibility of a “blackout catastrophe”: since the connections in EWC can only become less plastic over time, the network eventually saturates. This locks it into a single unchangeable state in which it can no longer retrieve memories or store new information.
That said, “We did not observe these limitations under the more realistic conditions for which EWC was designed—likely because the network was operating well under capacity in these regimes,” explained the authors.
Performance-wise, the algorithm was a sort of “jack-of-all-trades”: decent at plenty, master of none. Although the network retained knowledge from learning each game, its performance on any given game was worse than that of a traditional neural network dedicated to that one game.
One possible stumbling block is that the algorithm may not have accurately judged the importance of certain connections in each game, something the authors say needs further optimization.
“We have demonstrated that [EWC] can learn tasks sequentially, but we haven’t shown that it learns them better because it learns them sequentially,” says Kirkpatrick. “There’s still room for improvement.”
"The team hopes that their work will nudge AI towards the next big thing: general-purpose intelligence."
But the team hopes that their work will nudge AI towards the next big thing: general-purpose intelligence, in which AIs achieve the kind of adaptive learning and reasoning that come to humans naturally.
What’s more, the work could also feed back into neurobiological theories of learning.
“Synaptic consolidation was previously only proven in very simple examples. Here we showed that the same theories can be applied in a more realistic and complex context—it really shows that the theory could be key to retaining our memories and know-how,” explained the authors. After all, to emulate is to understand.
Over the past decade, neuroscience and machine learning have become increasingly intertwined. And no doubt our mushy thinking machines have more to offer their silicon brethren, and vice-versa.
“We hope that this research represents a step towards programs that can learn in a more flexible and efficient way,” the authors say.
Image Credit: Shutterstock
#429661 Video Friday: Robotics for Happiness, ...
Your weekly selection of awesome robot videos.
#429660 These 6 Trends Are Retooling ...
Let’s be honest — sometimes manufacturing gets a bad rap. The industry can be seen as a behemoth — stuck in the past and slow to innovate, the victim of outsourcing and the purveyor of consumerism. Thankfully, in 2017 these stereotypes couldn’t be further from the truth.
Global organizations like GE and Caterpillar are investing in new technologies and innovation methods. Startups like Local Motors and Carbon are creating their own breakthroughs from the ground up. And organizations like the US Council on Competitiveness are working to keep these innovators moving forward. The future of manufacturing is bright.
That’s why we’ve put together this list of trends to watch in 2017. If you want to learn more about the technologies fueling these trends, meet the people leading the charge, and connect with fellow leaders, join us at Exponential Manufacturing May 17–19 in Boston.
1. Innovation Is Outpacing Policy
People around the world are talking a lot about recent and impending policy changes. How will these changes impact innovation in the coming years? And how will policy keep pace?
AI and robots continue automating factories. Self-driving trucks and ships aim to automate the transportation of materials and finished products. Even biotech is beginning to offer new ways to make things. These and other emerging technologies will impact how we live, work, and trade.
Some jobs will disappear while others take their place, efficiencies will improve, entire sub-industries (shipping, for example) could be upended by unexpected technologies—and all this will happen faster than expected.
Can society keep the pace? How do we regulate innovation without suffocating progress? How do we adopt an open-minded yet ethical approach to new opportunities? Planning for the future now is how organizations and policymakers will move toward the best scenarios and avoid the worst ones.
2. The Cutting Edge Won’t Be Cutting Edge for Long
If you’re reading Singularity Hub, you’re aware of some amazing advances happening across research fields and industries.
The deep analytical powers of machine learning are transforming raw data into useful insights. Some robots can now safely interact with people and more nimbly navigate messy work environments. 3D printers are giving physical form to digital designs. And biotechnology is beginning to make living systems, such as engineered bacteria, into microscopic chemical-producing factories.
While these are incredible innovations—and more arrive every day—one could argue the greatest challenge will be anticipating, timing, and creatively implementing the latest breakthroughs into business strategies. Those who recognize which technologies will serve their organization best, lead a culture of change, and navigate rough political waters, will come out on top.
3. Data-Driven Decision-Making Gets More Intelligent
Data has always played a critical role in manufacturing. The entire industry, from sourcing to production runs to sales forecasting, has relied on data for decades. However, the amount of data is growing exponentially larger by the day. Thanks to cheap, connected, and increasingly ubiquitous sensors (the Internet of Things), companies are able to monitor more than ever before — things like machinery, deliveries, even employees.
Companies need to leverage the latest in artificial intelligence to make the most of these incredibly large and powerful data sets. For those who do adopt new tools, smart decision-making will become clearer, easier and faster.
4. Accelerated Design and Real-World Market Testing
Historically, the product creation process has been notoriously long. Market research, focus groups, R&D, short runs, testing, sourcing, long runs…the list goes on. What if you could make a part that’s exactly like the finished product, in a series of one? What if you could design, build, test, and iterate in real life, before ramping up large-scale production?
You can, and in fact, GE is.
GE’s FirstBuild program is a state-of-the-art, community-sourced lab that lies outside their main campus and is used for the rapid prototyping of new ideas. If a product proves its worth in a sample market, the design is transferred to the main campus for full production.
These are the changes that technologies like additive manufacturing and materials science are bringing to product design. When a giant like GE creates a spinoff group to act like a startup, it becomes obvious that power is being democratized, innovation times are being slashed, and long-held competitive advantages are evaporating.
5. The Automation and Democratization of Production
As with design, new technologies are cutting the time and cost required to get products to market. However, there are larger shifts happening in the overall production process as well. Robots are becoming more nimble, more versatile, and smarter. Computer-guided fabrication—both additive and subtractive—is getting faster, cheaper, and more precise. Factories are becoming more efficient, while raw material waste is decreasing. All of this increases competition, making success without these technologies nearly impossible.
On the other end of the spectrum, the spread of additive manufacturing, the boom of the maker movement, and a reduction in small machinery cost are allowing individuals to build mini-factories in their homes. What was once only possible in the largest factories is now doable in your neighbor’s garage. And while some may discount the innovative potential of the non-professionals, consider the incredible amount of human capital unlocked by this change.
6. Reimagining the Global Supply Chain
One of the most difficult sectors of manufacturing is the supply chain, from sourcing raw materials around the world to delivering finished goods on time. Supply chain managers are responsible for coordinating with hundreds, if not thousands, of partners and service providers to make sure products are delivered on time, on budget, and in good condition.
While it may not be the sexiest piece of the puzzle, it’s certainly a critical one — and it’s ripe for improvement. Self-driving trucks and ships, AI-powered planning software, and localized manufacturing facilities are all converging to reshape the very nature of supply chains.
So, we’ve highlighted six trends currently impacting the global manufacturing landscape. What does it mean, though? How do we stay ahead of these shifts? How do we know which technologies will stick and which will end up as the Betamax of the year?
Some of these questions are as yet unanswerable, while others gain more clarity each day. What we do know is that this is just the beginning. As technologies converge, they will keep producing stronger advances, compounding the rate of improvement.
Manufacturing leaders should incorporate ongoing, future-oriented education as part of their annual development to stay up-to-date on new breakthroughs, learn where the industry is headed, and discover how to bring these competitive advantages into their own organizations.
Ready to start your education? Join Singularity University for Exponential Manufacturing, an event that will lead 600+ manufacturing executives, entrepreneurs, and investors through an intensive 3-day program to look into these questions, connect with like-minded leaders, and prepare for success in the year to come. Prices increase April 1st. Apply here and save up to 15% with code SUHUB2017.