Tag Archives: story

#429673 Can Futurists Predict the Year of the ...

The end of the world as we know it is near. And that’s a good thing, according to many of the futurists who are predicting the imminent arrival of what’s been called the technological singularity.
The technological singularity is the idea that technological progress, particularly in artificial intelligence, will reach a tipping point beyond which machines are exponentially smarter than humans. It has been a hot topic of late.
Well-known futurist and Google engineer Ray Kurzweil (co-founder and chancellor of Singularity University) reiterated at Austin’s South by Southwest (SXSW) festival this month his bold prediction that machines will match human intelligence by 2029; he has previously said the singularity itself will occur by 2045. That 2045 date is two years before SoftBank CEO Masayoshi Son’s prediction of 2047, made at Mobile World Congress (MWC) earlier this year.
Kurzweil, author of the seminal book on the topic, The Singularity Is Near, said during the SXSW festival that “what’s actually happening is [machines] are powering all of us. …They’re making us smarter. They may not yet be inside our bodies, but by the 2030s, we will connect our neocortex, the part of our brain where we do our thinking, to the cloud.”
That merger of man and machine—sometimes referred to as transhumanism—is the same concept that Tesla and SpaceX CEO Elon Musk talks about when discussing development of a neural lace. For Musk, however, an interface between the human brain and computers is vital to keep our species from becoming obsolete when the singularity hits.
Musk is also the driving force behind OpenAI, a billion-dollar nonprofit dedicated to ensuring that the development of artificial general intelligence (AGI) is beneficial to humanity. AGI is another term for human-level intelligence. What most people refer to as AI today is weak or narrow artificial intelligence—a machine capable of “thinking” within only a very narrow range of concepts or tasks.
Futurist Ben Goertzel, who among his many roles is chief scientist at financial prediction firm Aidyia Holdings and robotics company Hanson Robotics (and advisor to Singularity University), believes AGI is possible well within Kurzweil’s timeframe. The singularity is harder to predict, he says on his personal website, estimating the date anywhere between 2020 and 2100.

How soon will the technological singularity occur? https://t.co/3L1ETLpePM
— Singularity Hub (@singularityhub) March 31, 2017

“Note that we might achieve human-level AGI, radical health-span extension and other cool stuff well before a singularity—especially if we choose to throttle AGI development rate for a while in order to increase the odds of a beneficial singularity,” he writes.
Meanwhile, Son, the billionaire head of SoftBank, a multinational telecommunications and internet firm based in Japan, predicts superintelligent robots will surpass humans in both number and brainpower by 2047.
He is putting a lot of money toward making it happen. SoftBank’s investment arm, for instance, recently invested $100 million in CloudMinds, a startup building cloud-connected robots that offload the machine’s “brain” to the cloud. Son is also raising the world’s biggest tech venture capital fund, to the tune of $100 billion.
“I truly believe it’s coming, that’s why I’m in a hurry—to aggregate the cash, to invest,” he was quoted as saying at the MWC.
History of prediction
Kurzweil, Son, Goertzel and others are just the latest generation of futurists who have observed that humanity is accelerating toward a new paradigm of existence, largely due to technological innovation.
There are hints that philosophers as early as the 19th century, amid the upheavals of the Industrial Revolution, recognized that the human race was being fast-tracked toward a different sort of reality. It wasn’t until the 1950s, however, that the modern understanding of the singularity first took form.
Mathematician John von Neumann had noted that “the ever-accelerating progress of technology … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”
In the 1960s, following his wartime work with Alan Turing to decrypt Nazi communications, British mathematician I.J. Good invoked the singularity without naming it as such.
He wrote, “Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.”
Science fiction writer and retired mathematics and computer science professor Vernor Vinge is usually credited with coining the term “technological singularity.” His 1993 essay, “The Coming Technological Singularity: How to Survive in the Post-Human Era,” predicted the moment of technological transcendence would come within 30 years.
Vinge explains in his essay why he thinks the term “singularity”—in cosmology, the event where space-time collapses and a black hole forms—is apt: “It is a point where our models must be discarded and a new reality rules. As we move closer and closer to this point, it will loom vaster and vaster over human affairs till the notion becomes commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown.”
Prediction an inexact science
But is predicting the singularity even possible?
A paper by Stuart Armstrong et al. suggests such predictions are best guesses at most. A database compiled by the Machine Intelligence Research Institute (MIRI), a nonprofit dedicated to social issues related to AGI, catalogs 257 AI predictions made in the scientific literature between 1950 and 2012. Of these, 95 gave explicit timelines for AI development.
“The AI predictions in the database seem little better than random guesses,” the authors write. For example, the researchers found that “there is no evidence that expert predictions differ from those of non-experts.” They also observed a strong pattern that showed most AI prognostications fell within a certain “sweet spot”—15 to 25 years from the moment of prediction.
Others have cast doubt that the singularity is achievable in the time frames put forth by Kurzweil and Son.
Paul Allen, co-founder of Microsoft and founder of the Allen Institute for Artificial Intelligence, among other ventures, has written that such a technological leap forward is still far in the future.
“[I]f the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress,” he writes, referring to the concept that past rates of progress can predict future rates as well.
Extinction or transcendence?
Futurist Nikola Danaylov, who manages the Singularity Weblog, says he believes a better question to ask is whether achieving the singularity is a good thing or a bad thing.
“Is that going to help us grow extinct like the dinosaurs or is it going to help us spread through the universe like Carl Sagan dreamed of?” he tells Singularity Hub. “Right now, it’s very unclear to me personally.”
Danaylov argues that the singularity orthodoxy of today largely ignores the societal upheavals already under way. The idea that “technology will save us” will not lift people out of poverty or extend human life if technological breakthroughs only benefit those with money, he says.
“I’m not convinced [the singularity is] going to happen in the way we think it’s going to happen,” he says. “I’m sure we’re missing the major implications, the major considerations.
“We have tremendous potential to make it a good thing,” he adds.
Image Credit: Shutterstock Continue reading

Posted in Human Robots

#429672 ‘Ghost in the Shell’: ...

With the new sci-fi flick "Ghost in the Shell" hitting theaters this week, Scientific American asks artificial intelligence experts which movies, if any, have gotten AI right. Continue reading

Posted in Human Robots

#429666 Google Chases General Intelligence With ...

For a mind to be capable of tackling anything, it has to have a memory.
Humans are exceptionally good at transferring old skills to new problems. Machines, despite all their recent wins against humans, aren’t. This is partly due to how they’re trained: artificial neural networks, like those built by Google’s DeepMind, learn to master a single task and call it quits. To learn a new task, a network has to reset, wiping out its previous memories and starting again from scratch.
This phenomenon, quite aptly dubbed “catastrophic forgetting,” condemns our AIs to be one-trick ponies.
Now, taking inspiration from the hippocampus, our brain’s memory storage system, researchers at DeepMind and Imperial College London developed an algorithm that allows a program to learn one task after another, using the knowledge it gained along the way.
When challenged with a slew of Atari games, the neural network flexibly adapted its strategy and mastered each game, while conventional, memory-less algorithms faltered.
“The ability to learn tasks in succession without forgetting is a core component of biological and artificial intelligence,” writes the team in their paper, which was published in the journal Proceedings of the National Academy of Sciences.
“If we’re going to have computer programs that are more intelligent and more useful, then they will have to have this ability to learn sequentially,” says study lead author Dr. James Kirkpatrick, adding that the study overcame a “significant shortcoming” in artificial neural networks and AI.
Making Memories
This isn’t the first time DeepMind has tried to give their AIs some memory power.
Last year, the team set their sights on a kind of external memory module, somewhat similar to human working memory—the ability to keep things in mind while using them to reason or solve problems.
Combining a neural network with a random access memory (better known as RAM), the researchers showed that their new hybrid system managed to perform multi-step reasoning, a type of task that’s long stumped conventional AI systems.
But it had a flaw: the hybrid, although powerful, required constant communication between the two components—not an elegant solution, and a total energy sink.
In this new study, DeepMind backed away from computer storage ideas, instead zooming deep into the human memory machine—the hippocampus—for inspiration.
And for good reason. Artificial neural networks, true to their name, are loosely modeled after their biological counterparts. Made up of layers of interconnected neurons, a network takes in millions of examples and learns by adjusting the connections between its neurons—somewhat like fine-tuning a guitar.
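To make that analogy concrete, here is a toy sketch of a single “connection adjusting” step, assuming plain NumPy; the weight matrix, gradient, and learning rate are illustrative stand-ins, not anything from DeepMind’s code.

```python
# A toy weight-update step, assuming plain NumPy.
# W, grad, and lr are illustrative stand-ins, not DeepMind's code.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))     # connection strengths between neurons
grad = rng.normal(size=(4, 4))  # gradient of the error w.r.t. each connection
lr = 0.01                       # learning rate: how far to "turn the pegs"

W -= lr * grad  # nudge every connection slightly to reduce the error
```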
A very similar process occurs in the hippocampus. What’s different is how the connections change when learning a new task. In a machine, the weights are reset, and anything learned is forgotten.
In a human, memories undergo a kind of selection: if they help with subsequent learning, they become protected; otherwise, they’re erased. In this way, not only are memories stored within the neuronal connections themselves (without needing an external module), they also stick around if they’re proven useful.
This theory, called “synaptic consolidation,” is considered a fundamental aspect of learning and memory in the brain. So of course, DeepMind borrowed the idea and ran with it.
Crafting an Algorithm
The new algorithm mimics synaptic consolidation in a simple way.
After learning a game, the algorithm pauses and figures out how helpful each connection was to the task. It then keeps the most useful parts and makes those connections harder to change as it learns a new skill.
"[This] way there is room to learn the new task but the changes we've applied do not override what we've learned before,” says Kirkpatrick.
Think of it like this: visualize every connection as a spring with different stiffness. The more important a connection is for successfully tackling a task, the stiffer it becomes and thus subsequently harder to change.
“For this reason, we called our algorithm Elastic Weight Consolidation (EWC),” the authors explained in a blog post introducing the algorithm.
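As a rough illustration, here is a minimal sketch of an EWC-style penalty, assuming PyTorch; the function name, the ewc_lambda strength, and the way the per-weight importance values (e.g., a diagonal Fisher information estimate) are supplied are all assumptions for illustration, not DeepMind’s published implementation.

```python
# A minimal sketch of an EWC-style penalty, assuming PyTorch.
# Names (old_params, fisher, ewc_lambda) are illustrative, not DeepMind's code.
import torch

def ewc_penalty(model, old_params, fisher, ewc_lambda=1000.0):
    """Quadratic 'spring' penalty anchoring important weights.

    old_params: dict of parameter tensors saved after the previous task
    fisher:     dict of per-parameter importance estimates
                (e.g., a diagonal Fisher information approximation)
    """
    loss = torch.zeros(())
    for name, param in model.named_parameters():
        # The stiffer the spring (the higher the Fisher value), the
        # costlier it is to move this weight away from its old value.
        loss = loss + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * ewc_lambda * loss

# When training on a new task, the total loss would then be:
#   total_loss = new_task_loss + ewc_penalty(model, old_params, fisher)
```

In this sketch the spring analogy maps directly onto the importance values: a Fisher estimate near zero leaves a weight free to move, while a large one effectively freezes it in place.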
Game On
To test their new algorithm, the team turned to DeepMind’s favorite AI training ground: Atari games.
Previously, the company unveiled a neural network-based AI called Deep Q-Network (DQN) that could teach itself to play Atari games as well as any human player. From Space Invaders to Pong, the AI mastered our nostalgic favorites, but only one game at a time.

"After 20 million plays with each game, the team found that their new AI mastered seven out of the ten games with a performance as good as any human player."

The team then pitted their memory-enhanced DQN against its classical version, putting both agents through a random selection of ten Atari games. After 20 million plays of each game, the team found that their new AI mastered seven of the ten games, performing as well as any human player.
In stark contrast, without the memory boost, the classical algorithm could barely play a single game by the end of training, largely because it forgot what it had learned each time it moved on to a new game.
“Today, computer programs cannot learn from data adaptively and in real time. We have shown that catastrophic forgetting is not an insurmountable challenge for neural networks,” the authors say.
Machine Brain
That’s not to say EWC is perfect.
One issue is the possibility of a “blackout catastrophe”: since the connections in EWC can only become less plastic over time, the network eventually saturates. This locks the network into a single unchangeable state, in which it can no longer retrieve memories or store new information.
That said, “We did not observe these limitations under the more realistic conditions for which EWC was designed—likely because the network was operating well under capacity in these regimes,” explained the authors.
Performance-wise, the algorithm was a sort of jack-of-all-trades: decent at plenty, master of none. Although the network retained knowledge from each game it learned, its performance on any given game was worse than that of a traditional neural network dedicated to that one game.
One possible stumbling block is that the algorithm may not have accurately judged the importance of certain connections in each game, which is something that needs to be further optimized, explain the authors.
“We have demonstrated that [EWC] can learn tasks sequentially, but we haven’t shown that it learns them better because it learns them sequentially,” says Kirkpatrick. “There’s still room for improvement.”

"The team hopes that their work will nudge AI towards the next big thing: general-purpose intelligence."

But the team hopes that their work will nudge AI towards the next big thing: general-purpose intelligence, in which AIs achieve the kind of adaptive learning and reasoning that come to humans naturally.
What’s more, the work could also feed back into neurobiological theories of learning.
“Synaptic consolidation was previously only proven in very simple examples. Here we showed that the same theories can be applied in a more realistic and complex context—it really shows that the theory could be key to retaining our memories and know-how,” explained the authors. After all, to emulate is to understand.
Over the past decade, neuroscience and machine learning have become increasingly intertwined. And no doubt our mushy thinking machines have more to offer their silicon brethren, and vice versa.
“We hope that this research represents a step towards programs that can learn in a more flexible and efficient way,” the authors say.
Image Credit: Shutterstock Continue reading

Posted in Human Robots

#429661 Video Friday: Robotics for Happiness, ...

Your weekly selection of awesome robot videos Continue reading

Posted in Human Robots