Tag Archives: Artificial intelligence
#429710 It’s National Robotics Week!
Welcome to National Robotics Week 2017. Let's celebrate!
#429709 A Robot Magic Kingdom? Disney Wants ...
In a move reflective of HBO's hit show "Westworld," the entertainment company has filed a patent for humanoid robot characters.
#429706 Canada Hopes to Energize Homegrown AI ...
Much of the groundbreaking AI research of recent decades originated in Canada, but it’s largely Silicon Valley that’s brought it into the real world. Now Canada is looking to take back its lead with the launch of a new research hub dedicated to the technology.
The non-profit Vector Institute, launched last week, will be based in Toronto and is designed to accelerate research and commercialization of AI and machine learning technology. The federal and provincial governments have pledged 150 million Canadian dollars (about $110 million), and a group of 31 corporate donors will also support the hub’s work over the next 10 years.
The federal government is putting forward C$40 million as part of a C$125 million countrywide artificial intelligence strategy, which will see similar institutes being established in Montreal and Edmonton.
The deep learning approach that is at the heart of most cutting-edge AI research had its genesis in Canadian universities, in particular the University of Toronto, thanks to the work of godfathers of the field like Geoffrey Hinton, Yann LeCun, and Yoshua Bengio.
But both Hinton and LeCun were lured south of the border when Silicon Valley started paying attention to the field, moving to Google and Facebook, respectively. Now Canada wants to stem the tide by attracting and retaining the field’s top global talent.
"Silicon Valley and England and other places really jumped on it, so we kind of lost the lead a little bit. I think the Canadian federal government has now realized that," Dr. Alan Bernstein, CEO and president of the Canadian Institute for Advanced Research, the organization tasked with administering the federal program, told CBC News.
They face a major challenge, though. The AI brain drain is well documented, with technology giants snapping up academics before they’ve even finished their PhDs and start-ups before they’ve even released a product.
Machine learning is seen by executives as a dark art with few acolytes, and so Silicon Valley companies like Google, Facebook, Apple and Seattle's Microsoft have been hoarding talent. Even more traditional engineering behemoths like GE and Samsung are jumping on the bandwagon, scared of being left behind.
Competing with these companies will take more than matching salary offers. Writing in The Globe and Mail in January, as plans for the Vector Institute neared fruition, Hinton, who will be the institute's chief scientific adviser, its chair Ed Clark, and several other AI experts said that when they asked AI researchers why they jumped ship to California, it was rarely the money.
Instead, it was the resources these companies could put at their disposal, and the chance to solve meaningful problems. To compete on these terms, they said, it will be necessary to create a critical mass of scientists, engineers, computer resources and data. That is the aim of the institute, and it will require boosting the number of machine learning graduates, forging industrial partnerships to get access to data, and acquiring the significant funding needed to support these activities.
In their article, Hinton et al talked about trying to “lure investment from foreign data-rich companies,” and they’ve already had some success. Google is helping fund the institute and has announced its intention to open an AI lab of its own in Toronto. Last November it also invested C$4.5 million in the University of Montreal's Montreal Institute for Learning Algorithms.
Thomson Reuters and General Motors both recently moved AI labs to Toronto as well, and the Royal Bank of Canada has launched a new Research in Machine Learning lab at the University of Toronto. Foteini Agrafioti, who heads that lab, told the BBC she’s hopeful these kinds of moves can stem the tide.
"I would hate to see one more professor moving south," she says. "Really, I hope that five years from now we look back and say we almost lost it but we caught it in time and reversed it."
The institute has a couple of other carrots too. Speaking to Motherboard, Hinton said, "We can offer people the chance to do any mix of basic research and applications that they want, and they're going to have lots of data, particularly from hospitals.”
Companies like Google have traditionally given their researchers a long leash, so how tempting this would be remains to be seen. But the institute does have another trump card — the political climate south of the border might make Canada a more tempting destination than before. “I think Trump might help there,” Hinton told the Toronto Star.
Whether the gambit pays off remains to be seen, and it will rely heavily on convincing the government and industry to invest more in the coming years. The institute's finances look healthy right now, but CIFAR's Bernstein doesn't sugarcoat it, telling CBC "it's not enough money."
"My estimate of the right amount of money to make a difference is half a billion or so, and I think we will get there," he added.
Image Credit: Shutterstock
#429699 OpenAI Just Beat Google DeepMind at ...
AI research has a long history of repurposing old ideas that have gone out of style. Now researchers at Elon Musk’s open source AI project have revisited “neuroevolution,” a field that has been around since the 1980s, and achieved state-of-the-art results.
The group, led by OpenAI’s research director Ilya Sutskever, has been exploring the use of a subset of algorithms from this field, called “evolution strategies,” which are aimed at solving optimization problems.
Despite the name, the approach is only loosely linked to biological evolution, the researchers say in a blog post announcing their results. On an abstract level, it relies on allowing successful individuals to pass on their characteristics to future generations. The researchers have taken these algorithms and reworked them to mesh better with deep neural networks and to run on large-scale distributed computing systems.
To validate their effectiveness, they then set them to work on a series of challenges seen as benchmarks for reinforcement learning, the technique behind many of Google DeepMind’s most impressive feats, including beating a champion Go player last year.
One of these challenges is to train the algorithm to play a variety of computer games developed by Atari. DeepMind made the news in 2013 when it showed it could use Deep Q-Learning—a combination of reinforcement learning and convolutional neural networks—to successfully tackle seven such games. The other is to get an algorithm to learn how to control a virtual humanoid walker in a physics engine.
To do this, the algorithm starts with a random policy—the set of rules that govern how the system should behave to get a high score in an Atari game, for example. It then creates several hundred copies of the policy—with some random variation—and these are tested on the game.
These policies are then mixed back together again, but with greater weight given to the policies that got the highest score in the game. The process repeats until the system comes up with a policy that can play the game well.
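The loop described above can be sketched in a few lines of Python. This is an illustrative toy, not OpenAI's actual code: the reward function, the hyperparameters, and the two-dimensional "policy" vector are all stand-ins of my choosing. In the real system, the parameters would define a neural network and the reward would be the score achieved in the game.

```python
import numpy as np

def reward(params):
    # Toy stand-in for "score achieved in the game": higher is better
    # as params approach a hypothetical optimum at [3.0, -1.5].
    target = np.array([3.0, -1.5])
    return -np.sum((params - target) ** 2)

def evolve(n_generations=300, population=100, sigma=0.1, lr=0.02):
    rng = np.random.default_rng(0)
    params = rng.standard_normal(2)  # start from a random policy
    for _ in range(n_generations):
        # Create many perturbed copies of the current policy...
        noise = rng.standard_normal((population, params.size))
        rewards = np.array([reward(params + sigma * n) for n in noise])
        # ...then mix them back together, giving greater weight to the
        # copies that scored highest (rewards are normalized for stability).
        weights = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        params += lr / (population * sigma) * noise.T @ weights
    return params

best = evolve()
```

Note that each "worker" evaluating a perturbed copy only needs to report back a single score, which hints at why the method parallelizes so easily.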
In one hour of training on the Atari challenge, the algorithm reached a level of mastery that took a reinforcement-learning system published by DeepMind last year a whole day to learn. On the walking problem, the system took 10 minutes, compared to 10 hours for Google's approach.
One of the keys to this dramatic performance was the fact that the approach is highly “parallelizable.” To solve the walking simulation, they spread computations over 1,440 CPU cores, while in the Atari challenge they used 720.
This is possible because it requires limited communication between the various “worker” algorithms testing the candidate policies. Scaling reinforcement algorithms like the one from DeepMind in the same way is challenging because there needs to be much more communication, the researchers say.
The approach also doesn't require backpropagation, a common technique in neural network-based approaches, including deep reinforcement learning. Backpropagation compares the network's output with the desired output and then feeds the resulting error back through the network to help optimize it.
The researchers say this makes the code shorter and the algorithm between two and three times faster in practice. They also suggest it will be particularly suited to longer challenges and situations where actions have long-lasting effects that may not become apparent until many steps down the line.
The approach does have its limitations, though. These kinds of algorithms are usually compared based on their data efficiency—the number of iterations required to achieve a specific score in a game, for example. On this metric, the OpenAI approach does worse than reinforcement learning approaches, although this is offset by the fact that it is highly parallelizable and so can carry out iterations more quickly.
For supervised learning problems like image classification and speech recognition, which currently have the most real-world applications, the approach can also be as much as 1,000 times slower than other approaches that use backpropagation.
Nevertheless, the work demonstrates promising new applications for out-of-style evolutionary approaches, and OpenAI is not the only group investigating them. Google has been experimenting with similar strategies to devise better image recognition algorithms. Whether this represents the next evolution in AI, we will have to wait and see.
Image Credit: Shutterstock
#429693 Ghost in the Shell Thrills, But Ducks ...
How closely will we live with the technology we use in the future? How will it change us? And how close is “close”? Ghost in the Shell imagines a futuristic, hi-tech but grimy and ghetto-ridden Japanese metropolis populated by people, robots, and technologically-enhanced human cyborgs.
Beyond the superhuman strength, resilience, and X-ray vision provided by bodily enhancements, one of the most transformative aspects of this world is the idea of brain augmentation, that as cyborgs we might have two brains rather than one. Our biological brain—the “ghost” in the “shell”—would interface via neural implants to powerful embedded computers that would give us lightning-fast reactions and heightened powers of reasoning, learning and memory.
In Ghost in the Shell, first written as a manga series in 1989 during the early days of the internet, Japanese artist Masamune Shirow foresaw that this brain-computer interface would overcome the fundamental limitation of the human condition: that our minds are trapped inside our heads. In Shirow's transhuman future our minds would be free to roam, relaying thoughts and imaginings to other networked brains, entering via the cloud into distant devices and sensors, even "deep diving" the mind of another in order to understand and share their experiences.
Shirow’s stories also pinpointed some of the dangers of this giant technological leap. In a world where knowledge is power, these brain-computer interfaces would create new tools for government surveillance and control, and new kinds of crime such as “mind-jacking”—the remote control of another’s thoughts and actions. Nevertheless, there was also a spiritual side to Shirow’s narrative: that the cyborg condition might be the next step in our evolution, and that the widening of perspective and the merging of individuality from a networking of minds could be a path to enlightenment.
Lost in translation
Borrowing heavily from Ghost in the Shell’s re-telling by director Mamoru Oshii in his classic 1995 animated film version, the newly arrived Hollywood cinematic interpretation stars Scarlett Johansson as Major, a cyborg working for Section 9, a government-run security organization charged with fighting corruption and terrorism. Directed by Rupert Sanders, the new film is visually stunning and the storyline lovingly recreates some of the best scenes from the original anime.
Sadly, though, Sanders’ movie pulls its punches around the core question of how this technology could change the human condition. Indeed, if casting Western actors in most key roles wasn’t enough, the new film also engages in a form of cultural appropriation by superimposing the myth of the American all-action hero—who you are is defined by what you do—on a character who is almost the complete antithesis of that notion.
Major fights the battles of her masters with increasing reluctance, questioning the actions asked of her, drawn to escape and contemplation. This is no action hero, but someone trying to piece together fragments of meaning from within her cyborg existence with which to assemble a worthwhile life.
A scene midway through the film shows, even more bluntly, the central role of memory in creating the self. We see the complete breakdown of a man who, having been mind-jacked, faces the realization that his identity is built on false memories of a life never lived, and a family that never existed. The 1995 anime insists that we are individuals only because of our memories. While the new film retains much of the same storyline, it refuses to follow the inference. Rather than being defined by our memories, Major’s voice tells us that “we cling to memories as if they define us, but what we do defines us.” Perhaps this is meant to be reassuring, but to me, it is both confusing and unfaithful to the spirit of the original tale.
The new film also backs away from another key idea of Shirow’s work, that the human mind—even the human species—are, in essence, information. Where the 1995 anime talked of the possibility of leaving the physical body—the shell—elevating consciousness to a higher plane and “becoming part of all things," the remake has only veiled hints that such a merging of minds, or a melding of the human mind with the internet, could be either positive or transformational.
Open lives
In the real world, the notion of networked minds is already upon us. Touchscreens, keypads, cameras, mobile, the cloud: we are more and more directly and instantly linked to a widening circle of people, while opening up our personal lives to surveillance and potential manipulation by governments, advertisers, or worse.
Brain-computer interfaces are also on their way. There are already brain implants that can mitigate some of the symptoms of brain conditions, from Parkinson’s disease to depression. Others are being developed to overcome sensory impairments such as blindness or to control a paralyzed limb. On the other hand, the remote control of behavior using implanted brain stimulators has been demonstrated in several animal species, a frightening technology that could be applied to humans if someone were to choose to misuse it in that way.
The possibility of voluntarily networking our minds is also here. Wearable headsets like the Emotiv are simple electroencephalography (EEG) devices that can detect some of the signature electrical signals emitted by our brains, and are sufficiently intelligent to interpret those signals and turn them into useful output. For example, an Emotiv connected to a computer can control a video game by the power of the wearer's thoughts alone.
In terms of artificial intelligence, the work in my lab at Sheffield Robotics explores the possibility of building robot analogues of human memory for events and experiences. The fusion of such systems with the human brain is not possible with today’s technology—but it is imaginable in the decades to come. Were an electronic implant developed that could vastly improve your memory and intelligence, would you be tempted? Such technologies may be on the horizon, and science fiction imaginings such as Ghost in the Shell suggest that their power to fundamentally change the human condition should not be underestimated.
This article was originally published on The Conversation. Read the original article.
Image Credit: Paramount/YouTube