Tag Archives: children
#431836 Do Our Brains Use Deep Learning to Make ...
The first time Dr. Blake Richards heard about deep learning, he was convinced that he wasn’t just looking at a technique that would revolutionize artificial intelligence. He also knew he was looking at something fundamental about the human brain.
That was the early 2000s, and Richards was taking a course with Dr. Geoff Hinton at the University of Toronto. Hinton, a pioneer architect of the algorithm that would later take the world by storm, was offering an introductory course on his learning method inspired by the human brain.
The key words here are “inspired by.” Despite Richards’ conviction, the odds were stacked against him. The human brain, as it happens, seems to lack a critical function that’s programmed into deep learning algorithms. On the surface, the algorithms were violating basic biological facts already proven by neuroscientists.
But what if, superficial differences aside, deep learning and the brain are actually compatible?
Now, in a new study published in eLife, Richards, working with DeepMind, proposed a new algorithm based on the biological structure of neurons in the neocortex. Also known as the cortex, this outermost region of the brain is home to higher cognitive functions such as reasoning, prediction, and flexible thought.
The team networked their artificial neurons together into a multi-layered network and challenged it with a classic computer vision task—identifying hand-written numbers.
The new algorithm performed well. But the kicker is that it analyzed the learning examples in a way that’s characteristic of deep learning algorithms, even though it was completely based on the brain’s fundamental biology.
“Deep learning is possible in a biological framework,” concludes the team.
Because the model is only a computer simulation at this point, Richards hopes to pass the baton to experimental neuroscientists, who could actively test whether the algorithm operates in an actual brain.
If so, the data could then be passed back to computer scientists to work out the next generation of massively parallel and low-energy algorithms to power our machines.
It’s a first step towards merging the two fields back into a “virtuous circle” of discovery and innovation.
The blame game
While you’ve probably heard of deep learning’s recent wins against humans in the game of Go, you might not know the nitty-gritty behind the algorithm’s operations.
In a nutshell, deep learning relies on an artificial neural network with virtual “neurons.” Like a towering skyscraper, the network is structured into hierarchies: lower-level neurons process aspects of an input—for example, a horizontal or vertical stroke that eventually forms the number four—whereas higher-level neurons extract more abstract aspects of the number four.
To teach the network, you give it examples of what you’re looking for. The signal propagates forward in the network (like climbing up a building), where each neuron works to fish out something fundamental about the number four.
Like children trying to learn a skill for the first time, the network initially doesn’t do so well. It spits out what it thinks a universal number four should look like—think a Picasso-esque rendition.
But here’s where the learning occurs: the algorithm compares the output with the ideal output, and computes the difference between the two (dubbed “error”). This error is then “backpropagated” throughout the entire network, telling each neuron: hey, this is how far off you were, so try adjusting your computation closer to the ideal.
Millions of examples and tweakings later, the network inches closer to the desired output and becomes highly proficient at the trained task.
This error signal is crucial for learning. Without efficient “backprop,” the network doesn’t know which of its neurons are off kilter. By assigning blame, the AI can better itself.
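To make the blame-assignment idea concrete, here is a minimal sketch of a forward pass and backprop on a toy two-layer network in plain NumPy. The network sizes, sigmoid activation, learning rate, and fake input are illustrative assumptions, not anything taken from the study.

```python
import numpy as np

# Toy two-layer network trained with backprop (illustrative only).
rng = np.random.default_rng(0)
x = rng.random((1, 4))            # a fake "image" with 4 pixels
target = np.array([[1.0]])        # the ideal output for this example

W1 = rng.standard_normal((4, 8)) * 0.1   # input -> hidden weights
W2 = rng.standard_normal((8, 1)) * 0.1   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # Forward pass: the signal climbs up the network.
    h = sigmoid(x @ W1)
    y = sigmoid(h @ W2)

    # Error: how far the output is from the ideal.
    error = y - target

    # Backward pass: the error is propagated back, telling each layer
    # how much it contributed to the miss ("assigning blame").
    delta_out = error * y * (1 - y)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)

    # Each set of weights nudges itself to reduce its share of the error.
    W2 -= 0.5 * h.T @ delta_out
    W1 -= 0.5 * x.T @ delta_hid
```

Note the backward pass reuses the forward weights (`W2.T`)—a detail that becomes important below.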
The brain does this too. How? We have no clue.
Biological No-Go
What’s clear, though, is that the deep learning solution—classic backprop—doesn’t work in the brain.
Backprop is a pretty needy function: it requires a very specific infrastructure to work as expected.
For one, each neuron in the network has to receive the error feedback. But in the brain, neurons are only connected to a few downstream partners (if that). For backprop to work in the brain, early-level neurons need to be able to receive information from billions of connections in their downstream circuits—a biological impossibility.
And while certain deep learning algorithms adopt a more local form of backprop—passing error signals directly between connected neurons—they still require the forward and backward connections to be symmetric. That kind of symmetry hardly ever occurs in the brain’s synapses.
More recent algorithms adopt a slightly different strategy, implementing a separate feedback pathway that helps the neurons figure out errors locally (one such algorithm is sketched below). While that’s more biologically plausible, the brain doesn’t have a separate computational network dedicated to the blame game.
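One well-known algorithm along these lines is feedback alignment, which sends errors back through a fixed random feedback matrix rather than the transposed forward weights. The sketch below shows that idea in the same toy NumPy setup as above; it is an illustration of the general approach, not necessarily one of the specific algorithms the article has in mind, and all sizes and constants are assumptions.

```python
import numpy as np

# Feedback alignment sketch: the backward pass uses a fixed random matrix B
# instead of W2.T, so forward and backward connections need not be symmetric.
rng = np.random.default_rng(0)
x = rng.random((1, 4))
target = np.array([[1.0]])
W1 = rng.standard_normal((4, 8)) * 0.1
W2 = rng.standard_normal((8, 1)) * 0.1
B = rng.standard_normal((1, 8)) * 0.1     # fixed random feedback pathway
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    h = sigmoid(x @ W1)
    y = sigmoid(h @ W2)
    error = y - target
    delta_out = error * y * (1 - y)
    # The error reaches the hidden layer through B, not through W2.T.
    delta_hid = (delta_out @ B) * h * (1 - h)
    W2 -= 0.5 * h.T @ delta_out
    W1 -= 0.5 * x.T @ delta_hid
```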
What it does have are neurons with intricate structures, unlike the uniform “balls” that are currently applied in deep learning.
Branching Networks
The team took inspiration from pyramidal cells that populate the human cortex.
“Most of these neurons are shaped like trees, with ‘roots’ deep in the brain and ‘branches’ close to the surface,” says Richards. “What’s interesting is that these roots receive a different set of inputs than the branches that are way up at the top of the tree.”
Illustration of a multi-compartment neural network model for deep learning. Left: reconstruction of pyramidal neurons from mouse primary visual cortex. Right: simplified pyramidal neuron models. Image Credit: CIFAR
Curiously, the structure of neurons often turns out to be “just right” for efficiently cracking a computational problem. Take the processing of sensations: the bottoms of pyramidal neurons are right smack where they need to be to receive sensory input, whereas the tops are conveniently placed to transmit feedback errors.
Could this intricate structure be evolution’s solution to channeling the error signal?
The team set up a multi-layered neural network based on previous algorithms. But rather than using uniform neurons, they gave the neurons in the middle layers—those sandwiched between the input and output—multiple compartments, just like real neurons.
When trained with hand-written digits, the algorithm performed much better than a single-layered network, despite lacking a way to perform classical backprop. The cell-like structure itself was sufficient to assign error: the error signals at one end of the neuron are naturally kept separate from input at the other end.
Then, at the right moment, the neuron brings both sources of information together to find the best solution.
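The model in the eLife paper is far more detailed, but a toy sketch can convey the segregation idea: each hidden unit keeps a “basal” compartment for feedforward input and an “apical” compartment for top-down feedback, and combines the two only to compute its local weight update. Everything below—names, sizes, constants, and the update rule—is an illustrative assumption, not the paper’s algorithm.

```python
import numpy as np

# Toy hidden unit with two segregated compartments (illustrative only; this is
# not the model from the eLife paper).
rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = rng.random((1, 4))                     # feedforward input (the "roots")
target = np.array([[1.0]])

W_ff = rng.standard_normal((4, 8)) * 0.1   # input -> hidden (basal) weights
W_out = rng.standard_normal((8, 1)) * 0.1  # hidden -> output weights
Y_fb = rng.standard_normal((1, 8)) * 0.1   # output -> hidden (apical) feedback

for step in range(1000):
    basal = x @ W_ff                       # drive arriving at the bottom of the "tree"
    rate = sigmoid(basal)                  # the unit's output firing rate
    y = sigmoid(rate @ W_out)              # network output

    # Feedback from above lands in the apical compartment, kept separate
    # from the basal input during the forward pass.
    apical = (target - y) @ Y_fb

    # The unit then combines the two compartments into a local error-like
    # signal and updates its own input weights.
    local_error = apical * rate * (1 - rate)
    W_ff += 0.5 * x.T @ local_error
    W_out += 0.5 * rate.T @ ((target - y) * y * (1 - y))
```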
There’s some biological evidence for this: neuroscientists have long known that the neuron’s input branches perform local computations, which can be integrated with signals that propagate backwards from the so-called output branch.
However, we don’t yet know if this is the brain’s way of dealing blame—a question that Richards urges neuroscientists to test out.
What’s more, the network parsed the problem in a way eerily similar to traditional deep learning algorithms: it took advantage of its multi-layered structure to extract progressively more abstract “ideas” about each number.
“[This is] the hallmark of deep learning,” the authors explain.
The Deep Learning Brain
Without doubt, there will be more twists and turns to the story as computer scientists incorporate more biological details into AI algorithms.
One aspect that Richards and team are already eyeing is a top-down predictive function, in which signals from higher levels directly influence how lower levels respond to input.
Feedback from upper levels doesn’t just provide error signals; it could also be nudging lower processing neurons towards a “better” activity pattern in real-time, says Richards.
The network doesn’t yet outperform other non-biologically derived (but “brain-inspired”) deep networks. But that’s not the point.
“Deep learning has had a huge impact on AI, but, to date, its impact on neuroscience has been limited,” the authors say.
Now neuroscientists have a lead they could experimentally test: that the structure of neurons underlies nature’s own deep learning algorithm.
“What we might see in the next decade or so is a real virtuous cycle of research between neuroscience and AI, where neuroscience discoveries help us to develop new AI and AI can help us interpret and understand our experimental data in neuroscience,” says Richards.
Image Credit: christitzeimaging.com / Shutterstock.com
#431385 Here’s How to Get to Conscious ...
“We cannot be conscious of what we are not conscious of.” – Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind
Despite what the director leads you to believe, the protagonist of Ex Machina, Alex Garland’s 2015 masterpiece, isn’t Caleb, a young programmer tasked with evaluating machine consciousness. Rather, it’s his target Ava, a breathtaking humanoid AI with a seemingly child-like naïveté and an enigmatic mind.
Like most cerebral movies, Ex Machina leaves the conclusion up to the viewer: was Ava actually conscious? In doing so, it also cleverly avoids a thorny question that has challenged most AI-centric movies to date: what is consciousness, and can machines have it?
Hollywood producers aren’t the only people stumped. As machine intelligence barrels forward at breakneck speed—not only exceeding human performance on games such as DOTA and Go, but doing so without the need for human expertise—the question has once more entered the scientific mainstream.
Are machines on the verge of consciousness?
This week, in a review published in the prestigious journal Science, cognitive scientists Drs. Stanislas Dehaene, Hakwan Lau and Sid Kouider of the Collège de France, University of California, Los Angeles and PSL Research University, respectively, argue: not yet, but there is a clear path forward.
The reason? Consciousness is “resolutely computational,” the authors say, in that it results from specific types of information processing, made possible by the hardware of the brain.
There is no magic juice, no extra spark—in fact, an experiential component (“what is it like to be conscious?”) isn’t even necessary to implement consciousness.
If consciousness results purely from the computations within our three-pound organ, then endowing machines with a similar quality is just a matter of translating biology to code.
Much like the way current powerful machine learning techniques heavily borrow from neurobiology, the authors write, we may be able to achieve artificial consciousness by studying the structures in our own brains that generate consciousness and implementing those insights as computer algorithms.
From Brain to Bot
Without doubt, the field of AI has greatly benefited from insights into our own minds, both in form and function.
For example, deep neural networks, the architecture of algorithms that underlie AlphaGo’s breathtaking sweep against its human competitors, are loosely based on the multi-layered biological neural networks that our brain cells self-organize into.
Reinforcement learning, a type of “training” that teaches AIs to learn from millions of examples, has roots in a centuries-old technique familiar to anyone with a dog: if it moves toward the right response (or result), give a reward; otherwise ask it to try again.
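A stripped-down example of that reward loop is a multi-armed bandit, where the agent nudges up its estimate of whichever action earned a reward and tries again otherwise. The payoff probabilities, learning rate, and exploration rate below are made-up values for illustration.

```python
import random

# Minimal bandit sketch of the reward idea behind reinforcement learning.
reward_prob = [0.2, 0.8, 0.5]          # hidden payoff of each action (made up)
values = [0.0, 0.0, 0.0]               # the agent's estimates
epsilon, lr = 0.1, 0.1                 # exploration rate, learning rate

for trial in range(10_000):
    if random.random() < epsilon:                      # occasionally explore
        action = random.randrange(len(values))
    else:                                              # otherwise exploit
        action = max(range(len(values)), key=values.__getitem__)
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    # "Give a reward" -> the estimate for that action moves toward the outcome.
    values[action] += lr * (reward - values[action])

print(values)   # the estimate for action 1 should end up highest
```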
In this sense, translating the architecture of human consciousness to machines seems like a no-brainer towards artificial consciousness. There’s just one big problem.
“Nobody in AI is working on building conscious machines because we just have nothing to go on. We just don’t have a clue about what to do,” said Dr. Stuart Russell, the author of Artificial Intelligence: A Modern Approach, in a 2015 interview with Science.
Multilayered consciousness
The hard part, long before we can consider coding machine consciousness, is figuring out what consciousness actually is.
To Dehaene and colleagues, consciousness is a multilayered construct with two “dimensions”: C1, the information readily in mind, and C2, the ability to obtain and monitor information about oneself. Both are essential to consciousness, but one can exist without the other.
Say you’re driving a car and the low fuel light comes on. Here, the perception of the fuel-tank light is C1—a mental representation that we can play with: we notice it, act upon it (refill the gas tank) and recall and speak about it at a later date (“I ran out of gas in the boonies!”).
“The first meaning we want to separate (from consciousness) is the notion of global availability,” explains Dehaene in an interview with Science. When you’re conscious of a word, your whole brain is aware of it, in a sense that you can use the information across modalities, he adds.
But C1 is not just a “mental sketchpad.” It represents an entire architecture that allows the brain to draw multiple modalities of information from our senses or from memories of related events, for example.
Unlike subconscious processing, which often relies on specific “modules” competent at a defined set of tasks, C1 is a global workspace that allows the brain to integrate information, decide on an action, and follow through until the end.
Like The Hunger Games, what we call “conscious” is whatever representation, at one point in time, wins the competition to access this mental workspace. The winners are shared among different brain computation circuits and are kept in the spotlight for the duration of decision-making to guide behavior.
Because of these features, C1 consciousness is highly stable and global—all related brain circuits are triggered, the authors explain.
For a complex machine such as an intelligent car, C1 is a first step towards addressing an impending problem, such as a low fuel light. In this example, the light itself is a type of subconscious signal: when it flashes, all of the other processes in the machine remain uninformed, and the car—even if equipped with state-of-the-art visual processing networks—passes by gas stations without hesitation.
With C1 in place, the fuel tank would alert the car computer (allowing the light to enter the car’s “conscious mind”), which in turn checks the built-in GPS to search for the next gas station.
“We think in a machine this would translate into a system that takes information out of whatever processing module it’s encapsulated in, and make it available to any of the other processing modules so they can use the information,” says Dehaene. “It’s a first sense of consciousness.”
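As a rough illustration of what Dehaene describes, the sketch below implements a tiny “global workspace”: a signal that enters it is broadcast to every registered module rather than staying encapsulated in the sensor that produced it. The module names, signal names, and threshold are assumptions invented for the fuel-light example.

```python
from typing import Callable

class GlobalWorkspace:
    """Whatever enters the workspace is shared with all subscribed modules."""

    def __init__(self) -> None:
        self.subscribers: list[Callable[[str, object], None]] = []

    def subscribe(self, module: Callable[[str, object], None]) -> None:
        self.subscribers.append(module)

    def broadcast(self, signal: str, payload: object) -> None:
        # Global availability: every module gets the signal, not just the sensor.
        for module in self.subscribers:
            module(signal, payload)

def navigation_module(signal: str, payload: object) -> None:
    if signal == "fuel_low":
        print(f"Navigation: rerouting to nearest gas station (fuel={payload})")

def speech_module(signal: str, payload: object) -> None:
    if signal == "fuel_low":
        print("Speech: 'We're low on gas.'")

workspace = GlobalWorkspace()
workspace.subscribe(navigation_module)
workspace.subscribe(speech_module)

# The fuel sensor's reading leaves its module and becomes globally available.
fuel_level = 0.05
if fuel_level < 0.1:
    workspace.broadcast("fuel_low", fuel_level)
```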
Meta-cognition
In a way, C1 reflects the mind’s capacity to access outside information. C2 goes introspective.
The authors define the second facet of consciousness, C2, as “meta-cognition”: reflecting on whether you know or perceive something, or whether you just made an error (“I think I may have filled my tank at the last gas station, but I forgot to keep a receipt to make sure”). This dimension reflects the link between consciousness and sense of self.
C2 is the level of consciousness that allows you to feel more or less confident about a decision when making a choice. In computational terms, it’s an algorithm that spews out the probability that a decision (or computation) is correct, even if it’s often experienced as a “gut feeling.”
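In code, the simplest version of such a confidence signal is a classifier that reports the probability of its own decision and flags low-confidence cases for a second look. The sketch below uses a softmax over made-up evidence scores; the numbers and the 0.6 threshold are illustrative assumptions.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.1, 0.3, 1.9])     # raw evidence for three options (made up)
probs = softmax(logits)
decision = int(np.argmax(probs))
confidence = float(probs[decision])    # the self-monitoring signal

if confidence < 0.6:
    print(f"Chose option {decision}, but only {confidence:.0%} sure; re-checking.")
else:
    print(f"Chose option {decision} with {confidence:.0%} confidence.")
```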
C2 also has its claws in memory and curiosity. These self-monitoring algorithms allow us to know what we know or don’t know—so-called “meta-memory,” responsible for that feeling of having something at the tip of your tongue. Monitoring what we know (or don’t know) is particularly important for children, says Dehaene.
“Young children absolutely need to monitor what they know in order to…inquire and become curious and learn more,” he explains.
The two aspects of consciousness synergize to our benefit: C1 pulls relevant information into our mental workspace (while discarding other “probable” ideas or solutions), while C2 helps with long-term reflection on whether the conscious thought led to a helpful response.
Going back to the low fuel light example, C1 allows the car to solve the problem in the moment—these algorithms globalize the information, so that the car becomes aware of the problem.
But to solve the problem, the car would need a “catalog of its cognitive abilities”—a self-awareness of what resources it has readily available, for example, a GPS map of gas stations.
“A car with this sort of self-knowledge is what we call having C2,” says Dehaene. Because the signal is globally available, and because the machine monitors it by examining its own state, the car would care about the low gas light and behave the way humans do—lowering fuel consumption and finding a gas station.
“Most present-day machine learning systems are devoid of any self-monitoring,” the authors note.
But their theory seems to be on the right track. In the few examples where a self-monitoring system was implemented—either within the structure of the algorithm or as a separate network—the AI has generated “internal models that are meta-cognitive in nature, making it possible for an agent to develop a (limited, implicit, practical) understanding of itself.”
Towards conscious machines
Would a machine endowed with C1 and C2 behave as if it were conscious? Very likely: a smartcar would “know” that it’s seeing something, express confidence in it, report it to others, and find the best solutions for problems. If its self-monitoring mechanisms break down, it may also suffer “hallucinations” or even experience visual illusions similar to humans.
Thanks to C1, it would be able to access the information it has and use it flexibly, and because of C2 it would know the limits of what it knows, says Dehaene. “I think (the machine) would be conscious,” and not merely appear so to humans.
If you’re left with a feeling that consciousness is far more than global information sharing and self-monitoring, you’re not alone.
“Such a purely functional definition of consciousness may leave some readers unsatisfied,” the authors acknowledge.
“But we’re trying to take a radical stance, maybe simplifying the problem. Consciousness is a functional property, and when we keep adding functions to machines, at some point these properties will characterize what we mean by consciousness,” Dehaene concludes.
Image Credit: agsandrew / Shutterstock.com
#431362 Does Regulating Artificial Intelligence ...
Some people are afraid that heavily armed artificially intelligent robots might take over the world, enslaving humanity—or perhaps exterminating us. These people, including tech-industry billionaire Elon Musk and eminent physicist Stephen Hawking, say artificial intelligence technology needs to be regulated to manage the risks. But Microsoft co-founder Bill Gates and Facebook’s Mark Zuckerberg disagree, saying the technology is not nearly advanced enough for those worries to be realistic.
As someone who researches how AI works in robotic decision-making, drones and self-driving vehicles, I’ve seen how beneficial it can be. I’ve developed AI software that lets robots working in teams make individual decisions as part of collective efforts to explore and solve problems. Researchers are already subject to existing rules, regulations and laws designed to protect public safety. Imposing further limitations risks reducing the potential for innovation with AI systems.
How is AI regulated now?
While the term “artificial intelligence” may conjure fantastical images of human-like robots, most people have encountered AI before. It helps us find similar products while shopping, offers movie and TV recommendations, and helps us search for websites. It grades student writing, provides personalized tutoring, and even recognizes objects carried through airport scanners.
In each case, the AI makes things easier for humans. For example, the AI software I developed could be used to plan and execute a search of a field for a plant or animal as part of a science experiment. But even as the AI frees people from doing this work, it is still basing its actions on human decisions and goals about where to search and what to look for.
In areas like these and many others, AI has the potential to do far more good than harm—if used properly. But I don’t believe additional regulations are currently needed. There are already laws on the books of nations, states, and towns governing civil and criminal liabilities for harmful actions. Our drones, for example, must obey FAA regulations, while the self-driving car AI must obey regular traffic laws to operate on public roadways.
Existing laws also cover what happens if a robot injures or kills a person, even if the injury is accidental and the robot’s programmer or operator isn’t criminally responsible. While lawmakers and regulators may need to refine responsibility for AI systems’ actions as technology advances, creating regulations beyond those that already exist could prohibit or slow the development of capabilities that would be overwhelmingly beneficial.
Potential risks from artificial intelligence
It may seem reasonable to worry about researchers developing very advanced artificial intelligence systems that can operate entirely outside human control. A common thought experiment deals with a self-driving car forced to make a decision about whether to run over a child who just stepped into the road or veer off into a guardrail, injuring the car’s occupants and perhaps even those in another vehicle.
Musk and Hawking, among others, worry that a hyper-capable AI system, no longer limited to a single set of tasks like controlling a self-driving car, might decide it doesn’t need humans anymore. It might even look at human stewardship of the planet, the interpersonal conflicts, theft, fraud, and frequent wars, and decide that the world would be better without people.
Science fiction author Isaac Asimov tried to address this potential by proposing three laws limiting robot decision-making: Robots cannot injure humans or allow them “to come to harm.” They must also obey humans—unless this would harm humans—and protect themselves, as long as this doesn’t harm humans or ignore an order.
But Asimov himself knew the three laws were not enough. And they don’t reflect the complexity of human values. What constitutes “harm” is an example: Should a robot protect humanity from suffering related to overpopulation, or should it protect individuals’ freedoms to make personal reproductive decisions?
We humans have already wrestled with these questions in our own, non-artificial intelligences. Researchers have proposed restrictions on human freedoms, including reducing reproduction, to control people’s behavior, population growth, and environmental damage. In general, society has decided against using those methods, even if their goals seem reasonable. Similarly, rather than regulating what AI systems can and can’t do, in my view it would be better to teach them human ethics and values—like parents do with human children.
Artificial intelligence benefits
People already benefit from AI every day—but this is just the beginning. AI-controlled robots could assist law enforcement in responding to human gunmen. Current police efforts must focus on preventing officers from being injured, but robots could step into harm’s way, potentially changing the outcomes of cases like the recent shooting of an armed college student at Georgia Tech and an unarmed high school student in Austin.
Intelligent robots can help humans in other ways, too. They can perform repetitive tasks, like processing sensor data, where human boredom may cause mistakes. They can limit human exposure to dangerous materials and dangerous situations, such as decontaminating a nuclear reactor or working in areas humans can’t go. In general, AI robots can provide humans with more time to pursue whatever they define as happiness by freeing them from having to do other work.
Achieving most of these benefits will require a lot more research and development. Regulations that make it more expensive to develop AIs or prevent certain uses may delay or forestall those efforts. This is particularly true for small businesses and individuals—key drivers of new technologies—who are not as well equipped to deal with regulation compliance as larger companies. In fact, the biggest beneficiary of AI regulation may be large companies that are used to dealing with it, because startups will have a harder time competing in a regulated environment.
The need for innovation
Humanity faced a similar set of issues in the early days of the internet. But the United States actively avoided regulating the internet to avoid stunting its early growth. Musk’s PayPal and numerous other businesses helped build the modern online world while subject only to regular human-scale rules, like those preventing theft and fraud.
Artificial intelligence systems have the potential to change how humans do just about everything. Scientists, engineers, programmers, and entrepreneurs need time to develop the technologies—and deliver their benefits. Their work should be free from concern that some AIs might be banned, and from the delays and costs associated with new AI-specific regulations.
This article was originally published on The Conversation. Read the original article.
Image Credit: Tatiana Shepeleva / Shutterstock.com