Tag Archives: humanoid
#429643 Is the Brain More Powerful Than We ...
If you’ve ever played around with an old music amplifier, you probably know what a firing neuron sounds like.
A sudden burst of static? Check. A rapid string of pops, like hundreds of bursting balloons? Check. A rough, scratchy bzzzz that unexpectedly assaults your ears? Check again.
Neuroscientists have long used an impressive library of tools to eavesdrop on the electrical chattering of neurons in lab animals. Like linguists deciphering an alien communication, scientists carefully dissect the patterns of neural firing to try to distill the grammatical rules of the brain—the “neural code.”
By cracking the code, we may be able to emulate the way neurons communicate, potentially leading to powerful computers that work like the brain.
It’s been a solid strategy. But as it turns out, scientists may have only been scratching the surface—and missing out on a huge part of the neural conversation.
Recently, a team from UCLA discovered a hidden layer of neural communication buried within the long, tortuous projections of neurons—the dendrites. Rather than acting as passive conductors of neuronal signals, as previously thought, the scientists discovered that dendrites actively generate their own spikes—five times larger and more frequent than the classic spikes stemming from neuronal bodies (dubbed “soma” in academic spheres).
"It’s like suddenly discovering that cables leading to your computer’s CPU can also process information—utterly bizarre, and somewhat controversial."
“Knowing [dendrites] are much more active than the soma fundamentally changes the nature of our understanding of how the brain computes information,” says Dr. Mayank Mehta, who led the study.
These findings suggest that learning may be happening at the level of dendrites rather than neurons, using fundamentally different rules than previously thought, Mehta explained to Singularity Hub.
Recording pains
How has such a wealth of computational power previously escaped scientists’ watchful eyes?
Part of it is mainstream neuroscience theory. According to standard teachings, dendrites are passive cables that shuttle electrical signals to the neuronal body, where all the computation occurs. If the integrated signals reach a certain threshold, the cell body generates a sharp electrical current—a spike—that can be measured by sophisticated electronics and amplifiers. These cell body spikes are believed to be the basis of our cognitive abilities, so of course, neuroscientists have turned their focus to deciphering their meanings.
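To make that textbook picture concrete, here is a minimal sketch in Python of the classic leaky integrate-and-fire model it describes: dendritic input is treated as a passive current, summed at the cell body, and a spike fires only when the integrated voltage crosses a threshold. The function name and parameter values are illustrative assumptions, not taken from the UCLA study.

```python
import numpy as np

def integrate_and_fire(inputs, dt=1e-3, tau=0.02, v_rest=-70e-3,
                       v_thresh=-55e-3, v_reset=-75e-3):
    """Textbook leaky integrate-and-fire soma: dendritic input is summed
    as a passive current that decays toward rest; an all-or-none spike is
    recorded only when the membrane voltage crosses the threshold."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(inputs):
        # Leak toward the resting potential, plus the summed dendritic drive.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:              # threshold crossed: emit a spike
            spike_times.append(step * dt)
            v = v_reset                # reset the membrane after firing
    return spike_times

# Toy input: a steady drive strong enough to push the soma past threshold.
drive = np.full(1000, 0.9)             # arbitrary units of input current
print(integrate_and_fire(drive))
```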
But recent studies in brain slices suggest that the story’s more complicated. When recording from dendrites on neurons in a dish, scientists noticed telltale signs that they may also generate spikes, independent of the cell body. It’s like suddenly discovering that cables leading to your computer’s CPU can also process information—utterly bizarre, and somewhat controversial.
Although these dendritic spikes (or “dendritic action potentials”) have been observed in slices and anesthetized animals, whether they occur in awake animals and contribute to behavior is an open question, explains the team in their paper.
To answer the question, the team decided to record from dendrites in animals going about their daily business. It’s a gigantic challenge: the average diameter of a dendrite is about one-hundredth that of a single human hair—imagine trying to hit one with an electrode amongst a jungle of intertwined projections in the brain, without damaging anything else, while the animal is walking around!
Then there’s the actual recording aspect. Scientists usually carefully puncture the membrane with a sharp electrode to pick up signals from the cell body. Do the same to a delicate dendrite, and it shreds into tiny bits.
To get around all these issues, the UCLA team devised a method that allows them to place their electrode near, rather than inside, the dendrites of rats. After a slew of careful experiments to ensure that they were in fact picking up dendritic signals, the team finally had a tool to eavesdrop on their activity—and stream it live to computers—for the first time.
Dendritic curiosities
For four days straight, the team monitored their recordings while the rats ate, slept and navigated their way around a maze. The team implanted electrodes into a brain area that’s responsible for planning movements, called the posterior parietal cortex, and patiently waited for signs of chitchatting dendrites.
Overnight, signals appeared on the team’s computer monitor that looked like jagged ocean waves, with each protrusion signaling a spike. Not only were the dendrites firing off action potentials, they were doing so in droves. As the rats slept, the dendrites were chatting away, spiking five times more than the cell bodies from which they originate. When the rats were awake and exploring the maze, the dendrites’ firing rate jumped to ten times that of the cell bodies.
What’s more, the dendrites were also “smart” in that they adapted their firing with time—a kind of plasticity that had previously only been observed in neuronal cell bodies. Since learning fundamentally relies on the ability to adapt and change, this suggests that the branches may be able to “learn” on their own.
Because the dendrites are so much more active than the cell body, it suggests that a lot of activity and information processing in a neuron is happening in the dendrites without informing the cell body, says Mehta.
"Based purely on volume, because dendrites are 100 times larger than the cell body, it could mean that brains have 100 times more processing capacity than we previously thought."
This semi-independence raises a tantalizing idea: that each dendritic branch can act as a computational unit and process information, much like the US states having their own governance that works in parallel with federal oversight.
Neuroscientists have long thought that learning happens when the cell bodies of two neurons are active at the same time: “fire together, wire together.” But our results indicate that learning takes place when the input neuron’s firing and a dendritic spike, rather than a cell body spike, happen at the same time, says Mehta.
“This is a fundamentally different learning rule,” he adds.
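To see how different that rule is, here is a toy Python sketch contrasting the textbook coincidence rule, gated on the cell body’s spike, with a dendrite-gated variant along the lines Mehta describes. The function names and the learning-rate value are invented for illustration and are not the study’s actual model.

```python
def hebbian_update(w, input_spike, soma_spike, lr=0.01):
    """Textbook rule: strengthen the synapse when the input neuron and
    the postsynaptic cell body fire together."""
    return w + lr * input_spike * soma_spike

def dendritic_update(w, input_spike, dendrite_spike, lr=0.01):
    """Variant suggested by the findings: the coincidence that matters is
    between the input and a local dendritic spike, which can occur even
    when the soma stays silent."""
    return w + lr * input_spike * dendrite_spike

# Example: the input arrives and the dendrite spikes, but the soma is quiet.
w = 0.5
w = dendritic_update(w, input_spike=1, dendrite_spike=1)  # weight grows
w = hebbian_update(w, input_spike=1, soma_spike=0)        # no change
print(w)  # 0.51
```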
Curiouser and curiouser
What’s even stranger is how the dendrites managed their own activity. Neuron spikes—the cell body type—are often considered “all or none,” in that you either have an action potential or you don’t.
Zero or one; purely digital.
While dendrites can fire digitally, they also generated large, graded fluctuations roughly twice the size of the spikes themselves.
“This large range…shows analog computation in the dendrite. This has never been seen before in any neural activity patterns,” says Mehta.
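A rough way to picture the hybrid code: a purely digital readout reports only whether a threshold was crossed, while an analog readout also carries a graded signal whose size varies continuously. The Python sketch below is an illustrative caricature, not a biophysical model; the threshold and gain values are arbitrary.

```python
import numpy as np

def somatic_output(v, thresh=1.0):
    """All-or-none: the soma either crosses threshold (1) or it doesn't (0)."""
    return (v >= thresh).astype(int)

def dendritic_output(v, thresh=1.0, analog_gain=2.0):
    """Hybrid caricature: digital spikes where threshold is crossed, plus a
    graded (analog) component that tracks the voltage and can swing larger
    than the spikes themselves."""
    digital = (v >= thresh).astype(int)
    analog = analog_gain * v            # graded fluctuation, not all-or-none
    return digital, analog

v = np.array([0.2, 0.8, 1.3, 0.5, 1.1])  # toy membrane voltages
print(somatic_output(v))                  # [0 0 1 0 1]
print(dendritic_output(v))
```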
So if dendrites can compute, what are they calculating?
The answer seems to be the here and now. The team looked at how both cell body and dendrites behaved while the rats explored the maze. While the cell body shot off spikes in anticipation of a behavior—turning a corner, stopping or suddenly rushing forward—the dendrites seemed to perform their computations right as the animal performed the action.
“Our findings suggest [that] individual cortical neurons take information about the current state of the world, present in the dendrites, and form an anticipatory, predictive response at the soma,” explain the authors, adding that this type of computation is often seen in artificial neural network models.
The team plans to take their dendritic recordings to virtual reality in future studies, to understand how networks of neurons learn abstract concepts such as space and time.
The secret lives of neurons
What this study shows is that we’ve been underestimating the computational power of the brain, says Mehta. Judging purely by volume, since dendrites are roughly 100 times larger than the cell body, brains could have 100 times more processing capacity than we previously thought, at least in rats.
But that’s just a rough estimate. And no doubt the number will change as scientists dig even deeper into the nuances of how neurons function.
This hybrid digital-analog, dendrite-soma, duo-processor parallel computing “is a major departure from what neuroscientists have believed for about 60 years,” says Mehta. It’s like uncovering a secret life of neurons, he adds.
These findings could galvanize other fields that aim to emulate the brain, like artificial intelligence or engineering new kinds of neuron-like computer chips to dramatically boost their computational prowess.
And if these findings are replicated by other researchers in the field, our neuroscience textbooks are in for a massive overhaul.
Neurons will no longer be the basic computational unit of the brain—dendrites, with their strange analog-digital hybrid code, will take that throne.
Image Credit: NIH/NICHD/Flickr CC
#429641 Researchers adapt a DIY robotics kit to ...
Elementary and secondary school students who later want to become scientists and engineers often get hands-on inspiration by using off-the-shelf kits to build and program robots. But so far it's been difficult to create robotic projects to foster interest in the "wet" sciences – biology, chemistry and medicine – so called because experiments in these fields often involve fluids.
#429640 People afraid of robots much more likely ...
"Technophobes"—people who fear robots, artificial intelligence and new technology that they don't understand—are much more likely to be afraid of losing their jobs to technology and to suffer anxiety-related mental health issues, a Baylor University study found. Continue reading →
#429638 AI Investors Rack Up Massive Returns in ...
An international team of researchers showed that artificial intelligence can make a killing on the stock market, and some real-world hedge funds are already trying it.
#429637 The Body Is the Missing Link for Truly ...
It’s tempting to think of the mind as a layer that sits on top of more primitive cognitive structures. We experience ourselves as conscious beings, after all, in a way that feels different to the rhythm of our heartbeat or the rumblings of our stomach. If the operations of the brain can be separated out and stratified, then perhaps we can construct something akin to just the top layer, and achieve human-like artificial intelligence (AI) while bypassing the messy flesh that characterises organic life.
I understand the appeal of this view, because I co-founded SwiftKey, a predictive-language software company that was bought by Microsoft. Our goal is to emulate the remarkable processes by which human beings can understand and manipulate language. We’ve made some decent progress: I was pretty proud of the elegant new communication system we built for the physicist Stephen Hawking between 2012 and 2014. But despite encouraging results, most of the time I’m reminded that we're nowhere near achieving human-like AI. Why? Because the layered model of cognition is wrong. Most AI researchers are currently missing a central piece of the puzzle: embodiment.
Things took a wrong turn at the beginning of modern AI, back in the 1950s. Computer scientists decided to try to imitate conscious reasoning by building logical systems based on symbols. The method involves associating real-world entities with digital codes to create virtual models of the environment, which could then be projected back onto the world itself. For instance, using symbolic logic, you could instruct a machine to ‘learn’ that a cat is an animal by encoding a specific piece of knowledge using a mathematical formula such as ‘cat > is > animal’. Such formulae can be rolled up into more complex statements that allow the system to manipulate and test propositions — such as whether your average cat is as big as a horse, or likely to chase a mouse.
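As a flavor of how such a system works, and where it becomes brittle, here is a tiny Python sketch of a symbolic knowledge base storing facts as explicit triples in the spirit of ‘cat > is > animal’. The facts and relation names are invented purely for illustration.

```python
# A minimal symbolic knowledge base: every fact is an explicit triple.
facts = {
    ("cat", "is", "animal"),
    ("mouse", "is", "animal"),
    ("horse", "is", "animal"),
    ("cat", "chases", "mouse"),
    ("horse", "bigger_than", "cat"),
}

def holds(subject, relation, obj):
    """A proposition is 'true' only if it was explicitly encoded -- which is
    exactly where the approach breaks down on ambiguity and shades of meaning."""
    return (subject, relation, obj) in facts

print(holds("cat", "is", "animal"))          # True
print(holds("cat", "bigger_than", "horse"))  # False -- but 'false' and 'unknown' are indistinguishable
```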
This method found some early success in simple contrived environments: in ‘SHRDLU’, a virtual world created by the computer scientist Terry Winograd at MIT between 1968 and 1970, users could talk to the computer in order to move around simple block shapes such as cones and balls. But symbolic logic proved hopelessly inadequate when faced with real-world problems, where fine-tuned symbols broke down in the face of ambiguous definitions and myriad shades of interpretation.
In later decades, as computing power grew, researchers switched to using statistics to extract patterns from massive quantities of data. These methods are often referred to as ‘machine learning’. Rather than trying to encode high-level knowledge and logical reasoning, machine learning employs a bottom-up approach in which algorithms discern relationships by repeating tasks, such as classifying the visual objects in images or transcribing recorded speech into text. Such a system might learn to identify images of cats, for example, by looking at millions of cat photos, or to make a connection between cats and mice based on the way they are referred to throughout large bodies of text.
Machine learning has produced many tremendous practical applications in recent years. We’ve built systems that surpass us at speech recognition, image processing and lip reading; that can beat us at chess, Jeopardy! and Go; and that are learning to create visual art, compose pop music and write their own software programs. To a degree, these self-teaching algorithms mimic what we know about the subconscious processes of organic brains. Machine-learning algorithms start with simple ‘features’ (individual letters or pixels, for instance) and combine them into more complex ‘categories’, taking into account the inherent uncertainty and ambiguity in real-world data. This is somewhat analogous to the visual cortex, which receives electrical signals from the eye and interprets them as identifiable patterns and objects.
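The bottom-up pipeline described here can be sketched in a few lines: raw pixel values are combined into simple learned features, which are in turn combined into category scores. In the Python toy below, the weights are random stand-ins for what training on millions of examples would actually discover; the shapes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'image': 64 raw pixel intensities standing in for low-level input.
pixels = rng.random(64)

# Layer 1: combine pixels into simple features (random stand-ins for
# weights that training would normally learn).
w_features = rng.normal(size=(16, 64))
features = np.maximum(0, w_features @ pixels)        # ReLU feature detectors

# Layer 2: combine features into category scores, e.g. ('cat', 'not cat').
w_categories = rng.normal(size=(2, 16))
scores = w_categories @ features
probabilities = np.exp(scores) / np.exp(scores).sum()  # softmax over categories

print(probabilities)  # untrained, so roughly arbitrary; data would shape these weights
```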
But algorithms are a long way from being able to think like us. The biggest distinction lies in our evolved biology, and how that biology processes information. Humans are made up of trillions of eukaryotic cells, which first appeared in the fossil record around 2.5 billion years ago. A human cell is a remarkable piece of networked machinery that has about the same number of components as a modern jumbo jet — all of which arose out of a longstanding, embedded encounter with the natural world. In Basin and Range (1981), the writer John McPhee observed that, if you stand with your arms outstretched to represent the whole history of the Earth, complex organisms began evolving only at the far wrist, while ‘in a single stroke with a medium-grained nail file you could eradicate human history’.
The traditional view of evolution suggests that our cellular complexity evolved from early eukaryotes via random genetic mutation and selection. But in 2005 the biologist James Shapiro at the University of Chicago outlined a radical new narrative. He argued that eukaryotic cells work ‘intelligently’ to adapt a host organism to its environment by manipulating their own DNA in response to environmental stimuli. Recent microbiological findings lend weight to this idea. For example, mammals’ immune systems have the tendency to duplicate sequences of DNA in order to generate effective antibodies to attack disease, and we now know that at least 43 per cent of the human genome is made up of DNA that can be moved from one location to another, through a process of natural ‘genetic engineering’.
Now, it’s a bit of a leap to go from smart, self-organising cells to the brainy sort of intelligence that concerns us here. But the point is that long before we were conscious, thinking beings, our cells were reading data from the environment and working together to mould us into robust, self-sustaining agents. What we take as intelligence, then, is not simply about using symbols to represent the world as it objectively is. Rather, we only have the world as it is revealed to us, which is rooted in our evolved, embodied needs as an organism. Nature ‘has built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it’, wrote the neuroscientist Antonio Damasio in Descartes’ Error (1994), his seminal book on cognition. In other words, we think with our whole body, not just with the brain.
I suspect that this basic imperative of bodily survival in an uncertain world is the basis of the flexibility and power of human intelligence. But few AI researchers have really embraced the implications of these insights. The motivating drive of most AI algorithms is to infer patterns from vast sets of training data — so it might require millions or even billions of individual cat photos to gain a high degree of accuracy in recognising cats. By contrast, thanks to our needs as an organism, human beings carry with them extraordinarily rich models of the body in its broader environment. We draw on experiences and expectations to predict likely outcomes from a relatively small number of observed samples. So when a human thinks about a cat, she can probably picture the way it moves, hear the sound of purring, feel the impending scratch from an unsheathed claw. She has a rich store of sensory information at her disposal to understand the idea of a ‘cat’, and other related concepts that might help her interact with such a creature.
This means that when a human approaches a new problem, most of the hard work has already been done. In ways that we’re only just beginning to understand, our body and brain, from the cellular level upwards, have already built a model of the world that we can apply almost instantly to a wide array of challenges. But for an AI algorithm, the process begins from scratch each time. There is an active and important line of research, known as ‘inductive transfer’, focused on using prior machine-learned knowledge to inform new solutions. However, as things stand, it’s questionable whether this approach will be able to capture anything like the richness of our own bodily models.
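The idea behind inductive transfer can also be sketched simply: features learned on a large prior task are reused as the starting point for a new task with only a handful of examples, so only a small classifier needs to be fit on top. The Python sketch below uses random numbers as stand-ins for the ‘pretrained’ features and a single least-squares fit as the new classifier, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend these feature weights were learned on a large prior task
# (e.g. millions of generic photos) -- random numbers stand in for them here.
pretrained_features = rng.normal(size=(16, 64))

def extract_features(x):
    """Reuse the prior task's representation instead of learning from scratch."""
    return np.maximum(0, pretrained_features @ x)

# New task with only a handful of labelled examples.
new_inputs = rng.random((5, 64))
new_labels = np.array([1, 0, 1, 1, 0])

# Fit only a small classifier on top of the frozen, transferred features
# (a single least-squares fit here, purely for illustration).
feats = np.array([extract_features(x) for x in new_inputs])
classifier, *_ = np.linalg.lstsq(feats, new_labels, rcond=None)

print(feats @ classifier)  # predictions for the five new-task examples
```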
On the same day that SwiftKey unveiled Hawking’s new communications system in 2014, he gave an interview to the BBC in which he warned that intelligent machines could end mankind. You can imagine which story ended up dominating the headlines. I agree with Hawking that we should take the risks of rogue AI seriously. But I believe we’re still very far from needing to worry about anything approaching human intelligence — and we have little hope of achieving this goal unless we think carefully about how to give algorithms some kind of long-term, embodied relationship with their environment.
This article was originally published at Aeon and has been republished under Creative Commons.
Image Credit: Patroclus by Jacques-Louis David (1780) via Wikipedia