Tag Archives: neuron

#439721 New Study Finds a Single Neuron Is a ...

Comparing brains to computers is a long and dearly held analogy in both neuroscience and computer science.

It’s not hard to see why.

Our brains can perform many of the tasks we want computers to handle with an easy, mysterious grace. So, it goes, understanding the inner workings of our minds can help us build better computers; and those computers can help us better understand our own minds. Also, if brains are like computers, knowing how much computation it takes them to do what they do can help us predict when machines will match minds.

Indeed, there’s already a productive flow of knowledge between the fields.

Deep learning, a powerful form of artificial intelligence, for example, is loosely modeled on the brain’s vast, layered networks of neurons.

You can think of each “node” in a deep neural network as an artificial neuron. Like neurons, nodes receive signals from other nodes connected to them and perform mathematical operations to transform input into output.

Depending on the signals a node receives, it may opt to send its own signal to all the nodes in its network. In this way, signals cascade through layer upon layer of nodes, progressively tuning and sharpening the algorithm.
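
To make the node analogy concrete, here is a minimal sketch in plain NumPy (no particular deep learning framework is implied) of what a single node and a layer of nodes compute: a weighted sum of incoming signals passed through a nonlinearity. All sizes and weights below are arbitrary.

```python
import numpy as np

def relu(x):
    # A common choice of nonlinearity; real networks may use others.
    return np.maximum(0.0, x)

def artificial_neuron(inputs, weights, bias):
    """One node: weight the incoming signals, sum them, apply a nonlinearity."""
    return relu(np.dot(weights, inputs) + bias)

def dense_layer(inputs, weight_matrix, biases):
    """A layer of nodes; its outputs become the inputs to the next layer."""
    return relu(weight_matrix @ inputs + biases)

rng = np.random.default_rng(0)
x = rng.normal(size=3)                                    # 3 incoming signals
print(artificial_neuron(x, rng.normal(size=3), 0.0))      # one node's output
h = dense_layer(x, rng.normal(size=(4, 3)), np.zeros(4))  # layer 1: 4 nodes
y = dense_layer(h, rng.normal(size=(2, 4)), np.zeros(2))  # layer 2: 2 nodes
print(y)                                                  # cascaded output
```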

The brain works like this too. But the key word above is loosely.

Scientists know biological neurons are more complex than the artificial neurons employed in deep learning algorithms, but it’s an open question just how much more complex.

In a fascinating paper published recently in the journal Neuron, a team of researchers from the Hebrew University of Jerusalem tried to get us a little closer to an answer. The researchers expected the results to show that biological neurons are more complex; even so, they were surprised by just how much more complex those neurons actually are.

In the study, the team found it took a five- to eight-layer neural network, or nearly 1,000 artificial neurons, to mimic the behavior of a single biological neuron from the brain’s cortex.

Though the researchers caution the results are an upper bound for complexity—as opposed to an exact measurement of it—they also believe their findings might help scientists further zero in on what exactly makes biological neurons so complex. And that knowledge, perhaps, can help engineers design even more capable neural networks and AI.

“[The result] forms a bridge from biological neurons to artificial neurons,” Andreas Tolias, a computational neuroscientist at Baylor College of Medicine, told Quanta last week.

Amazing Brains
Neurons are the cells that make up our brains. There are many different types of neurons, but generally, they have three parts: spindly, branching structures called dendrites, a cell body, and a root-like axon.

On one end, dendrites connect to a network of other neurons at junctures called synapses. At the other end, the axon forms synapses with a different population of neurons. Each cell receives electrochemical signals through its dendrites, filters those signals, and then selectively passes along its own signals (or spikes).

To computationally compare biological and artificial neurons, the team asked: How big of an artificial neural network would it take to simulate the behavior of a single biological neuron?

First, they built a model of a biological neuron (in this case, a pyramidal neuron from a rat’s cortex). The model used some 10,000 differential equations to simulate how and when the neuron would translate a series of input signals into a spike of its own.

They then fed inputs into their simulated neuron, recorded the outputs, and trained deep learning algorithms on all the data. Their goal? Find the algorithm that could most accurately approximate the model.

(Video: A model of a pyramidal neuron (left) receives signals through its dendritic branches. In this case, the signals provoke three spikes.)

They increased the number of layers in the algorithm until it was 99 percent accurate at predicting the simulated neuron’s output given a set of inputs. The sweet spot was at least five layers but no more than eight, or around 1,000 artificial neurons per biological neuron. The deep learning algorithm was much simpler than the original model—but still quite complex.
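
As a loose illustration of that fitting procedure only, not the authors' actual setup, the sketch below trains a small multi-layer network on input/output pairs produced by a stand-in "simulated neuron" function. The real study worked with time-varying synaptic inputs from a detailed biophysical model and much larger networks; every function name and size here is hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the simulated neuron: some fixed nonlinear mapping
# from a vector of inputs to a response. The real study used a detailed
# biophysical simulation, not a formula like this.
def toy_simulated_neuron(inputs):
    return torch.sigmoid(inputs.sum(dim=1, keepdim=True)
                         - inputs.pow(2).mean(dim=1, keepdim=True))

n_inputs = 128
x = torch.randn(4096, n_inputs)   # random "input signal" patterns
y = toy_simulated_neuron(x)       # recorded outputs the network must imitate

# A small multi-layer network playing the role of the approximating deep net.
net = nn.Sequential(
    nn.Linear(n_inputs, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(500):           # fit the network to the recorded input/output data
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()

print(f"final fit error: {loss.item():.4f}")
```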

From where does this complexity arise?

As it turns out, it’s mostly due to a type of chemical receptor in dendrites—the NMDA ion channel—and the branching of dendrites in space. “Take away one of those things, and a neuron turns [into] a simple device,” lead author David Beniaguev tweeted in 2019, describing an earlier version of the work published as a preprint.

Indeed, after removing these features, the team found they could match the simplified biological model with but a single-layer deep learning algorithm.

A Moving Benchmark
It’s tempting to extrapolate the team’s results to estimate the computational complexity of the whole brain. But we’re nowhere near such a measure.

For one, it’s possible the team didn’t find the most efficient algorithm.

It’s common for the developer community to rapidly improve upon the first version of an advanced deep learning algorithm. Given the intensive iteration in the study, the team is confident in the results, but they also released the model, data, and algorithm to the scientific community to see if anyone could do better.

Also, the model neuron is from a rat’s brain, as opposed to a human’s, and it’s only one type of brain cell. Further, the study is comparing a model to a model—there is, as of yet, no way to make a direct comparison to a physical neuron in the brain. It’s entirely possible the real thing is more, not less, complex.

Still, the team believes their work can push neuroscience and AI forward.

In the former case, the study is further evidence dendrites are complicated critters worthy of more attention. In the latter, it may lead to radical new algorithmic architectures.

Idan Segev, a coauthor on the paper, suggests engineers should try replacing the simple artificial neurons in today’s algorithms with a mini five-layer network simulating a biological neuron. “We call for the replacement of the deep network technology to make it closer to how the brain works by replacing each simple unit in the deep network today with a unit that represents a neuron, which is already—on its own—deep,” Segev said.

Whether so much added complexity would pay off is uncertain. Experts debate how much of the brain’s detail algorithms need to capture to achieve similar or better results.

But it’s hard to argue with millions of years of evolutionary experimentation. So far, following the brain’s blueprint has been a rewarding strategy. And if this work is any indication, future neural networks may well dwarf today’s in size and complexity.

Image Credit: NICHD/S. Jeong

Posted in Human Robots

#439400 A Neuron’s Sense of Timing Encodes ...

We like to think of brains as computers: physical systems that process inputs and spit out outputs. But, obviously, what’s between your ears bears little resemblance to your laptop.

Computer scientists know the intimate details of how computers store and process information because they design and build them. But neuroscientists didn’t build brains, which makes the brain a bit like a piece of alien technology they’ve found and are trying to reverse engineer.

At this point, researchers have catalogued the components fairly well. We know the brain is a vast and intricate network of cells called neurons that communicate by way of electrical and chemical signals. What’s harder to figure out is how this network makes sense of the world.

To do that, scientists try to tie behavior to activity in the brain by listening to the chatter of its neurons firing. If neurons in a region get rowdy when a person is eating chocolate, well, those cells might be processing taste or directing chewing. This method has mostly focused on the frequency at which neurons fire—that is, how often they fire in a given period of time.

But frequency alone is an imprecise measure. For years, research in rats has suggested that the timing of a neuron’s firing relative to its peers—during navigation of spaces in particular—may also encode information. This process, in which the timing of some neurons grows increasingly out of step with their neighbors, is called “phase precession.”

It wasn’t known if phase precession was widespread in mammals, but recent studies have found it in bats and marmosets. And now, a new study has shown that it happens in humans too, strengthening the case that phase precession may occur across species.

The new study also found evidence of phase precession outside of spatial tasks, lending some weight to the idea it may be a more general process in learning throughout the brain.

The paper was published in the journal Cell last month by a Columbia University team of researchers led by neuroscientist and biomedical engineer Josh Jacobs.

The researchers say more studies are needed to flesh out the role of phase precession in the brain, and how or if it contributes to learning is still uncertain.

But to Salman Qasim, a post-doctoral fellow on Jacobs’ team and lead author of the paper, the patterns are tantalizing. “[Phase precession is] so prominent and prevalent in the rodent brain that it makes you want to assume it’s a generalizable mechanism,” he told Quanta Magazine this month.

Rat Brains to Human Brains
Though phase precession in rats has been studied for decades, it’s taken longer to unearth it in humans for a couple reasons. For one, it’s more challenging to study in humans at the level of neurons because it requires placing electrodes deep in the brain. Also, our patterns of brain activity are subtler and more complex, making them harder to untangle.

To solve the first challenge, the team analyzed decade-old recordings of neural chatter from 13 patients with drug-resistant epilepsy. As a part of their treatment, the patients had electrodes implanted to map the storms of activity during a seizure.

In one test, they navigated a two-dimensional virtual world—like a simple video game—on a laptop. Their brain activity was recorded as they were instructed to drive and drop off “passengers” at six stores around the perimeter of a rectangular track.

The team combed through this activity for hints of phase precession.

Active regions of the brain tend to fire together at a steady rate. These rhythms, called brain waves, are like a metronome or internal clock. Phase precession occurs when individual neurons fall out of step with the prevailing brain waves nearby. In navigation of spaces, like in this study, a particular type of neuron, called a “place cell,” fires earlier and earlier compared to its peers as the subject approaches and passes through a region. Its early firing eventually links up with the late firing of the next place cell in the chain, strengthening the synapse between the two and encoding a path through space.

In rats, theta waves in the hippocampus, which is a region associated with navigation, are strong and clear, making precession easier to pick out. In humans, they’re weaker and more variable. So the team used a clever statistical analysis to widen the observed wave frequencies into a range. And that’s when the phase precession clearly stood out.
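
The team's statistical approach was more sophisticated than this, but the generic first step in measuring phase precession, assigning each spike a phase relative to the ongoing theta-band rhythm, is commonly done with a bandpass filter and a Hilbert transform. A rough sketch, using entirely synthetic signals and made-up spike times:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                          # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)         # 10 seconds of synthetic "brain activity"
lfp = np.sin(2 * np.pi * 8 * t) + 0.5 * np.random.randn(t.size)

# Bandpass the signal around the theta band (roughly 4-12 Hz).
b, a = butter(3, [4, 12], btype="bandpass", fs=fs)
theta = filtfilt(b, a, lfp)

# Instantaneous phase of the theta oscillation via the Hilbert transform.
phase = np.angle(hilbert(theta))

# Made-up spike times (seconds); in a real analysis these come from recordings.
spike_times = np.array([1.01, 1.13, 1.24, 1.36, 1.47])
spike_idx = (spike_times * fs).astype(int)
spike_phases = np.degrees(phase[spike_idx])

# Phase precession would show up as spike phases drifting systematically
# earlier across successive theta cycles.
print(spike_phases)
```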

This result lined up with prior navigation studies in rats. But the team went a step further.

In another part of the brain, the frontal cortex, they found phase precession in neurons not involved in navigation. The timing of these cells fell out of step with their neighbors as the subject achieved the goal of dropping passengers off at one of the stores. This indicated phase precession may also encode the sequence of steps leading up to a goal.

The findings, therefore, extend the occurrence of phase precession to humans and to new tasks and regions in the brain. The researchers say this suggests the phenomenon may be a general mechanism that encodes experiences over time. Indeed, other research—some very recent and not yet peer-reviewed—validates this idea, tying it to the processing of sounds, smells, and series of images.

And, the cherry on top, the process compresses experience to the length of a single brain wave. That is, an experience that takes seconds—say, a rat moving through several locations in the real world—is compressed to the fraction of a second it takes the associated neurons to fire in sequence.

In theory, this could help explain how we learn so fast from so few examples, something artificial intelligence algorithms struggle to do.

As enticing as the research is, however, both the team involved in the study and other researchers say it’s still too early to draw definitive conclusions. There are other theories for how humans learn so quickly, and it’s possible phase precession is an artifact of the way the brain functions as opposed to a driver of its information processing.

That said, the results justify more serious investigation.

“Anyone who looks at brain activity as much as we do knows that it’s often a chaotic, stochastic mess,” Qasim told Wired last month. “So when you see some order emerge in that chaos, you want to ascribe to it some sort of functional purpose.”

Only time will tell if that order is a fundamental neural algorithm or something else.

Image Credit: Daniele Franchi / Unsplash

Posted in Human Robots

#437957 Meet Assembloids, Mini Human Brains With ...

It’s not often that a twitching, snowman-shaped blob of 3D human tissue makes someone’s day.

But when Dr. Sergiu Pasca at Stanford University witnessed the tiny movement, he knew his lab had achieved something special. You see, the blob had been assembled from three lab-grown chunks of human tissue: a mini-brain, a mini-spinal cord, and a mini-muscle. Each individual component, churned to eerie humanoid perfection inside bubbling incubators, is already a work of scientific genius. But Pasca took the extra step, marinating the three components together inside a soup of nutrients.

The result was a bizarre, Lego-like human tissue that replicates the basic circuits behind how we decide to move. Without external prompting, when churned together like ice cream, the three ingredients physically linked up into a fully functional circuit. The 3D mini-brain, through the information highway formed by the artificial spinal cord, was able to make the lab-grown muscle twitch on demand.

In other words, if you think isolated mini-brains—known formally as brain organoids—floating in a jar is creepy, upgrade your nightmares. The next big thing in probing the brain is assembloids—free-floating brain circuits—that now combine brain tissue with an external output.

The end goal isn’t to freak people out. Rather, it’s to recapitulate our nervous system, from input to output, inside the controlled environment of a Petri dish. An autonomous, living brain-spinal cord-muscle entity is an invaluable model for figuring out how our own brains direct the intricate muscle movements that allow us to stay upright, walk, or type on a keyboard.

It’s a stepping stone toward more dexterous brain-machine interfaces, and a model for understanding what happens when brain-muscle connections fail—as in devastating conditions like Lou Gehrig’s disease or Parkinson’s, where people slowly lose muscle control due to the gradual death of neurons that control muscle function. Assembloids are a sort of “mini-me,” a workaround for testing potential treatments on a simple “replica” of a person rather than directly on a human.

From Organoids to Assembloids
The miniature snippet of the human nervous system has been a long time in the making.

It all started in 2013, when Dr. Madeline Lancaster, then a post-doc at the Institute of Molecular Biotechnology in Vienna, grew a shockingly intricate 3D replica of human brain tissue inside a whirling incubator. Radically different from standard cell cultures, which grind up brain tissue and reconstruct it as a flat network of cells, Lancaster’s 3D brain organoids were incredibly sophisticated in their recapitulation of the human brain during development. Subsequent studies further solidified their similarity to the developing brain of a fetus—not just in terms of neuron types, but also their connections and structure.

With the finding that these mini-brains sparked with electrical activity, bioethicists increasingly raised red flags that the blobs of human brain tissue—no larger than the size of a pea at most—could harbor the potential to develop a sense of awareness if further matured and with external input and output.

Despite these concerns, brain organoids became an instant hit. Because they’re made of human tissue—often taken from actual human patients and converted into stem-cell-like states—organoids harbor the same genetic makeup as their donors. This makes it possible to study perplexing conditions such as autism, schizophrenia, or other brain disorders in a dish. What’s more, because they’re grown in the lab, it’s possible to genetically edit the mini-brains to test potential genetic culprits in the search for a cure.

Yet mini-brains had an Achilles’ heel: not all were made the same. Rather, depending on the region of the brain that was reverse engineered, the cells had to be persuaded by different cocktails of chemical soups and maintained in isolation. It was a stark contrast to our own developing brains, where regions are connected through highways of neural networks and work in tandem.

Pasca faced the problem head-on. Betting on the brain’s self-assembling capacity, his team hypothesized that it might be possible to grow different mini-brains, each reflecting a different brain region, and have them fuse together into a synchronized band of neuron circuits to process information. Last year, his idea paid off.

In one mind-blowing study, his team grew two separate portions of the brain into blobs, one representing the cortex, the other a deeper part of the brain known to control reward and movement, called the striatum. Shockingly, when put together, the two blobs of human brain tissue fused into a functional couple, automatically establishing neural highways that resulted in one of the most sophisticated recapitulations of a human brain. Pasca crowned this tissue engineering crème-de-la-crème “assembloids,” a portmanteau between “assemble” and “organoids.”

“We have demonstrated that regionalized brain spheroids can be put together to form fused structures called brain assembloids,” said Pasca at the time. “[They] can then be used to investigate developmental processes that were previously inaccessible.”

And if that’s possible for wiring up a lab-grown brain, why wouldn’t it work for larger neural circuits?

Assembloids, Assemble
The new study is the fruition of that idea.

The team started with human skin cells, scraped off of eight healthy people, and transformed them into a stem-cell-like state, called iPSCs. These cells have long been touted as the breakthrough for personalized medical treatment, because each reflects the genetic makeup of its original host.

Using two separate cocktails, the team then generated mini-brains and mini-spinal cords using these iPSCs. The two components were placed together “in close proximity” for three days inside a lab incubator, gently floating around each other in an intricate dance. To the team’s surprise, under the microscope using tracers that glow in the dark, they saw highways of branches extending from one organoid to the other like arms in a tight embrace. When stimulated with electricity, the links fired up, suggesting that the connections weren’t just for show—they’re capable of transmitting information.

“We made the parts,” said Pasca, “but they knew how to put themselves together.”

Then came the ménage à trois. Once the mini-brain and spinal cord formed their double-decker ice cream scoop, the team overlaid them onto a layer of muscle cells—cultured separately into a human-like muscular structure. The end result was a somewhat bizarre and silly-looking snowman, made of three oddly-shaped spherical balls.

Yet against all odds, the brain-spinal cord assembly reached out to the lab-grown muscle. Using a variety of tools, including measuring muscle contraction, the team found that this utterly Frankenstein-like snowman was able to make the muscle component contract—in a way similar to how our muscles twitch when needed.

“Skeletal muscle doesn’t usually contract on its own,” said Pasca. “Seeing that first twitch in a lab dish immediately after cortical stimulation is something that’s not soon forgotten.”

When tested for longevity, the contraption lasted for up to 10 weeks without any sort of breakdown. Far from a one-shot wonder, the isolated circuit worked even better the longer each component was connected.

Pasca isn’t the first to give mini-brains an output channel. Last year, the queen of brain organoids, Lancaster, chopped up mature mini-brains into slices, which were then linked to muscle tissue through a cultured spinal cord. Assembloids are a step up, showing that it’s possible to automatically sew multiple nerve-linked structures together, such as brain and muscle, sans slicing.

The question is what happens when these assembloids become more sophisticated, edging ever closer to the inherent wiring that powers our movements. Pasca’s study targets outputs, but what about inputs? Can we wire input channels, such as retinal cells, to mini-brains that have a rudimentary visual cortex to process those examples? Learning, after all, depends on examples of our world, which are processed inside computational circuits and delivered as outputs—potentially, muscle contractions.

To be clear, few would argue that today’s mini-brains are capable of any sort of consciousness or awareness. But as mini-brains get increasingly more sophisticated, at what point can we consider them a sort of AI, capable of computation or even something that mimics thought? We don’t yet have an answer—but the debates are on.

Image Credit: christitzeimaging.com / Shutterstock.com

Posted in Human Robots

#437575 AI-Directed Robotic Hand Learns How to ...

Reaching for a nearby object seems like a mindless task, but the action requires a sophisticated neural network that took humans millions of years to evolve. Now, robots are acquiring that same ability using artificial neural networks. In a recent study, a robotic hand “learns” to pick up objects of different shapes and hardness using three different grasping motions.

The key to this development is something called a spiking neuron. Like real neurons in the brain, artificial neurons in a spiking neural network (SNN) fire together to encode and process temporal information. Researchers study SNNs because this approach may yield insights into how biological neural networks function, including our own.
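
Spiking neurons can be modeled in many ways, and the sketch below is a generic leaky integrate-and-fire unit rather than the specific neuron model used in the study. It shows the basic idea: the unit integrates its input over time and communicates through discrete spikes instead of continuous activations (all parameters are arbitrary).

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: integrate input, fire when threshold is crossed."""
    v = v_rest
    spikes = []
    for i, current in enumerate(input_current):
        # Membrane potential decays toward rest and integrates the input current.
        v += dt * (-(v - v_rest) / tau + current)
        if v >= v_thresh:
            spikes.append(i * dt)   # record spike time
            v = v_reset             # reset after firing
    return spikes

# Constant drive produces a regular spike train; stronger drive means a higher rate.
print(simulate_lif(np.full(1000, 60.0)))
```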

“The programming of humanoid or bio-inspired robots is complex,” says Juan Camilo Vasquez Tieck, a research scientist at FZI Forschungszentrum Informatik in Karlsruhe, Germany. “And classical robotics programming methods are not always suitable to take advantage of their capabilities.”

Conventional robotic systems must perform extensive calculations, Tieck says, to track trajectories and grasp objects. But a robotic system like Tieck’s, which relies on an SNN, first trains its neural net to better model system and object motions, after which it grasps items more autonomously by adapting to the motion in real time.

The new robotic system by Tieck and his colleagues uses an existing robotic hand, called a Schunk SVH 5-finger hand, which has the same number of fingers and joints as a human hand.

The researchers incorporated an SNN into their system, which is divided into several sub-networks. One sub-network controls each finger individually, either flexing or extending the finger. Another handles each type of grasping movement, for example whether the robotic hand will need to perform a pinching, spherical, or cylindrical movement.

For each finger, a neural circuit detects contact with an object using the currents of the motors and the velocity of the joints. When contact with an object is detected, a controller is activated to regulate how much force the finger exerts.
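
In the paper this logic is implemented by spiking sub-networks; the sketch below is only a plain, non-spiking illustration of the behavior described here, with thresholds, gains, and signal names invented for the example.

```python
def contact_detected(motor_current, joint_velocity,
                     current_threshold=0.8, velocity_threshold=0.05):
    """Heuristic contact check: the motor works hard but the joint barely moves."""
    return motor_current > current_threshold and abs(joint_velocity) < velocity_threshold

def regulate_force(measured_force, target_force, command, gain=0.1):
    """Simple proportional adjustment of the finger command toward a target force."""
    return command + gain * (target_force - measured_force)

# Toy control loop for one finger (all values are illustrative).
command, target_force = 0.2, 1.0
for motor_current, joint_velocity, measured_force in [
    (0.3, 0.20, 0.0),   # finger moving freely
    (0.9, 0.02, 0.4),   # contact: high current, low velocity
    (0.9, 0.01, 0.7),
]:
    if contact_detected(motor_current, joint_velocity):
        command = regulate_force(measured_force, target_force, command)
print(f"final command: {command:.2f}")
```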

“This way, the movements of generic grasping motions are adapted to objects with different shapes, stiffness and sizes,” says Tieck. The system can also adapt its grasping motion quickly if the object moves or deforms.

The robotic grasping system is described in a study published October 24 in IEEE Robotics and Automation Letters. The researchers’ robotic hand used its three different grasping motions on objects without knowing their properties. Target objects included a plastic bottle, a soft ball, a tennis ball, a sponge, a rubber duck, different balloons, a pen, and a tissue pack. The researchers found, for one, that pinching motions required more precision than cylindrical or spherical grasping motions.

“For this approach, the next step is to incorporate visual information from event-based cameras and integrate arm motion with SNNs,” says Tieck. “Additionally, we would like to extend the hand with haptic sensors.”

The long-term goal, he says, is to develop “a system that can perform grasping similar to humans, without intensive planning for contact points or intense stability analysis, and [that is] able to adapt to different objects using visual and haptic feedback.”

Posted in Human Robots

#437543 This Is How We’ll Engineer Artificial ...

Take a Jeopardy! guess: this body part was once referred to as the “consummation of all perfection as an instrument.”

Answer: “What is the human hand?”

Our hands are insanely complex feats of evolutionary engineering. Densely-packed sensors provide intricate and ultra-sensitive feelings of touch. Dozens of joints synergize to give us remarkable dexterity. A “sixth sense” awareness of where our hands are in space connects them to the mind, making it possible to open a door, pick up a mug, and pour coffee in total darkness based solely on what they feel.

So why can’t robots do the same?

In a new article in Science, Dr. Subramanian Sundaram of Boston University and Harvard argues that it’s high time to rethink robotic touch. Scientists have long dreamed of artificially engineering robotic hands with the same dexterity and feedback that we have. Now, after decades, we’re on the cusp of a breakthrough thanks to two major advances. One, we better understand how touch works in humans. Two, we have the mega computational powerhouse called machine learning to recapitulate biology in silicon.

Robotic hands with a sense of touch—and the AI brain to match it—could overhaul our idea of robots. Rather than charming, if somewhat clumsy, novelties, robots equipped with human-like hands are far more capable of routine tasks—making food, folding laundry—and specialized missions like surgery or rescue. But machines aren’t the only ones to gain. For humans, robotic prosthetic hands equipped with accurate, sensitive, and high-resolution artificial touch are the next giant breakthrough in seamlessly linking a biological brain to a mechanical hand.

Here’s what Sundaram laid out to get us to that future.

How Does Touch Work, Anyway?
Let me start with some bad news: reverse engineering the human hand is really hard. It’s jam-packed with over 17,000 sensors tuned to mechanical forces alone, not to mention sensors for temperature and pain. These force “receptors” rely on physical distortions—bending, stretching, curling—to signal to the brain.

The good news? We now have a far clearer picture of how biological touch works. Imagine a coin pressed into your palm. The sensors embedded in the skin, called mechanoreceptors, capture that pressure and “translate” it into electrical signals. These signals pulse through the nerves in your hand to the spine, and eventually make their way to the brain, where they get interpreted as “touch.”

At least, that’s the simple version, but one too vague and not particularly useful for recapitulating touch. To get there, we need to zoom in.

The cells on your hand that collect touch signals, called tactile “first order” neurons (enter Star Wars joke), are like upside-down trees. Intricate branches extend from their bodies, buried deep in the skin, to a vast area of the hand. Each neuron has its own little domain, called a “receptive field,” although some fields overlap. Like governors, these neurons each manage a semi-dedicated region, so that any signal they transfer to the higher-ups—spinal cord and brain—is actually integrated from multiple sensors across a large distance.

It gets more intricate. The skin itself is a living entity that can regulate its own mechanical senses through hydration. Sweat, for example, softens the skin, which changes how it interacts with surrounding objects. Ever tried putting a glove onto a sweaty hand? It’s far more of a struggle than a dry one, and feels different.

In a way, the hand’s tactile neurons play a game of Morse Code. Through different frequencies of electrical beeps, they’re able to transfer information about an object’s size, texture, weight, and other properties, while also asking the brain for feedback to better control the object.

Biology to Machine
Reworking all of our hands’ greatest features into machines is absolutely daunting. But robots have a leg up—they’re not restricted to biological hardware. Earlier this year, for example, a team from Columbia engineered a “feeling” robotic finger using overlapping light emitters and sensors in a way loosely similar to receptive fields. Distortions in light were then analyzed with deep learning to translate them into contact location and force.
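
Setting the Columbia team's specific hardware aside, the general recipe, regressing contact location and force from a grid of optical readings with a small convolutional network, can be sketched roughly as follows. The grid size, architecture, and output format here are invented for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical tactile "image": one channel of light-intensity readings on a 16x16 grid.
class TactileNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # Regress three numbers: contact x, contact y, and normal force.
        self.head = nn.Linear(32 * 4 * 4, 3)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

net = TactileNet()
readings = torch.rand(8, 1, 16, 16)   # a batch of fake sensor frames
print(net(readings).shape)            # -> torch.Size([8, 3])
```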

Although a radical departure from our own electrical-based system, the Columbia team’s attempt was clearly based on human biology. They’re not alone. “Substantial progress is being made in the creation of soft, stretchable electronic skins,” said Sundaram, many of which can sense forces or pressure, although they’re currently still limited.

What’s promising, however, is the “exciting progress in using visual data,” said Sundaram. Computer vision has gained enormously from ubiquitous cameras and large datasets, making it possible to train powerful but data-hungry algorithms such as deep convolutional neural networks (CNNs).

By piggybacking on their success, we can essentially add “eyes” to robotic hands, a superpower we humans can’t imagine. Even better, CNNs and other classes of algorithms can be readily adopted for processing tactile data. Together, a robotic hand could use its eyes to scan an object, plan its grasping movements, and use touch for feedback to adjust its grip. Maybe we’ll finally have a robot that easily rescues the phone sadly dropped into a composting toilet. Or something much grander to benefit humanity.

That said, relying too heavily on vision could also be a downfall. Take a robot that scans a wide area of rubble for signs of life during a disaster response. If touch relies on sight, then it would have to keep a continuous line of sight in a complex and dynamic setting—something computer vision doesn’t handle well, at least for now.

A Neuromorphic Way Forward
Too Debbie Downer? I got your back! It’s hard to overstate the challenges, but what’s clear is that emerging machine learning tools can tackle data processing challenges. For vision, it’s distilling complex images into “actionable control policies,” said Sundaram. For touch, it’s easy to imagine the same. Couple the two together, and that’s a robotic super-hand in the making.

Going forward, argues Sundaram, we need to closely adhere to how the hand and brain process touch. Hijacking our biological “touch machinery” has already proved useful. In 2019, one team used a nerve-machine interface for amputees to control a robotic arm—the DEKA LUKE arm—and sense what the limb and attached hand were feeling. Pressure on the LUKE arm and hand activated an implanted neural interface, which zapped remaining nerves in a way that the brain processes as touch. When the AI analyzed pressure data similar to biological tactile neurons, the person was able to better identify different objects with their eyes closed.

“Neuromorphic tactile hardware (and software) advances will strongly influence the future of bionic prostheses—a compelling application of robotic hands,” said Sundaram, adding that the next step is to increase the density of sensors.

Two additional themes made the list for progressing toward a cyborg future. One is longevity: sensors on a robot need to be able to reliably produce large quantities of high-quality data—something that sounds mundane but is a practical limitation.

The other is going all-in-one. Rather than just a pressure sensor, we need something that captures the myriad of touch sensations. From feather-light brushes to heavy punches, from vibrations to temperatures, a tree-like architecture similar to that of our hands would help organize, integrate, and otherwise process the data collected from those sensors.

Just a decade ago, mind-controlled robotics were considered a blue sky, stretch-goal neurotechnological fantasy. We now have a chance to “close the loop,” from thought to movement to touch and back to thought, and make some badass robots along the way.

Image Credit: PublicDomainPictures from Pixabay

Posted in Human Robots