Tag Archives: field

#435308 Brain-Machine Interfaces Are Getting ...

Elon Musk grabbed a lot of attention with his July 16 announcement that his company Neuralink plans to implant electrodes into the brains of people with paralysis by next year. The company’s first goal is to create assistive technology to help people who can’t move or are unable to communicate.

If you haven’t been paying attention, brain-machine interfaces (BMIs) that allow people to control robotic arms with their thoughts might sound like science fiction. But science and engineering efforts have already turned it into reality.

For over a decade, scientists and physicians in a few research labs around the world have been implanting devices into the brains of people who have lost the ability to control their arms or hands. In our own research group at the University of Pittsburgh, we’ve enabled people with paralyzed arms and hands to control robotic arms that allow them to grasp and move objects with relative ease. They can even experience touch-like sensations from their own hand when the robot grasps objects.

At its core, a BMI is pretty straightforward. In your brain, microscopic cells called neurons are sending signals back and forth to each other all the time. Everything you think, do and feel as you interact with the world around you is the result of the activity of these 80 billion or so neurons.

If you implant a tiny wire very close to one of these neurons, you can record the electrical activity it generates and send it to a computer. Record enough of these signals from the right area of the brain and it becomes possible to control computers, robots, or anything else you might want, simply by thinking about moving. But doing this comes with tremendous technical challenges, especially if you want to record from hundreds or thousands of neurons.
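To make the idea concrete, here is a toy sketch of the decoding step, from recorded firing rates to a movement command. Everything in it is an illustrative assumption (the cosine-tuning model, the channel count, the least-squares decoder), not the actual pipeline any lab or company uses:

```python
import numpy as np

# Toy decoder: map the firing rates of N simulated neurons to a 2-D
# movement velocity. Each neuron is modeled as "preferring" a direction
# (cosine tuning), and the decoder is a weight matrix fit by least squares.

rng = np.random.default_rng(0)
n_neurons = 96  # roughly one electrode array's worth of channels

# Hypothetical preferred directions, one unit vector per neuron.
preferred = rng.normal(size=(n_neurons, 2))
preferred /= np.linalg.norm(preferred, axis=1, keepdims=True)

def firing_rates(intended_velocity):
    """Simulate cosine-tuned firing rates (spikes/second) for a velocity."""
    return 10 + 5 * preferred @ intended_velocity

# "Calibration": observe rates while the intended movements are known,
# then fit the rates -> velocity mapping by least squares.
calib_vel = rng.normal(size=(500, 2))
calib_rates = np.array([firing_rates(v) for v in calib_vel])
weights, *_ = np.linalg.lstsq(calib_rates, calib_vel, rcond=None)

# Decode a new intended movement from the observed rates alone.
true_vel = np.array([1.0, -0.5])
decoded = firing_rates(true_vel) @ weights
print(decoded)  # close to [1.0, -0.5]
```

Real systems face noisy, drifting signals rather than this clean simulation, which is part of why recording from more neurons helps.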

What Neuralink Is Bringing to the Table
Elon Musk founded Neuralink in 2017, aiming to address these challenges and raise the bar for implanted neural interfaces.

Perhaps the most impressive aspect of Neuralink’s system is the breadth and depth of their approach. Building a BMI is inherently interdisciplinary, requiring expertise in electrode design and microfabrication, implantable materials, surgical methods, electronics, packaging, neuroscience, algorithms, medicine, regulatory issues, and more. Neuralink has created a team that spans most, if not all, of these areas.

With all of this expertise, Neuralink is undoubtedly moving the field forward, and improving their technology rapidly. Individually, many of the components of their system represent significant progress along predictable paths. For example, their electrodes, which they call threads, are very small and flexible; many researchers have tried to harness those properties to minimize the chance the brain’s immune response would reject the electrodes after insertion. Neuralink has also developed high-performance miniature electronics, another focus area for labs working on BMIs.

Often overlooked in academic settings, however, is how an entire system would be efficiently implanted in a brain.

Neuralink’s BMI requires brain surgery. This is because implanted electrodes in intimate contact with neurons will always outperform non-invasive electrodes sitting outside the skull, far from the neurons they record. So, a critical question becomes how to minimize the surgical challenges around getting the device into a brain.

Maybe the most impressive aspect of Neuralink’s announcement was that they created a 3,000-electrode neural interface where electrodes could be implanted at a rate of between 30 and 200 per minute. Each thread of electrodes is implanted by a sophisticated surgical robot that essentially acts like a sewing machine. This all happens while specifically avoiding blood vessels that blanket the surface of the brain. The robotics and imaging that enable this feat, with tight integration to the entire device, are striking.
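A quick sanity check on those announced figures: at the stated insertion rates, implanting all 3,000 electrodes implies a procedure somewhere between 15 minutes and just under two hours.

```python
# Implant time implied by the announced figures: 3,000 electrodes,
# inserted at 30 to 200 electrodes per minute.
electrodes = 3000
times = {rate: electrodes / rate for rate in (30, 200)}
print(times)  # {30: 100.0, 200: 15.0} -> 15 to 100 minutes
```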

Neuralink has thought through the challenge of developing a clinically viable BMI from beginning to end in a way that few groups have done, though they acknowledge that many challenges remain as they work towards getting this technology into human patients in the clinic.

Figuring Out What More Electrodes Get You
The quest for implantable devices with thousands of electrodes is not only the domain of private companies. DARPA, the NIH BRAIN Initiative, and international consortiums are working on neurotechnologies for recording and stimulating in the brain with goals of tens of thousands of electrodes. But what might scientists do with the information from 1,000, 3,000, or maybe even 100,000 neurons?

At some level, devices with more electrodes might not actually be necessary to have a meaningful impact on people’s lives. Effective control of computers for access and communication, of robotic limbs to grasp and move objects, and of paralyzed muscles is already happening—in people. And it has been for a number of years.

Since the 1990s, the Utah Array, which has just 100 electrodes and is manufactured by Blackrock Microsystems, has been a critical device in neuroscience and clinical research. This electrode array is FDA-cleared for temporary neural recording. Several research groups, including our own, have implanted Utah Arrays in people, with implants that have lasted multiple years.

Currently, the biggest constraints are related to connectors, electronics, and system-level engineering, not the implanted electrode itself—although increasing the electrodes’ lifespan to more than five years would represent a significant advance. As those technical capabilities improve, it might turn out that the ability to accurately control computers and robots is limited more by scientists’ understanding of what the neurons are saying—that is, the neural code—than by the number of electrodes on the device.

Even the most capable implanted system, and maybe the most capable devices researchers can reasonably imagine, might fall short of the goal of actually augmenting skilled human performance. Nevertheless, Neuralink’s goal of creating better BMIs has the potential to improve the lives of people who can’t move or are unable to communicate. Right now, Musk’s vision of using BMIs to meld physical brains and intelligence with artificial ones is no more than a dream.

So, what does the future look like for Neuralink and other groups creating implantable BMIs? Devices with more electrodes that last longer and are connected to smaller and more powerful wireless electronics are essential. Better devices themselves, however, are insufficient. Continued public and private investment in companies and academic research labs, as well as innovative ways for these groups to work together to share technologies and data, will be necessary to truly advance scientists’ understanding of the brain and deliver on the promise of BMIs to improve people’s lives.

While researchers need to keep the future societal implications of advanced neurotechnologies in mind—there’s an essential role for ethicists and regulation—BMIs could be truly transformative as they help more people overcome limitations caused by injury or disease in the brain and body.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: UPMC/Pitt Health Sciences / CC BY-NC-ND

Posted in Human Robots

#435181 This Week’s Awesome Stories From ...

ROBOTICS
Inside the Amazon Warehouse Where Humans and Machines Become One
Matt Simon | Wired
“Seen from above, the scale of the system is dizzying. My robot, a little orange slab known as a ‘drive’ (or more formally and mythically, Pegasus), is just one of hundreds of its kind swarming a 125,000-square-foot ‘field’ pockmarked with chutes. It’s a symphony of electric whirring, with robots pausing for one another at intersections and delivering their packages to the slides.”

FUTURE OF WORK
Top Oxford Researcher Talks the Risk of Automation to Employment
Luke Dormehl | Digital Trends
“[Karl Benedict Frey’s] new book…compares the age of artificial intelligence to past shifts in the labor market, such as the Industrial Revolution. Frey spoke with Digital Trends about the impacts of automation, changing attitudes, and what—if anything—we can do about the coming robot takeover.”

AUTOMATION
Watch Amazon’s All-New Delivery Drone Zipping Through the Skies
Trevor Mogg | Digital Trends
“The autonomous electric-powered aircraft features six rotors and can take off like a helicopter and fly like a plane… Jeff Wilke, chief of the company’s global consumer business, said the drone can fly 15 miles and carry packages weighing up to 5 pounds, which, he said, covers most stuff ordered on Amazon.”

ARTIFICIAL INTELLIGENCE
This AI-Powered Subreddit Has Been Simulating the Real Thing For Years
Amrita Khalid | Engadget
“The bots comment on each other’s posts, and things can quickly get heated. Topics range from politics to food to relationships to completely nonsensical memes. While many of the posts are incomprehensible or nonsensical, it’s hard to argue that much of life on social media isn’t.”

COMPUTING
Overlooked No More: Alan Turing, Condemned Codebreaker and Computer Visionary
Alan Cowell | The New York Times
“To this day Turing is recognized in his own country and among a broad society of scientists as a pillar of achievement who had fused brilliance and eccentricity, had moved comfortably in the abstruse realms of mathematics and cryptography but awkwardly in social settings, and had been brought low by the hostile society into which he was born.”

GENETICS
Congress Is Debating—Again—Whether Genes Can Be Patented
Megan Molteni | Wired
“Under debate are the notions that natural phenomena, observations of laws of nature, and abstract ideas are unpatentable. …If successful, some worry this bill could carve up the world’s genetic resources into commercial fiefdoms, forcing scientists to perform basic research under constant threat of legal action.”

Image Credit: John Petalcurin / Unsplash

Posted in Human Robots

#435172 DARPA’s New Project Is Investing ...

When Elon Musk and DARPA both hop aboard the cyborg hype train, you know brain-machine interfaces (BMIs) are about to achieve the impossible.

BMIs, long the stuff of science fiction, facilitate crosstalk between biological wetware and external computers, turning human users into literal cyborgs. Yet mind-controlled robotic arms, microelectrode “nerve patches,” and “memory Band-Aids” are still purely experimental medical treatments for those with nervous system impairments.

With the Next-Generation Nonsurgical Neurotechnology (N3) program, DARPA is looking to expand BMIs to the military. This month, the project tapped six academic teams to engineer radically different BMIs to hook up machines to the brains of able-bodied soldiers. The goal is to ditch surgery altogether—while minimizing any biological interventions—to link up brain and machine.

Rather than microelectrodes, which are currently surgically inserted into the brain to hijack neural communication, the project is looking to acoustic signals, electromagnetic waves, nanotechnology, genetically-enhanced neurons, and infrared beams for their next-gen BMIs.

It’s a radical departure from current protocol, with potentially thrilling—or devastating—impact. Wireless BMIs could dramatically boost bodily functions of veterans with neural damage or post-traumatic stress disorder (PTSD), or allow a single soldier to control swarms of AI-enabled drones with his or her mind. Or, similar to the Black Mirror episode Men Against Fire, it could cloud the perception of soldiers, distancing them from the emotional guilt of warfare.

When trickled down to civilian use, these new technologies are poised to revolutionize medical treatment. Or they could galvanize the transhumanist movement with an inconceivably powerful tool that fundamentally alters society—for better or worse.

Here’s what you need to know.

Radical Upgrades
The four-year N3 program focuses on two main aspects: noninvasive and “minutely” invasive neural interfaces to both read and write into the brain.

Because noninvasive technologies sit on the scalp, their sensors and stimulators will likely measure entire networks of neurons, such as those controlling movement. These systems could then allow soldiers to remotely pilot robots in the field—drones, rescue bots, or carriers like Boston Dynamics’ BigDog. The system could even boost multitasking prowess—mind-controlling multiple weapons at once—similar to how able-bodied humans can operate a third robotic arm in addition to their own two.

In contrast, minutely invasive technologies allow scientists to deliver nanotransducers without surgery: for example, an injection of a virus carrying light-sensitive sensors, or other chemical, biotech, or self-assembled nanobots that can reach individual neurons and control their activity independently without damaging sensitive tissue. The proposed use for these technologies isn’t yet well-specified, but as animal experiments have shown, controlling the activity of single neurons at multiple points is sufficient to program artificial memories of fear, desire, and experiences directly into the brain.

“A neural interface that enables fast, effective, and intuitive hands-free interaction with military systems by able-bodied warfighters is the ultimate program goal,” DARPA wrote in its funding brief, released early last year.

The only technologies that will be considered must have a viable path toward eventual use in healthy human subjects.

“Final N3 deliverables will include a complete integrated bidirectional brain-machine interface system,” the project description states. This doesn’t just include hardware, but also new algorithms tailored to these systems, demonstrated in a “Department of Defense-relevant application.”

The Tools
Right off the bat, the usual tools of the BMI trade, including microelectrodes, MRI, and transcranial magnetic stimulation (TMS), are off the table. These popular technologies rely on surgery or heavy machinery, or require the subject to sit very still—conditions unlikely in the real world.

The six teams will tap into three different kinds of natural phenomena for communication: magnetism, light beams, and acoustic waves.

Dr. Jacob Robinson at Rice University, for example, is combining genetic engineering, infrared laser beams, and nanomagnets for a bidirectional system. The $18 million project, MOANA (Magnetic, Optical and Acoustic Neural Access device), uses viruses to deliver two extra genes into the brain. One encodes a protein that sits on top of neurons and emits infrared light when the cell activates. Red and infrared light can penetrate through the skull. This lets a skull cap, embedded with light emitters and detectors, pick up these signals for subsequent decoding. Ultra-fast and ultra-sensitive photodetectors will further allow the cap to ignore scattered light and tease out relevant signals emanating from targeted portions of the brain, the team explained.

The other new gene helps write commands into the brain. This protein tethers iron nanoparticles to the neurons’ activation mechanism. Using magnetic coils on the headset, the team can then remotely stimulate magnetic super-neurons to fire while leaving others alone. Although the team plans to start in cell cultures and animals, their goal is to eventually transmit a visual image from one person to another. “In four years we hope to demonstrate direct, brain-to-brain communication at the speed of thought and without brain surgery,” said Robinson.

Other projects in N3 are just as ambitious.

The Carnegie Mellon team, for example, plans to use ultrasound waves to pinpoint light interaction in targeted brain regions, which can then be measured through a wearable “hat.” To write into the brain, they propose a flexible, wearable electrical mini-generator that counterbalances the noisy effect of the skull and scalp to target specific neural groups.

Similarly, a group at Johns Hopkins is also measuring light path changes in the brain to correlate them with regional brain activity to “read” wetware commands.

The Teledyne Scientific & Imaging group, in contrast, is turning to tiny light-powered “magnetometers” to detect small, localized magnetic fields that neurons generate when they fire, and match these signals to brain output.

The nonprofit Battelle team gets even fancier with their “BrainSTORMS” nanotransducers: magnetic nanoparticles wrapped in a piezoelectric shell. The shell can convert electrical signals from neurons into magnetic ones and vice versa. This allows external transceivers to wirelessly pick up the transformed signals and stimulate the brain through a bidirectional highway.

The nanotransducers can be delivered into the brain through a nasal spray or other non-invasive methods, and magnetically guided towards targeted brain regions. When no longer needed, they can once again be steered out of the brain and into the bloodstream, where the body can excrete them without harm.

Four-Year Miracle
Mind-blown? Yeah, same. However, the challenges facing the teams are enormous.

DARPA’s stated goal is to hook up at least 16 sites in the brain with the BMI, with a lag of less than 50 milliseconds—on the scale of average human visual perception. That’s crazy high resolution for devices sitting outside the brain, both in space and time. Brain tissue, blood vessels, and the scalp and skull are all barriers that scatter and dissipate neural signals. All six teams will need to figure out the least computationally-intensive ways to fish out relevant brain signals from background noise, and triangulate them to the appropriate brain region to decipher intent.

In the long run, four years and an average $20 million per project isn’t much to potentially transform our relationship with machines—for better or worse. DARPA, to its credit, is keenly aware of potential misuse of remote brain control. The program is under the guidance of a panel of external advisors with expertise in bioethical issues. And although DARPA’s focus is on enabling able-bodied soldiers to better tackle combat challenges, it’s hard to deny that wireless, non-invasive BMIs will also benefit those most in need: veterans and other people with debilitating nerve damage. To this end, the program is heavily engaging the FDA to ensure it meets safety and efficacy regulations for human use.

Will we be there in just four years? I’m skeptical. But these electrical, optical, acoustic, magnetic, and genetic BMIs, as crazy as they sound, seem inevitable.

“DARPA is preparing for a future in which a combination of unmanned systems, AI, and cyber operations may cause conflicts to play out on timelines that are too short for humans to effectively manage with current technology alone,” said Al Emondi, the N3 program manager.

The question is, now that we know what’s in store, how should the rest of us prepare?

Image Credit: With permission from DARPA N3 project.

Posted in Human Robots

#435150 Fieldwork Robotics completes initial ...

University of Plymouth spinout company Fieldwork Robotics has completed initial field trials of its robot raspberry harvesting system.

Posted in Human Robots

#435127 Teaching AI the Concept of ‘Similar, ...

As a human you instinctively know that a leopard is closer to a cat than a motorbike, but the way we train most AI systems leaves them oblivious to these kinds of relations. Building the concept of similarity into our algorithms could make them far more capable, writes the author of a new paper in Science Robotics.

Convolutional neural networks have revolutionized the field of computer vision to the point that machines are now outperforming humans on some of the most challenging visual tasks. But the way we train them to analyze images is very different from the way humans learn, says Atsuto Maki, an associate professor at KTH Royal Institute of Technology.

“Imagine that you are two years old and being quizzed on what you see in a photo of a leopard,” he writes. “You might answer ‘a cat’ and your parents might say, ‘yeah, not quite but similar’.”

In contrast, the way we train neural networks rarely gives that kind of partial credit. They are typically trained to have very high confidence in the correct label and consider all incorrect labels, whether “cat” or “motorbike,” equally wrong. That’s a mistake, says Maki, because ignoring the fact that something can be “less wrong” means you’re not exploiting all of the information in the training data.

Even when models are trained this way, there will be small differences in the probabilities assigned to incorrect labels that can tell you a lot about how well the model can generalize what it has learned to unseen data.

If you show a model a picture of a leopard and it gives “cat” a probability of five percent and “motorbike” one percent, that suggests it picked up on the fact that a cat is closer to a leopard than a motorbike. In contrast, if the figures are the other way around it means the model hasn’t learned the broad features that make cats and leopards similar, something that could potentially be helpful when analyzing new data.
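That comparison can be sketched in a few lines. The logit values below are made up purely for illustration: two hypothetical models both pick the correct label for a leopard photo, but the spread of their incorrect-label probabilities reveals what each has learned about class similarity.

```python
import numpy as np

def softmax(logits):
    """Convert raw scores to probabilities that sum to one."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

classes = ["leopard", "cat", "motorbike"]

# Both models rank "leopard" first; they differ on the incorrect labels.
model_a = softmax(np.array([5.0, 2.0, 0.4]))  # ranks "cat" above "motorbike"
model_b = softmax(np.array([5.0, 0.4, 2.0]))  # ranks "motorbike" above "cat"

for name, probs in [("A", model_a), ("B", model_b)]:
    print(name, dict(zip(classes, np.round(probs, 3))))
```

Model A's spread suggests it has picked up on the cat-leopard resemblance; Model B's suggests it has not, even though both get the top label right.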

If we could boost this ability to identify similarities between classes we should be able to create more flexible models better able to generalize, says Maki. And recent research has demonstrated how variations of an approach called regularization might help us achieve that goal.

Neural networks are prone to a problem called “overfitting,” which refers to a tendency to pay too much attention to tiny details and noise specific to their training set. When that happens, models will perform excellently on their training data but poorly when applied to unseen test data without these particular quirks.

Regularization is used to circumvent this problem, typically by reducing the network’s capacity to learn all this unnecessary information and therefore boost its ability to generalize to new data. Techniques are varied, but generally involve modifying the network’s structure or the strength of the weights between artificial neurons.

More recently, though, researchers have suggested new regularization approaches that work by encouraging a broader spread of probabilities across all classes. This essentially helps them capture more of the class similarities, says Maki, and therefore boosts their ability to generalize.

One such approach was devised in 2017 by Google Brain researchers, led by deep learning pioneer Geoffrey Hinton. They introduced a penalty to their training process that directly punished overconfident predictions in the model’s outputs, and a technique called label smoothing that prevents the largest probability from becoming much larger than all the others. This meant the probabilities were lower for correct labels and higher for incorrect ones, which was found to boost performance of models on varied tasks from image classification to speech recognition.
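Label smoothing itself is simple enough to show in full. This is a minimal sketch of the general technique; the epsilon value is a commonly used choice, not one taken from the paper.

```python
import numpy as np

def smooth_labels(one_hot, epsilon=0.1):
    """Move a fraction epsilon of the target mass to a uniform
    spread over all classes, so no label is treated as fully wrong."""
    n_classes = one_hot.shape[-1]
    return one_hot * (1 - epsilon) + epsilon / n_classes

target = np.array([0.0, 1.0, 0.0])  # hard one-hot target
smoothed = smooth_labels(target)
print(smoothed)  # correct class: 1 - eps + eps/3; each wrong class: eps/3
```

Training against the smoothed targets instead of the one-hot ones is what keeps the model from driving the top probability toward certainty.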

Another came from Maki himself in 2017 and achieves the same goal, but by suppressing high values in the model’s feature vector—the mathematical construct that describes all of an object’s important characteristics. This has a knock-on effect on the spread of output probabilities and also helped boost performance on various image classification tasks.
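The intuition behind suppressing large feature values can be illustrated with a simple stand-in (this is not necessarily Maki's exact formulation): scaling down a feature vector before the final softmax flattens the output probabilities, leaving more mass on the incorrect-but-similar classes.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical feature vector with one dominant value.
features = np.array([8.0, 2.0, 1.0])

# Illustrative suppression: shrink the vector in proportion to its norm.
suppressed = features / (1 + 0.5 * np.linalg.norm(features))

p_before = softmax(features)
p_after = softmax(suppressed)

# Suppression spreads probability mass more evenly across classes.
print(p_before.max(), p_after.max())
```

The top class still wins, but its probability drops, so more information about the runner-up classes survives in the output.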

While it’s still early days for the approach, the fact that humans are able to exploit these kinds of similarities to learn more efficiently suggests that models that incorporate them hold promise. Maki points out that it could be particularly useful in applications such as robotic grasping, where distinguishing various similar objects is important.

Image Credit: Marianna Kalashnyk / Shutterstock.com

Posted in Human Robots