Tag Archives: crazy

#435752 T-RHex Is a Hexapod Robot With ...

In Aaron Johnson’s “Robot Design & Experimentation” class at CMU, teams of students have a semester to design and build an experimental robotic system based on a theme. For spring 2019, that theme was “Bioinspired Robotics,” which is definitely one of our favorite kinds of robotics—animals can do all kinds of crazy things, and it’s always a lot of fun watching robots try to match them. They almost never succeed, of course, but even basic imitation can lead to robots with some unique capabilities.

One of the projects from this year’s course, from Team ScienceParrot, is a new version of RHex called T-RHex (pronounced T-Rex, like the dinosaur). T-RHex comes with a tail, but more importantly, it has tiny tapered toes, which help it grip onto rough surfaces like bricks, wood, and concrete. It’s able to climb its way up very steep slopes, and hang from them, relying on its toes to keep itself from falling off.

T-RHex’s toes are called microspines, and we’ve seen them in all kinds of robots. The most famous of these is probably JPL’s LEMUR IIB (which wins on sheer microspine volume), although the concept goes back at least 15 years to Stanford’s SpinyBot. Robots that use microspines to climb tend to be fairly methodical at it, since the microspines have to be engaged and disengaged with care, limiting their non-climbing mobility.

T-RHex manages to perform many of the same sorts of climbing and hanging maneuvers without losing RHex’s ability for quick, efficient wheel-leg (wheg) locomotion.

If you look closely at T-RHex walking in the video, you’ll notice that in its normal forward gait, it’s sort of walking on its ankles, rather than its toes. This means that the microspines aren’t engaged most of the time, so that the robot can use its regular wheg motion to get around. To engage the microspines, the robot moves its whegs backwards, meaning that its tail is arguably coming out of its head. But since all of T-RHex’s capability is mechanical in nature and it has no active sensors, it doesn’t really need a head, so that’s fine.

The highest climbable slope that T-RHex could manage was 55 degrees, meaning that it can’t yet conquer vertical walls. The researchers were most surprised by the robot’s ability to cling to surfaces, where it was perfectly happy to hang out on a slope of 135 degrees, which is a 45-degree overhang (!). I have no idea how it would ever reach that kind of position on its own, but it’s nice to know that if it ever does, its spines will keep doing their job.

Photo: CMU

T-RHex uses laser-cut acrylic legs, with the microspines embedded into 3D-printed toes. The tail is needed to prevent the robot from tipping backward.

For more details about the project, we spoke with Team ScienceParrot member (and CMU PhD student) Catherine Pavlov via email.

IEEE Spectrum: We’re used to seeing RHex with compliant, springy legs—how do the new legs affect T-RHex’s mobility?

Catherine Pavlov: There’s some compliance in the legs, though not as much as RHex—this is driven by the use of acrylic, which was chosen for budget/manufacturing reasons. Matching the compliance of RHex with acrylic would have made the tines too weak (since often only a few hold the load of the robot during climbing). It definitely means you can’t use energy storage in the legs the way RHex does, for example when pronking. T-RHex is probably more limited by motor speed in terms of mobility though. We were using some borrowed Dynamixels that didn’t allow for good positioning at high speeds.

How did you design the climbing gait? Why not use the middle legs, and why is the tail necessary?

The gait was a lot of hand-tuning and trial and error. We wanted a left/right symmetric gait to enable load sharing among more spines and to prevent out-of-plane twisting of the legs. When using all three leg pairs, you have to have very accurate angular positioning or one pair gets pushed off the wall. Since two leg pairs should be able to hold the full weight of the robot, using the middle legs was hurting more than it was helping, with the middle legs sometimes pushing the rear ones off of the wall.

The tail is needed to prevent the robot from tipping backward and “sitting” on the wall. During static testing we saw the robot tip backward, disengaging the front legs, at around 35 degrees incline. The tail allows us to load the front legs, even when they’re at a shallow angle to the surface. The climbing gait we designed uses the tail to allow the rear legs to fully recirculate without the robot tipping backward.
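For intuition, here’s a back-of-the-envelope statics check (my sketch, assuming a rigid robot with point contacts—not the team’s analysis). On a wall inclined at angle $\theta$, the component of gravity along the wall acts at the center of mass, a height $h$ off the surface, and tries to pry the robot backward about its rearmost contact; the component pressing into the wall acts a distance $d$ up-slope of that contact and holds the robot down. The robot stays put only while

$$mg\sin\theta \cdot h < mg\cos\theta \cdot d \quad\Longleftrightarrow\quad \tan\theta < \frac{d}{h}.$$

Tipping at roughly 35 degrees without the tail implies $d/h \approx \tan 35^\circ \approx 0.7$; a tail effectively moves the rear contact point far down-slope, increasing $d$ and pushing the tipping angle much higher.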

Photo: CMU

Team ScienceParrot with T-RHex.

What prevents T-RHex from climbing even steeper surfaces?

There are a few limiting factors. One is that the tines of the legs break pretty easily. I think we also need a lighter platform to get fully vertical—we’re going to look at MiniRHex for future work. We’re also not convinced our gait is the best it can be; we can probably get marginal improvements with more tuning, which might be enough.

Can the microspines assist with more dynamic maneuvers?

Dynamic climbing maneuvers? I think that would only be possible on surfaces with very good surface adhesion and very good surface strength, but it’s certainly theoretically possible. The current instance of T-RHex would definitely break if you tried to wall jump though.

What are you working on next?

Our main target is exploring the space of materials for leg fabrication, such as fiberglass, PLA, urethanes, and maybe metallic glass. We think there’s a lot of room for improvement in the leg material and geometry. We’d also like to see MiniRHex equipped with microspines, which will require legs about half the scale of what we built for T-RHex. Longer-term improvements would include adding sensors (e.g., for wall detection), a reliable floor-to-wall transition, and dynamic gait transitions.

[ T-RHex ] Continue reading

Posted in Human Robots

#435722 Stochastic Robots Use Randomness to ...

The idea behind swarm robots is to replace single, expensive, breakable, uni-tasking robots with a whole bunch of much simpler, cheaper, and replaceable robots that can work together to do the same sorts of tasks. Unfortunately, all of those swarm robots end up needing their own computing and communications and stuff if you want to get them to do what you want them to do.

A different approach to swarm robotics is to use a swarm of much cheaper robots that are far less intelligent. In fact, they may not have to be intelligent at all, if you can rely on their physical characteristics to drive them instead. These swarms are “stochastic,” meaning that their motions are randomly determined, but if you’re clever and careful, you can still get them to do specific things.

Georgia Tech has developed some little swarm robots called “smarticles” that can’t really do much at all on their own, but once you put them together into a jumble, their randomness can actually accomplish something.

Honestly, calling these particle robots “smart” might be giving them a bit too much credit, because they’re actually kind of dumb and strictly speaking not capable of all that much on their own. A single smarticle weighs 35 grams, and consists of some little 3D-printed flappy bits attached to servos, plus an Arduino Pro Mini, a battery, and a light or sound sensor. When its little flappy bits are activated, each smarticle can move slightly, but a single one mostly just moves around in a square, gradually drifting in a mostly random direction over time.

It gets more interesting when you throw a whole bunch of smarticles into a constrained area. A small collection of five or 10 smarticles constrained together forms a “supersmarticle,” but besides being in close proximity to one another, the smarticles within the supersmarticle aren’t communicating or anything like that. As far as each smarticle is concerned, it’s independent, but weirdly, a jumble of them can work together without working together.

“These are very rudimentary robots whose behavior is dominated by mechanics and the laws of physics,” said Dan Goldman, a Dunn Family Professor in the School of Physics at the Georgia Institute of Technology.

The researchers noticed that if one small robot stopped moving, perhaps because its battery died, the group of smarticles would begin moving in the direction of that stalled robot. Graduate student Ross Warkentin learned he could control the movement by adding photo sensors to the robots that halt the arm flapping when a strong beam of light hits one of them.

“If you angle the flashlight just right, you can highlight the robot you want to be inactive, and that causes the ring to lurch toward or away from it, even though no robots are programmed to move toward the light,” Goldman said. “That allowed steering of the ensemble in a very rudimentary, stochastic way.”

It turns out that it’s possible to model this behavior, and to control a supersmarticle with enough fidelity to steer it through a maze. And while these particular smarticles aren’t all that small, strictly speaking, the idea is to develop techniques that will work when robots are scaled way, way down, to the point where you can’t physically fit useful computing in there at all.
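To see why a single dead robot can bias the whole ring, here’s a deliberately crude toy model (mine, not the researchers’ actual model): suppose each active smarticle delivers random-strength pushes on the ring, directed away from its own position. Around a full, symmetric ring those pushes cancel on average; remove one pusher and the leftover average points straight at the inactive robot.

```python
import numpy as np

rng = np.random.default_rng(42)

N = 10                                 # smarticles spaced evenly around the ring
angles = 2 * np.pi * np.arange(N) / N  # their angular positions on the ring
inactive = 0                           # the "dead" smarticle sits at angle 0

center = np.zeros(2)                   # position of the supersmarticle's center
for step in range(20000):
    for i in range(N):
        if i == inactive:
            continue                   # a stalled smarticle pushes nothing
        # Active smarticle i shoves the ring away from itself with a
        # random-strength kick; over a full ring these would cancel out.
        away = -np.array([np.cos(angles[i]), np.sin(angles[i])])
        center += rng.random() * 1e-3 * away

print("net displacement:", center)     # drifts along +x, toward the dead robot
```

In this cartoon the ring’s center drifts steadily toward the stalled smarticle—the same qualitative behavior the flashlight trick exploits, since deactivating a chosen robot picks the drift direction.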

The researchers are also working on some other concepts, like these:

Image: Science Robotics

The Georgia Tech researchers envision stochastic robot swarms that don’t have a perfectly defined shape or delineation but are capable of self-propulsion, relying on the ensemble-level behaviors that lead to collective locomotion. In such a robot, the researchers say, groups of largely generic agents may be able to achieve complex goals, as observed in biological collectives.

Er, yeah. I’m…not sure I really want there to be a bipedal humanoid robot built out of a bunch of tiny robots. Like, that seems creepy somehow, you know? I’m totally okay with slugs, but let’s not get crazy.

“A robot made of robots: Emergent transport and control of a smarticle ensemble,” by William Savoie, Thomas A. Berrueta, Zachary Jackson, Ana Pervan, Ross Warkentin, Shengkai Li, Todd D. Murphey, Kurt Wiesenfeld, and Daniel I. Goldman from the Georgia Institute of Technology, appears in the current issue of Science Robotics. Continue reading

Posted in Human Robots

#435676 Intel’s Neuromorphic System Hits 8 ...

At the DARPA Electronics Resurgence Initiative Summit today in Detroit, Intel plans to unveil an 8-million-neuron neuromorphic system comprising 64 Loihi research chips—codenamed Pohoiki Beach. Loihi chips are built with an architecture that more closely matches the way the brain works than do chips designed to do deep learning or other forms of AI. For the set of problems that such “spiking neural networks” are particularly good at, Loihi is about 1,000 times as fast as a CPU and 10,000 times as energy efficient. The new 64-Loihi system represents the equivalent of 8 million neurons, but that’s just a step toward a 768-chip, 100-million-neuron system that the company plans for the end of 2019.
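As a quick check on those numbers: assuming the commonly cited figure of 131,072 spiking neurons per Loihi chip (128 cores with 1,024 neurons each—my figure, not stated in this piece), the chip counts line up:

$$64 \times 131{,}072 \approx 8.4 \text{ million neurons}, \qquad 768 \times 131{,}072 \approx 100.7 \text{ million neurons}.$$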

Intel and its research partners are just beginning to test what massive neural systems like Pohoiki Beach can do, but so far the evidence points to even greater performance and efficiency, says Mike Davies, director of neuromorphic research at Intel.

“We’re quickly accumulating results and data that there are definite benefits… mostly in the domain of efficiency. Virtually every one that we benchmark…we find significant gains in this architecture,” he says.

Going from a single Loihi to 64 of them is more of a software issue than a hardware one. “We designed scalability into the Loihi chip from the beginning,” says Davies. “The chip has a hierarchical routing interface…which allows us to scale to up to 16,000 chips. So 64 is just the next step.”

Photo: Tim Herman/Intel Corporation

One of Intel’s Nahuku boards, each of which contains 8 to 32 Intel Loihi neuromorphic chips, shown here interfaced to an Intel Arria 10 FPGA development kit. Intel’s latest neuromorphic system, Pohoiki Beach, is made up of multiple Nahuku boards and contains 64 Loihi chips.

Finding algorithms that run well on an 8-million-neuron system and optimizing those algorithms in software is a considerable effort, he says. Still, the payoff could be huge. Neural networks that are more brain-like, such as those that run on Loihi, could be immune to some of artificial intelligence’s—for lack of a better word—dumbness.

For example, today’s neural networks suffer from something called catastrophic forgetting. If you tried to teach a trained neural network to recognize something new—a new road sign, say—by simply exposing the network to the new input, it would disrupt the network so badly that it would become terrible at recognizing anything. To avoid this, you have to completely retrain the network from the ground up. (DARPA’s Lifelong Learning, or L2M, program is dedicated to solving this problem.)

(Here’s my favorite analogy: Say you coached a basketball team, and you raised the net by 30 centimeters while nobody was looking. The players would miss a bunch at first, but they’d figure things out quickly. If those players were like today’s neural networks, you’d have to pull them off the court and teach them the entire game over again—dribbling, passing, everything.)
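To make that failure mode concrete, here’s a minimal toy demonstration—an ordinary logistic regression in NumPy, nothing neuromorphic about it, with all names and numbers mine. Training naively on a second task erases the first:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(center):
    """Two Gaussian blobs at +/-center, labeled 0 and 1."""
    X = np.vstack([rng.normal(center, 0.5, (200, 2)),
                   rng.normal(-center, 0.5, (200, 2))])
    y = np.array([0] * 200 + [1] * 200)
    return X, y

def train(w, b, X, y, epochs=200, lr=0.1):
    """Plain batch gradient descent on the logistic loss."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        g = p - y                               # gradient of the log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return float(((X @ w + b > 0).astype(int) == y).mean())

Xa, ya = make_task(np.array([2.0, 0.0]))   # task A: one split of the plane
Xb, yb = make_task(np.array([-2.0, 2.0]))  # task B: a different, conflicting split

w, b = np.zeros(2), 0.0
w, b = train(w, b, Xa, ya)
print("after task A:  acc(A) =", accuracy(w, b, Xa, ya))

w, b = train(w, b, Xb, yb)                 # naive sequential training on B...
print("after task B:  acc(A) =", accuracy(w, b, Xa, ya),
      " acc(B) =", accuracy(w, b, Xb, yb))
```

Run it and the classifier nails task B, but its accuracy on task A collapses from essentially perfect to terrible—the new weights have simply overwritten the old ones, which is exactly the failure continual-learning approaches aim to avoid.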

Loihi can run networks that might be immune to catastrophic forgetting, meaning it learns a bit more like a human. In fact, there’s evidence, through a research collaboration with Thomas Cleland’s group at Cornell University, that Loihi can achieve what’s called one-shot learning—that is, learning a new feature after being exposed to it only once. The Cornell group showed this by abstracting a model of the olfactory system so that it would run on Loihi. When exposed to a new virtual scent, the system not only didn’t catastrophically forget everything else it had smelled, it learned to recognize the new scent from just that single exposure.

Loihi might also be able to run feature-extraction algorithms that are immune to the kinds of adversarial attacks that befuddle today’s image recognition systems. Traditional neural networks don’t really understand the features they’re extracting from an image in the way our brains do. “They can be fooled with simplistic attacks like changing individual pixels or adding a screen of noise that wouldn’t fool a human in any way,” Davies explains. But the sparse-coding algorithms Loihi can run work more like the human visual system and so wouldn’t fall for such shenanigans. (Disturbingly, humans are not completely immune to such attacks.)

Photo: Tim Herman/Intel Corporation

A close-up shot of Loihi, Intel’s neuromorphic research chip. Intel’s latest neuromorphic system, Pohoiki Beach, is made up of 64 of these Loihi chips.

Researchers have also been using Loihi to improve real-time control for robotic systems. For example, last week at the Telluride Neuromorphic Cognition Engineering Workshop—an event Davies called “summer camp for neuromorphics nerds”—researchers were hard at work using a Loihi-based system to control a foosball table. “It strikes people as crazy,” he says. “But it’s a nice illustration of neuromorphic technology. It’s fast, requires quick response, quick planning, and anticipation. These are what neuromorphic chips are good at.” Continue reading

Posted in Human Robots

#435520 These Are the Meta-Trends Shaping the ...

Life is pretty different now than it was 20 years ago, or even 10 years ago. It’s sort of exciting, and sort of scary. And hold onto your hat, because it’s going to keep changing—even faster than it already has been.

The good news is, maybe there won’t be too many big surprises, because the future will be shaped by trends that have already been set in motion. According to Singularity University co-founder and XPRIZE founder Peter Diamandis, a lot of these trends are unstoppable—but they’re also pretty predictable.

At SU’s Global Summit, taking place this week in San Francisco, Diamandis outlined some of the meta-trends he believes are key to how we’ll live our lives and do business in the (not too distant) future.

Increasing Global Abundance
Resources are becoming more abundant all over the world, and fewer people are seeing their lives limited by scarcity. “It’s hard for us to realize this as we see crisis news, but what people have access to is more abundant than ever before,” Diamandis said. Products and services are becoming cheaper and thus available to more people, and having more resources then enables people to create more, thus producing even more resources—and so on.

Need evidence? The proportion of the world’s population living in extreme poverty is currently lower than it’s ever been. The average human life expectancy is longer than it’s ever been. The costs of day-to-day needs like food, energy, transportation, and communications are on a downward trend.

Take energy. In most of the world, though its costs are decreasing, it’s still a fairly precious commodity; we turn off our lights and our air conditioners when we don’t need them (ideally, both to save money and to avoid wastefulness). But the cost of solar energy has plummeted, the storage capacity of batteries is improving, and solar technology is steadily getting more efficient. Bids for new solar power plants in the past few years have broken each other’s records for lowest cost per kilowatt-hour.

“We’re not far from a penny per kilowatt hour for energy from the sun,” Diamandis said. “And if you’ve got energy, you’ve got water.” Desalination, for one, will be much more widely feasible once the cost of the energy needed for it drops.
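A rough sanity check on that logic (my numbers, not Diamandis’s): modern seawater reverse osmosis needs on the order of 3 to 4 kWh per cubic meter of fresh water, so at a penny per kilowatt-hour the energy bill becomes

$$3.5\ \tfrac{\mathrm{kWh}}{\mathrm{m}^3} \times \$0.01/\mathrm{kWh} \approx \$0.035 \text{ per cubic meter},$$

a few cents per thousand liters—cheap enough that energy would stop being the binding constraint on fresh water.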

Knowledge is perhaps the most crucial resource that’s going from scarce to abundant. All the world’s knowledge is now at the fingertips of anyone who has a mobile phone and an internet connection—and the number of people connected is only going to grow. “Everyone is being connected at gigabit connection speeds, and this will be transformative,” Diamandis said. “We’re heading towards a world where anyone can know anything at any time.”

Increasing Capital Abundance
It’s not just goods, services, and knowledge that are becoming more plentiful. Money is, too—particularly money for business. “There’s more and more capital available to invest in companies,” Diamandis said. As a result, more people are getting the chance to bring their world-changing ideas to life.

Venture capital investments reached a new record of $130 billion in 2018, up from $84 billion in 2017—and that’s just in the US. Globally, VC funding grew 21 percent from 2017 to a total of $207 billion in 2018.

Through crowdfunding, any person in any part of the world can present their idea and ask for funding. That funding can come in the form of a loan, an equity investment, a reward, or an advanced purchase of the proposed product or service. “Crowdfunding means it doesn’t matter where you live, if you have a great idea you can get it funded by people from all over the world,” Diamandis said.

All this is making a difference; the number of unicorns—privately held startups valued at over $1 billion—currently stands at an astounding 360.

One of the reasons why the world is getting better, Diamandis believes, is because entrepreneurs are trying more crazy ideas—not ideas that are reasonable or predictable or linear, but ideas that seem absurd at first, then eventually end up changing the world.

Everyone and Everything, Connected
As already noted, knowledge is becoming abundant thanks to the proliferation of mobile phones and wireless internet; everyone’s getting connected. In the next decade or sooner, connectivity will reach every person in the world. 5G is being tested and offered for the first time this year, and companies like Google, SpaceX, OneWeb, and Amazon are racing to deliver global internet coverage, whether by launching constellations of 12,000 satellites, as SpaceX’s Starlink is doing, or by floating giant balloons into the stratosphere, like Google’s Project Loon.

“We’re about to reach a period of time in the next four to six years where we’re going from half the world’s people being connected to the whole world being connected,” Diamandis said. “What happens when 4.2 billion new minds come online? They’re all going to want to create, discover, consume, and invent.”

And it doesn’t stop at connecting people. Things are becoming more connected too. “By 2020 there will be over 20 billion connected devices and more than one trillion sensors,” Diamandis said. By 2030, those projections go up to 500 billion and 100 trillion. Think about it: there are home devices like refrigerators, TVs, dishwashers, digital assistants, and even toasters. There’s city infrastructure, from stoplights to cameras to public transportation like buses or bike sharing. It’s all getting smart and connected.

Soon we’ll be adding autonomous cars to the mix, and an unimaginable glut of data to go with them. Every turn, every stop, every acceleration will be a data point. Some cars already collect over 25 gigabytes of data per hour, Diamandis said, and car data is projected to generate $750 billion of revenue by 2030.

“You’re going to start asking questions that were never askable before, because the data is now there to be mined,” he said.

Increasing Human Intelligence
Indeed, we’ll have data on everything we could possibly want data on. We’ll also soon have what Diamandis calls just-in-time education, where 5G combined with artificial intelligence and augmented reality will allow you to learn something in the moment you need it. “It’s not going and studying, it’s where your AR glasses show you how to do an emergency surgery, or fix something, or program something,” he said.

We’re also at the beginning of massive investments in research working towards connecting our brains to the cloud. “Right now, everything we think, feel, hear, or learn is confined in our synaptic connections,” Diamandis said. What will it look like when that’s no longer the case? Companies like Kernel, Neuralink, Open Water, Facebook, Google, and IBM are all investing billions of dollars into brain-machine interface research.

Increasing Human Longevity
One of the most important problems we’ll use our newfound intelligence to solve is that of our own health and mortality, making 100 years old the new 60—then eventually, 120 or 150.

“Our bodies were never evolved to live past age 30,” Diamandis said. “You’d go into puberty at age 13 and have a baby, and by the time you were 26 your baby was having a baby.”

Seeing how drastically our lifespans have changed over time makes you wonder what aging even is; is it natural, or is it a disease? Many companies are treating it as one, and using technologies like senolytics, CRISPR, and stem cell therapy to try to cure it. Scaffolds of human organs can now be 3D printed then populated with the recipient’s own stem cells so that their bodies won’t reject the transplant. Companies are testing small-molecule pharmaceuticals that can stop various forms of cancer.

“We don’t truly know what’s going on inside our bodies—but we can,” Diamandis said. “We’re going to be able to track our bodies and find disease at stage zero.”

Chins Up
The world is far from perfect—that’s not hard to see. What’s less obvious but just as true is that we’re living in an amazing time. More people are coming together, and they have more access to information, and that information moves faster, than ever before.

“I don’t think any of us understand how fast the world is changing,” Diamandis said. “Most people are fearful about the future. But we should be excited about the tools we now have to solve the world’s problems.”

Image Credit: spainter_vfx / Shutterstock.com Continue reading

Posted in Human Robots

#435172 DARPA’s New Project Is Investing ...

When Elon Musk and DARPA both hop aboard the cyborg hype train, you know brain-machine interfaces (BMIs) are about to achieve the impossible.

BMIs, already the stuff of science fiction, facilitate crosstalk between biological wetware and external computers, turning human users into literal cyborgs. Yet mind-controlled robotic arms, microelectrode “nerve patches,” or “memory Band-Aids” are still purely experimental medical treatments for those with nervous system impairments.

With the Next-Generation Nonsurgical Neurotechnology (N3) program, DARPA is looking to expand BMIs to the military. This month, the project tapped six academic teams to engineer radically different BMIs to hook up machines to the brains of able-bodied soldiers. The goal is to ditch surgery altogether—while minimizing any biological interventions—to link up brain and machine.

Rather than microelectrodes, which are currently surgically inserted into the brain to hijack neural communication, the project is looking to acoustic signals, electromagnetic waves, nanotechnology, genetically enhanced neurons, and infrared beams for its next-gen BMIs.

It’s a radical departure from current protocol, with potentially thrilling—or devastating—impact. Wireless BMIs could dramatically boost bodily functions of veterans with neural damage or post-traumatic stress disorder (PTSD), or allow a single soldier to control swarms of AI-enabled drones with his or her mind. Or, similar to the Black Mirror episode Men Against Fire, it could cloud the perception of soldiers, distancing them from the emotional guilt of warfare.

When trickled down to civilian use, these new technologies are poised to revolutionize medical treatment. Or they could galvanize the transhumanist movement with an inconceivably powerful tool that fundamentally alters society—for better or worse.

Here’s what you need to know.

Radical Upgrades
The four-year N3 program focuses on two main aspects: noninvasive and “minutely” invasive neural interfaces to both read and write into the brain.

Because noninvasive technologies sit on the scalp, their sensors and stimulators will likely measure entire networks of neurons, such as those controlling movement. These systems could then allow soldiers to remotely pilot robots in the field—drones, rescue bots, or carriers like Boston Dynamics’ BigDog. The system could even boost multitasking prowess—mind-controlling multiple weapons at once—similar to how able-bodied humans can operate a third robotic arm in addition to their own two.

In contrast, minutely invasive technologies allow scientists to deliver nanotransducers without surgery: for example, an injection of a virus carrying light-sensitive sensors, or other chemical, biotech, or self-assembled nanobots that can reach individual neurons and control their activity independently without damaging sensitive tissue. The proposed use for these technologies isn’t yet well-specified, but as animal experiments have shown, controlling the activity of single neurons at multiple points is sufficient to program artificial memories of fear, desire, and experiences directly into the brain.

“A neural interface that enables fast, effective, and intuitive hands-free interaction with military systems by able-bodied warfighters is the ultimate program goal,” DARPA wrote in its funding brief, released early last year.

The only technologies that will be considered must have a viable path toward eventual use in healthy human subjects.

“Final N3 deliverables will include a complete integrated bidirectional brain-machine interface system,” the project description states. This doesn’t just include hardware, but also new algorithms tailored to these systems, demonstrated in a “Department of Defense-relevant application.”

The Tools
Right off the bat, the usual tools of the BMI trade, including microelectrodes, MRI, and transcranial magnetic stimulation (TMS), are off the table. These popular technologies rely on surgery, heavy machinery, or subjects who sit very still—conditions unlikely in the real world.

The six teams will tap into three different kinds of natural phenomena for communication: magnetism, light beams, and acoustic waves.

Dr. Jacob Robinson at Rice University, for example, is combining genetic engineering, infrared laser beams, and nanomagnets for a bidirectional system. The $18 million project, MOANA (Magnetic, Optical and Acoustic Neural Access device), uses viruses to deliver two extra genes into the brain. One encodes a protein that sits on top of neurons and emits infrared light when the cell activates. Red and infrared light can penetrate through the skull. This lets a skull cap, embedded with light emitters and detectors, pick up these signals for subsequent decoding. Ultra-fast and ultra-sensitive photodetectors will further allow the cap to ignore scattered light and tease out relevant signals emanating from targeted portions of the brain, the team explained.

The other new gene helps write commands into the brain. This protein tethers iron nanoparticles to the neurons’ activation mechanism. Using magnetic coils on the headset, the team can then remotely stimulate magnetic super-neurons to fire while leaving others alone. Although the team plans to start in cell cultures and animals, their goal is to eventually transmit a visual image from one person to another. “In four years we hope to demonstrate direct, brain-to-brain communication at the speed of thought and without brain surgery,” said Robinson.

Other projects in N3 are just as ambitious.

The Carnegie Mellon team, for example, plans to use ultrasound waves to pinpoint light interaction in targeted brain regions, which can then be measured through a wearable “hat.” To write into the brain, they propose a flexible, wearable electrical mini-generator that counterbalances the noisy effect of the skull and scalp to target specific neural groups.

Similarly, a group at Johns Hopkins is also measuring light path changes in the brain to correlate them with regional brain activity to “read” wetware commands.

The Teledyne Scientific & Imaging group, in contrast, is turning to tiny light-powered “magnetometers” to detect small, localized magnetic fields that neurons generate when they fire, and match these signals to brain output.

The nonprofit Battelle team gets even fancier with its “BrainSTORMS” nanotransducers: magnetic nanoparticles wrapped in a piezoelectric shell. The shell can convert electrical signals from neurons into magnetic ones and vice versa. This allows external transceivers to wirelessly pick up the transformed signals and stimulate the brain through a bidirectional highway.

The nanotransducers can be delivered into the brain through a nasal spray or other non-invasive methods, and magnetically guided towards targeted brain regions. When no longer needed, they can once again be steered out of the brain and into the bloodstream, where the body can excrete them without harm.

Four-Year Miracle
Mind-blown? Yeah, same. However, the challenges facing the teams are enormous.

DARPA’s stated goal is to hook up at least 16 sites in the brain with the BMI, with a lag of less than 50 milliseconds—on the scale of average human visual perception. That’s crazy high resolution, in both space and time, for devices sitting outside the brain. Brain tissue, blood vessels, and the scalp and skull are all barriers that scatter and dissipate neural signals. All six teams will need to figure out the least computationally intensive ways to fish relevant brain signals out of background noise, and triangulate them to the appropriate brain region to decipher intent.
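For a flavor of the signal-extraction half of that problem, here’s a generic matched-filter sketch (textbook DSP, not any N3 team’s actual pipeline): correlating a noisy trace against a known event waveform recovers an event whose per-sample amplitude is only about twice the noise floor.

```python
import numpy as np

rng = np.random.default_rng(7)

# A stereotyped "event" waveform standing in for a known neural signature.
t = np.linspace(-2, 2, 40)
template = np.exp(-t**2) * np.sin(np.linspace(0, 6 * np.pi, 40))

trace = rng.normal(0.0, 1.0, 2000)                   # background noise
true_onset = 1200
trace[true_onset:true_onset + 40] += 2.0 * template  # weak embedded event

# Matched filter: slide the template along the trace and correlate;
# the correlation peaks where the buried event begins.
score = np.correlate(trace, template, mode="valid")
print("true onset:", true_onset, "detected:", int(np.argmax(score)))
```

The part DARPA is paying for is doing something like this through skull and scalp, across at least 16 sites at once, inside the 50-millisecond budget.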

In the long run, four years and an average $20 million per project isn’t much to potentially transform our relationship with machines—for better or worse. DARPA, to its credit, is keenly aware of the potential misuse of remote brain control. The program is under the guidance of a panel of external advisors with expertise in bioethical issues. And although DARPA’s focus is on enabling able-bodied soldiers to better tackle combat challenges, it’s hard to argue that wireless, non-invasive BMIs won’t also benefit those most in need: veterans and other people with debilitating nerve damage. To this end, the program is heavily engaging the FDA to ensure it meets safety and efficacy regulations for human use.

Will we be there in just four years? I’m skeptical. But these electrical, optical, acoustic, magnetic, and genetic BMIs, as crazy as they sound, seem inevitable.

“DARPA is preparing for a future in which a combination of unmanned systems, AI, and cyber operations may cause conflicts to play out on timelines that are too short for humans to effectively manage with current technology alone,” said Al Emondi, the N3 program manager.

The question is, now that we know what’s in store, how should the rest of us prepare?

Image Credit: With permission from DARPA N3 project. Continue reading

Posted in Human Robots