
#435172 DARPA’s New Project Is Investing ...

When Elon Musk and DARPA both hop aboard the cyborg hypetrain, you know brain-machine interfaces (BMIs) are about to achieve the impossible.

BMIs, already the stuff of science fiction, facilitate crosstalk between biological wetware and external computers, turning human users into literal cyborgs. Yet mind-controlled robotic arms, microelectrode “nerve patches,” or “memory Band-Aids” are still purely experimental medical treatments for those with nervous system impairments.

With the Next-Generation Nonsurgical Neurotechnology (N3) program, DARPA is looking to expand BMIs to the military. This month, the project tapped six academic teams to engineer radically different BMIs to hook up machines to the brains of able-bodied soldiers. The goal is to ditch surgery altogether—while minimizing any biological interventions—to link up brain and machine.

Rather than microelectrodes, which are currently surgically inserted into the brain to hijack neural communication, the project is looking to acoustic signals, electromagnetic waves, nanotechnology, genetically-enhanced neurons, and infrared beams for their next-gen BMIs.

It’s a radical departure from current protocol, with potentially thrilling—or devastating—impact. Wireless BMIs could dramatically boost bodily functions of veterans with neural damage or post-traumatic stress disorder (PTSD), or allow a single soldier to control swarms of AI-enabled drones with his or her mind. Or, similar to the Black Mirror episode Men Against Fire, it could cloud the perception of soldiers, distancing them from the emotional guilt of warfare.

When trickled down to civilian use, these new technologies are poised to revolutionize medical treatment. Or they could galvanize the transhumanist movement with an inconceivably powerful tool that fundamentally alters society—for better or worse.

Here’s what you need to know.

Radical Upgrades
The four-year N3 program focuses on two main aspects: noninvasive and “minutely” invasive neural interfaces that can both read from and write to the brain.

Because noninvasive technologies sit on the scalp, their sensors and stimulators will likely measure entire networks of neurons, such as those controlling movement. These systems could then allow soldiers to remotely pilot robots in the field—drones, rescue bots, or carriers like Boston Dynamics’ BigDog. The system could even boost multitasking prowess—mind-controlling multiple weapons at once—similar to how able-bodied humans can operate a third robotic arm in addition to their own two.

In contrast, minutely invasive technologies allow scientists to deliver nanotransducers without surgery: for example, an injection of a virus carrying light-sensitive sensors, or other chemical, biotech, or self-assembled nanobots that can reach individual neurons and control their activity independently without damaging sensitive tissue. The proposed use for these technologies isn’t yet well-specified, but as animal experiments have shown, controlling the activity of single neurons at multiple points is sufficient to program artificial memories of fear, desire, and experiences directly into the brain.

“A neural interface that enables fast, effective, and intuitive hands-free interaction with military systems by able-bodied warfighters is the ultimate program goal,” DARPA wrote in its funding brief, released early last year.

The only technologies that will be considered must have a viable path toward eventual use in healthy human subjects.

“Final N3 deliverables will include a complete integrated bidirectional brain-machine interface system,” the project description states. This doesn’t just include hardware, but also new algorithms tailored to these systems, demonstrated in a “Department of Defense-relevant application.”

The Tools
Right off the bat, the usual tools of the BMI trade, including microelectrodes, MRI, or transcranial magnetic stimulation (TMS), are off the table. These popular technologies rely on surgery or heavy machinery, or require users to sit very still—conditions unlikely in the real world.

The six teams will tap into three different kinds of natural phenomena for communication: magnetism, light beams, and acoustic waves.

Dr. Jacob Robinson at Rice University, for example, is combining genetic engineering, infrared laser beams, and nanomagnets for a bidirectional system. The $18 million project, MOANA (Magnetic, Optical and Acoustic Neural Access device), uses viruses to deliver two extra genes into the brain. One encodes a protein that sits on top of neurons and emits infrared light when the cell activates. Red and infrared light can penetrate through the skull. This lets a skull cap, embedded with light emitters and detectors, pick up these signals for subsequent decoding. Ultra-fast and ultra-sensitive photodetectors will further allow the cap to ignore scattered light and tease out relevant signals emanating from targeted portions of the brain, the team explained.

The other new gene helps write commands into the brain. This protein tethers iron nanoparticles to the neurons’ activation mechanism. Using magnetic coils on the headset, the team can then remotely stimulate magnetic super-neurons to fire while leaving others alone. Although the team plans to start in cell cultures and animals, their goal is to eventually transmit a visual image from one person to another. “In four years we hope to demonstrate direct, brain-to-brain communication at the speed of thought and without brain surgery,” said Robinson.

Other projects in N3 are just as ambitious.

The Carnegie Mellon team, for example, plans to use ultrasound waves to pinpoint light interaction in targeted brain regions, which can then be measured through a wearable “hat.” To write into the brain, they propose a flexible, wearable electrical mini-generator that counterbalances the noisy effect of the skull and scalp to target specific neural groups.

Similarly, a group at Johns Hopkins is also measuring light path changes in the brain to correlate them with regional brain activity to “read” wetware commands.

The Teledyne Scientific & Imaging group, in contrast, is turning to tiny light-powered “magnetometers” to detect small, localized magnetic fields that neurons generate when they fire, and match these signals to brain output.

The nonprofit Battelle team gets even fancier with their “BrainSTORMS” nanotransducers: magnetic nanoparticles wrapped in a piezoelectric shell. The shell can convert electrical signals from neurons into magnetic ones and vice-versa. This allows external transceivers to wirelessly pick up the transformed signals and stimulate the brain through a bidirectional highway.

The nanotransducers can be delivered into the brain through a nasal spray or other non-invasive methods, and magnetically guided towards targeted brain regions. When no longer needed, they can once again be steered out of the brain and into the bloodstream, where the body can excrete them without harm.

Four-Year Miracle
Mind-blown? Yeah, same. However, the challenges facing the teams are enormous.

DARPA’s stated goal is to hook up at least 16 sites in the brain with the BMI, with a lag of less than 50 milliseconds—on the scale of average human visual perception. That’s crazy high resolution for devices sitting outside the brain, both in space and time. Brain tissue, blood vessels, and the scalp and skull are all barriers that scatter and dissipate neural signals. All six teams will need to figure out the least computationally intensive ways to fish out relevant brain signals from background noise, and triangulate them to the appropriate brain region to decipher intent.
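None of the teams’ decoding pipelines are public, but the core signal-recovery problem can be sketched in a few lines. Here, a matched filter—a generic stand-in, with the sampling rate, waveform, and noise level all invented for illustration—recovers the onset time of a known burst buried in heavy noise:

```python
import numpy as np

def matched_filter_lag(recording, template, fs):
    """Cross-correlate a noisy recording with a known waveform and
    return the estimated onset time of that waveform, in seconds."""
    corr = np.correlate(recording, template, mode="valid")
    return int(np.argmax(corr)) / fs

rng = np.random.default_rng(0)
fs = 1000                                # 1 kHz sampling (illustrative)
t = np.arange(0, 0.05, 1 / fs)           # a 50 ms template window
template = np.sin(2 * np.pi * 40 * t)    # a 40 Hz burst

# Plant the burst at 0.2 s inside one second of background noise.
recording = 0.8 * rng.standard_normal(fs)
recording[200:200 + template.size] += template

print(matched_filter_lag(recording, template, fs))  # recovers ≈ 0.2 s
```

A real system faces a far harder version of this: unknown waveforms, many overlapping sources, and signals attenuated by skull and scalp—which is why the compute budget matters so much at a 50-millisecond deadline.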

In the long run, four years and an average $20 million per project isn’t much to potentially transform our relationship with machines—for better or worse. DARPA, to its credit, is keenly aware of potential misuse of remote brain control. The program is under the guidance of a panel of external advisors with expertise in bioethical issues. And although DARPA’s focus is on enabling able-bodied soldiers to better tackle combat challenges, it’s hard to argue that wireless, non-invasive BMIs won’t also benefit those most in need: veterans and other people with debilitating nerve damage. To this end, the program is heavily engaging the FDA to ensure it meets safety and efficacy regulations for human use.

Will we be there in just four years? I’m skeptical. But these electrical, optical, acoustic, magnetic, and genetic BMIs, as crazy as they sound, seem inevitable.

“DARPA is preparing for a future in which a combination of unmanned systems, AI, and cyber operations may cause conflicts to play out on timelines that are too short for humans to effectively manage with current technology alone,” said Al Emondi, the N3 program manager.

The question is, now that we know what’s in store, how should the rest of us prepare?

Image Credit: With permission from DARPA N3 project.

Posted in Human Robots

#435159 This Week’s Awesome Stories From ...

DeepMind Can Now Beat Us at Multiplayer Games Too
Cade Metz | The New York Times
“DeepMind’s project is part of a broad effort to build artificial intelligence that can play enormously complex, three-dimensional video games, including Quake III, Dota 2 and StarCraft II. Many researchers believe that success in the virtual arena will eventually lead to automated systems with improved abilities in the real world.”

Tiny Robots Carry Stem Cells Through a Mouse
Emily Waltz | IEEE Spectrum
“Engineers have built microrobots to perform all sorts of tasks in the body, and can now add to that list another key skill: delivering stem cells. In a paper, published [May 29] in Science Robotics, researchers describe propelling a magnetically-controlled, stem-cell-carrying bot through a live mouse.” [Video shows microbots navigating a microfluidic chip. MRI could not be used to image the mouse as the bots navigate magnetically.]

How a Quantum Computer Could Break 2048-Bit RSA Encryption in 8 Hours
Emerging Technology From the arXiv | MIT Technology Review
“[Two researchers] have found a more efficient way for quantum computers to perform the code-breaking calculations, reducing the resources they require by orders of magnitude. Consequently, these machines are significantly closer to reality than anyone suspected.” [The arXiv is a pre-print server for research that has not yet been peer reviewed.]

Lyft Has Completed 55,000 Self Driving Rides in Las Vegas
Christine Fisher | Engadget
“One year ago, Lyft launched its self-driving ride service in Las Vegas. Today, the company announced its 30-vehicle fleet has made 55,000 trips. That makes it the largest commercial program of its kind in the US.”

Flying Car Startup Alaka’i Bets Hydrogen Can Outdo Batteries
Eric Adams | Wired
“Alaka’i says the final product will be able to fly for up to four hours and cover 400 miles on a single load of fuel, which can be replenished in 10 minutes at a hydrogen fueling station. It has built a functional, full-scale prototype that will make its first flight ‘imminently,’ a spokesperson says.”

The World Economic Forum Wants to Develop Global Rules for AI
Will Knight | MIT Technology Review
“This week, AI experts, politicians, and CEOs will gather to ask an important question: Can the United States, China, or anyone else agree on how artificial intelligence should be used and controlled?”

Building a Rocket in a Garage to Take on SpaceX and Blue Origin
Jackson Ryan | CNET
“While billionaire entrepreneurs like SpaceX’s Elon Musk and Blue Origin’s Jeff Bezos push the boundaries of human spaceflight and exploration, a legion of smaller private startups around the world have been developing their own rocket technology to launch lighter payloads into orbit.”

Image Credit: Kevin Crosby / Unsplash

Posted in Human Robots

#435140 This Week’s Awesome Stories From ...

Gene Therapy Might Have Its First Blockbuster
Antonio Regalado | MIT Technology Review
“…drug giant Novartis expects to win approval to launch what it says will be the first ‘blockbuster’ gene-replacement treatment. A blockbuster is any drug with more than $1 billion in sales each year. The treatment, called Zolgensma, is able to save infants born with spinal muscular atrophy (SMA) type 1, a degenerative disease that usually kills within two years.”

AI Took a Test to Detect Lung Cancer. It Got an A.
Denise Grady | The New York Times
“Computers were as good or better than doctors at detecting tiny lung cancers on CT scans, in a study by researchers from Google and several medical centers. The technology is a work in progress, not ready for widespread use, but the new report, published Monday in the journal Nature Medicine, offers a glimpse of the future of artificial intelligence in medicine.”

The Rise and Reign of Starship, the World’s First Robotic Delivery Provider
Luke Dormehl | Digital Trends
“[Starship’s] delivery robots have travelled a combined 200,000 miles, carried out 50,000 deliveries, and been tested in over 100 cities in 20 countries. It is a regular fixture not just in multiple neighborhoods but also university campuses.”

Elon Musk Just Ignited the Race to Build the Space Internet
Jonathan O’Callaghan | Wired
“It’s estimated that about 3.3 billion people lack access to the internet, but Elon Musk is trying to change that. On Thursday, May 23—after two cancelled launches the week before—SpaceX launched 60 Starlink satellites on a Falcon 9 rocket from Cape Canaveral, in Florida, as part of the firm’s mission to bring low-cost, high-speed internet to the world.”

The iPod of VR Is Here, and You Should Try It
Mark Wilson | Fast Company
“In nearly 15 years of writing about cutting-edge technology, I’ve never seen a single product line get so much better so fast. With [the Oculus] Quest, there are no PCs required. There are no wires to run. All you do is grab the cloth headset and pull it around your head.”

Impossible Foods’ Rising Empire of Almost Meat
Chris Ip | Engadget
“Impossible says it wants to ultimately create a parallel universe of ersatz animal products from steak to eggs. …Yet as Impossible ventures deeper into the culinary uncanny valley, it also needs society to discard a fundamental cultural idea that dates back millennia and accept a new truth: Meat doesn’t have to come from animals.”

Can We Live Longer but Stay Younger?
Adam Gopnik | The New Yorker
“With greater longevity, the quest to avoid the infirmities of aging is more urgent than ever.”

Facial Recognition Has Already Reached Its Breaking Point
Lily Hay Newman | Wired
“As facial recognition technologies have evolved from fledgling projects into powerful software platforms, researchers and civil liberties advocates have been issuing warnings about the potential for privacy erosions. Those mounting fears came to a head Wednesday in Congress.”

Image Credit: Andrush / Shutterstock.com

Posted in Human Robots

#434837 In Defense of Black Box AI

Deep learning is powering some amazing new capabilities, but we find it hard to scrutinize the workings of these algorithms. Lack of interpretability in AI is a common concern and many are trying to fix it, but is it really always necessary to know what’s going on inside these “black boxes”?

In a recent perspective piece for Science, Elizabeth Holm, a professor of materials science and engineering at Carnegie Mellon University, argued in defense of the black box algorithm. I caught up with her last week to find out more.

Edd Gent: What’s your experience with black box algorithms?

Elizabeth Holm: I got a dual PhD in materials science and engineering and scientific computing. I came to academia about six years ago and part of what I wanted to do in making this career change was to refresh and revitalize my computer science side.

I realized that computer science had changed completely. It used to be about algorithms and making codes run fast, but now it’s about data and artificial intelligence. There are the interpretable methods like random forest algorithms, where we can tell how the machine is making its decisions. And then there are the black box methods, like convolutional neural networks.

Once in a while we can find some information about their inner workings, but most of the time we have to accept their answers and kind of probe around the edges to figure out the space in which we can use them and how reliable and accurate they are.

EG: What made you feel like you had to mount a defense of these black box algorithms?

EH: When I started talking with my colleagues, I found that the black box nature of many of these algorithms was a real problem for them. I could understand that because we’re scientists, we always want to know why and how.

It got me thinking as a bit of a contrarian, “Are black boxes all bad? Must we reject them?” Surely not, because human thought processes are fairly black box. We often rely on human thought processes that the thinker can’t necessarily explain.

It’s looking like we’re going to be stuck with these methods for a while, because they’re really helpful. They do amazing things. And so there’s a very pragmatic realization that these are the best methods we’ve got to do some really important problems, and we’re not right now seeing alternatives that are interpretable. We’re going to have to use them, so we better figure out how.

EG: In what situations do you think we should be using black box algorithms?

EH: I came up with three rules. The simplest rule is: when the cost of a bad decision is small and the value of a good decision is high, it’s worth it. The example I gave in the paper is targeted advertising. If you send an ad no one wants it doesn’t cost a lot. If you’re the receiver it doesn’t cost a lot to get rid of it.
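Holm’s first rule is expected-value arithmetic. A toy sketch with made-up dollar figures (not from the interview): deploy the black box when the expected gain from good decisions outweighs the expected cost of bad ones.

```python
def deploy_black_box(accuracy, value_good, cost_bad):
    """True if the expected value per decision is positive."""
    return accuracy * value_good - (1 - accuracy) * cost_bad > 0

# Targeted ads: a wasted ad costs a cent, a hit is worth two dollars,
# so even a 5% hit rate pays off.
print(deploy_black_box(0.05, 2.00, 0.01))    # → True

# High-stakes call: a bad decision costs $100, so 95% accuracy isn't enough.
print(deploy_black_box(0.95, 1.00, 100.00))  # → False
```

The hard part in practice, as Holm notes below, is knowing whether the costs of bad decisions are fully characterized at all.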

There are cases where the cost is high, and that’s when we choose the black box if it’s the best option to do the job. Things get a little trickier here because we have to ask “what are the costs of bad decisions, and do we really have them fully characterized?” We also have to be very careful knowing that our systems may have biases, they may have limitations in where you can apply them, they may be breakable.

But at the same time, there are certainly domains where we’re going to test these systems so extensively that we know their performance in virtually every situation. And if their performance is better than the other methods, we need to do it. Self driving vehicles are a significant example—it’s almost certain they’re going to have to use black box methods, and that they’re going to end up being better drivers than humans.

The third rule is the more fun one for me as a scientist, and that’s the case where the black box really enlightens us as to a new way to look at something. We have trained a black box to recognize the fracture energy of breaking a piece of metal from a picture of the broken surface. It did a really good job, and humans can’t do this and we don’t know why.

What the computer seems to be seeing is noise. There’s a signal in that noise, and finding it is very difficult, but if we do we may find something significant to the fracture process, and that would be an awesome scientific discovery.

EG: Do you think there’s been too much emphasis on interpretability?

EH: I think the interpretability problem is a fundamental, fascinating computer science grand challenge and there are significant issues where we need to have an interpretable model. But how I would frame it is not that there’s too much emphasis on interpretability, but rather that there’s too much dismissiveness of uninterpretable models.

I think that some of the current social and political issues surrounding some very bad black box outcomes have convinced people that all machine learning and AI should be interpretable because that will somehow solve those problems.

Asking humans to explain their rationale has not eliminated bias, or stereotyping, or bad decision-making in humans. Relying too much on interpretability perhaps puts the responsibility in the wrong place for getting better results. I can make a better black box without knowing exactly in what way the first one was bad.

EG: Looking further into the future, do you think there will be situations where humans will have to rely on black box algorithms to solve problems we can’t get our heads around?

EH: I do think so, and it’s not as much of a stretch as we think it is. For example, humans don’t design the circuit map of computer chips anymore. We haven’t for years. It’s not a black box algorithm that designs those circuit boards, but we’ve long since given up trying to understand a particular computer chip’s design.

With the billions of circuits in every computer chip, the human mind can’t encompass it, either in scope or just the pure time that it would take to trace every circuit. There are going to be cases where we want a system so complex that only the patience that computers have and their ability to work in very high-dimensional spaces is going to be able to do it.

So we can continue to argue about interpretability, but we need to acknowledge that we’re going to need to use black boxes. And this is our opportunity to do our due diligence to understand how to use them responsibly, ethically, and with benefits rather than harm. And that’s going to be a social conversation as well as a scientific one.

*Responses have been edited for length and style

Image Credit: Chingraph / Shutterstock.com

Posted in Human Robots

#434781 What Would It Mean for AI to Become ...

As artificial intelligence systems take on more tasks and solve more problems, it’s hard to say which is rising faster: our interest in them or our fear of them. Futurist Ray Kurzweil famously predicted that “By 2029, computers will have emotional intelligence and be convincing as people.”

We don’t know how accurate this prediction will turn out to be. Even if it takes more than 10 years, though, is it really possible for machines to become conscious? If the machines Kurzweil describes say they’re conscious, does that mean they actually are?

Perhaps a more relevant question at this juncture is: what is consciousness, and how do we replicate it if we don’t understand it?

In a panel discussion at South By Southwest titled “How AI Will Design the Human Future,” experts from academia and industry discussed these questions and more.

Wait, What Is AI?
Most of AI’s recent feats—diagnosing illnesses, participating in debate, writing realistic text—involve machine learning, which uses statistics to find patterns in large datasets, then uses those patterns to make predictions. However, “AI” has been used to refer to everything from basic software automation and algorithms to advanced machine learning and deep learning.
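That fit-then-predict loop can be shown without any framework at all. A minimal sketch with invented numbers: ordinary least squares finds the linear pattern in a small dataset, then extrapolates it to an unseen input.

```python
# Invented observations: y grows roughly as 2x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.0, 8.1]

# "Find the pattern": ordinary least-squares slope and intercept.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# "Use the pattern to predict": extrapolate to an unseen input.
print(slope * 5.0 + intercept)   # prediction for x = 5, close to 10
```

Deep learning replaces the two fitted numbers with millions of parameters, but the statistical logic—fit patterns, then predict—is the same.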

“The term ‘artificial intelligence’ is thrown around constantly and often incorrectly,” said Jennifer Strong, a reporter at the Wall Street Journal and host of the podcast “The Future of Everything.” Indeed, one study found that 40 percent of European companies that claim to be working on or using AI don’t actually use it at all.

Dr. Peter Stone, associate chair of computer science at UT Austin, was the study panel chair on the 2016 One Hundred Year Study on Artificial Intelligence (or AI100) report. Based out of Stanford University, AI100 is studying and anticipating how AI will impact our work, our cities, and our lives.

“One of the first things we had to do was define AI,” Stone said. They defined it as a collection of different technologies inspired by the human brain to be able to perceive their surrounding environment and figure out what actions to take given these inputs.

Modeling on the Unknown
Here’s the crazy thing about that definition (and about AI itself): we’re essentially trying to re-create the abilities of the human brain without having anything close to a thorough understanding of how the human brain works.

“We’re starting to pair our brains with computers, but brains don’t understand computers and computers don’t understand brains,” Stone said. Dr. Heather Berlin, cognitive neuroscientist and professor of psychiatry at the Icahn School of Medicine at Mount Sinai, agreed. “It’s still one of the greatest mysteries how this three-pound piece of matter can give us all our subjective experiences, thoughts, and emotions,” she said.

This isn’t to say we’re not making progress; there have been significant neuroscience breakthroughs in recent years. “This has been the stuff of science fiction for a long time, but now there’s active work being done in this area,” said Amir Husain, CEO and founder of Austin-based AI company Spark Cognition.

Advances in brain-machine interfaces show just how much more we understand the brain now than we did even a few years ago. Neural implants are being used to restore communication or movement capabilities in people who’ve been impaired by injury or illness. Scientists have been able to transfer signals from the brain to prosthetic limbs and stimulate specific circuits in the brain to treat conditions like Parkinson’s, PTSD, and depression.

But much of the brain’s inner workings remain a deep, dark mystery—one that will have to be further solved if we’re ever to get from narrow AI, which refers to systems that can perform specific tasks and is where the technology stands today, to artificial general intelligence, or systems that possess the same intelligence level and learning capabilities as humans.

The biggest question that arises here, and one that’s become a popular theme across stories and films, is if machines achieve human-level general intelligence, does that also mean they’d be conscious?

Wait, What Is Consciousness?
As valuable as the knowledge we’ve accumulated about the brain is, it seems like nothing more than a collection of disparate facts when we try to put it all together to understand consciousness.

“If you can replace one neuron with a silicon chip that can do the same function, then replace another neuron, and another—at what point are you still you?” Berlin asked. “These systems will be able to pass the Turing test, so we’re going to need another concept of how to measure consciousness.”

Is consciousness a measurable phenomenon, though? Rather than progressing by degrees or moving through some gray area, isn’t it pretty black and white—a being is either conscious or it isn’t?

This may be an outmoded way of thinking, according to Berlin. “It used to be that only philosophers could study consciousness, but now we can study it from a scientific perspective,” she said. “We can measure changes in neural pathways. It’s subjective, but depends on reportability.”

She described three levels of consciousness: pure subjective experience (“Look, the sky is blue”), awareness of one’s own subjective experience (“Oh, it’s me that’s seeing the blue sky”), and relating one subjective experience to another (“The blue sky reminds me of a blue ocean”).

“These subjective states exist all the way down the animal kingdom. As humans we have a sense of self that gives us another depth to that experience, but it’s not necessary for pure sensation,” Berlin said.

Husain took this definition a few steps farther. “It’s this self-awareness, this idea that I exist separate from everything else and that I can model myself,” he said. “Human brains have a wonderful simulator. They can propose a course of action virtually, in their minds, and see how things play out. The ability to include yourself as an actor means you’re running a computation on the idea of yourself.”

Most of the decisions we make involve envisioning different outcomes, thinking about how each outcome would affect us, and choosing which outcome we’d most prefer.

“Complex tasks you want to achieve in the world are tied to your ability to foresee the future, at least based on some mental model,” Husain said. “With that view, I as an AI practitioner don’t see a problem implementing that type of consciousness.”
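Husain’s “simulator” amounts to forward-model planning, which is easy to sketch. In this toy example (the world model, actions, and utility function are all invented for illustration), the agent imagines each action’s outcome and picks the one it prefers:

```python
def plan(actions, simulate, utility):
    """Pick the action whose imagined (simulated) outcome scores best."""
    return max(actions, key=lambda a: utility(simulate(a)))

# Toy world model: the agent sits at position 3 on a line; the goal is 10.
state = 3
simulate = lambda action: state + {"left": -1, "stay": 0, "right": 1}[action]
utility = lambda outcome: -abs(10 - outcome)  # closer to the goal is better

print(plan(["left", "stay", "right"], simulate, utility))  # → right
```

Including yourself as an actor in the simulated world—running a computation on the idea of yourself, in Husain’s phrase—is the step this sketch leaves out, and the one that makes the consciousness question interesting.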

Moving Forward Cautiously (But Not Too Cautiously)
To be clear, we’re nowhere near machines achieving artificial general intelligence or consciousness, and whether a “conscious machine” is possible—not to mention necessary or desirable—is still very much up for debate.

As machine intelligence continues to advance, though, we’ll need to walk the line between progress and risk management carefully.

Improving the transparency and explainability of AI systems is one crucial goal AI developers and researchers are zeroing in on. Especially in applications that could mean the difference between life and death, AI shouldn’t advance without people being able to trace how it’s making decisions and reaching conclusions.

Medicine is a prime example. “There are already advances that could save lives, but they’re not being used because they’re not trusted by doctors and nurses,” said Stone. “We need to make sure there’s transparency.” Demanding too much transparency would also be a mistake, though, because it will hinder the development of systems that could at best save lives and at worst improve efficiency and free up doctors to have more face time with patients.

Similarly, self-driving cars have great potential to reduce traffic fatalities. But even though humans cause thousands of deadly crashes every day, we’re terrified by the idea of self-driving cars that are anything less than perfect. “If we only accept autonomous cars when there’s zero probability of an accident, then we will never accept them,” Stone said. “Yet we give 16-year-olds the chance to take a road test with no idea what’s going on in their brains.”

This brings us back to the fact that, in building tech modeled after the human brain—which has evolved over millions of years—we’re working towards an end whose means we don’t fully comprehend, be it something as basic as choosing when to brake or accelerate or something as complex as measuring consciousness.

“We shouldn’t charge ahead and do things just because we can,” Stone said. “The technology can be very powerful, which is exciting, but we have to consider its implications.”

Image Credit: agsandrew / Shutterstock.com

Posted in Human Robots