Tag Archives: tiny

#435172 DARPA’s New Project Is Investing ...

When Elon Musk and DARPA both hop aboard the cyborg hype train, you know brain-machine interfaces (BMIs) are about to achieve the impossible.

BMIs, long the stuff of science fiction, facilitate crosstalk between biological wetware and external computers, turning human users into literal cyborgs. Yet mind-controlled robotic arms, microelectrode “nerve patches,” and “memory Band-Aids” remain purely experimental medical treatments for those with nervous system impairments.

With the Next-Generation Nonsurgical Neurotechnology (N3) program, DARPA is looking to expand BMIs to the military. This month, the project tapped six academic teams to engineer radically different BMIs to hook up machines to the brains of able-bodied soldiers. The goal is to ditch surgery altogether—while minimizing any biological interventions—to link up brain and machine.

Rather than microelectrodes, which are currently surgically inserted into the brain to hijack neural communication, the project is looking to acoustic signals, electromagnetic waves, nanotechnology, genetically enhanced neurons, and infrared beams for its next-gen BMIs.

It’s a radical departure from current protocol, with potentially thrilling—or devastating—impact. Wireless BMIs could dramatically boost the bodily functions of veterans with neural damage or post-traumatic stress disorder (PTSD), or allow a single soldier to control swarms of AI-enabled drones with his or her mind. Or, as in the Black Mirror episode “Men Against Fire,” they could cloud soldiers’ perception, distancing them from the emotional guilt of warfare.

When trickled down to civilian use, these new technologies are poised to revolutionize medical treatment. Or they could galvanize the transhumanist movement with an inconceivably powerful tool that fundamentally alters society—for better or worse.

Here’s what you need to know.

Radical Upgrades
The four-year N3 program focuses on two main tracks: noninvasive and “minutely” invasive neural interfaces that can both read from and write to the brain.

Because noninvasive technologies sit on the scalp, their sensors and stimulators will likely measure entire networks of neurons, such as those controlling movement. These systems could then allow soldiers to remotely pilot robots in the field—drones, rescue bots, or carriers like Boston Dynamics’ BigDog. The system could even boost multitasking prowess—mind-controlling multiple weapons at once—similar to how able-bodied humans can operate a third robotic arm in addition to their own two.

In contrast, minutely invasive technologies allow scientists to deliver nanotransducers without surgery: for example, an injection of a virus carrying light-sensitive sensors, or other chemical, biotech, or self-assembled nanobots that can reach individual neurons and control their activity independently without damaging sensitive tissue. The proposed use for these technologies isn’t yet well-specified, but as animal experiments have shown, controlling the activity of single neurons at multiple points is sufficient to program artificial memories of fear, desire, and experiences directly into the brain.

“A neural interface that enables fast, effective, and intuitive hands-free interaction with military systems by able-bodied warfighters is the ultimate program goal,” DARPA wrote in its funding brief, released early last year.

To be considered, a technology must have a viable path toward eventual use in healthy human subjects.

“Final N3 deliverables will include a complete integrated bidirectional brain-machine interface system,” the project description states. This doesn’t just include hardware, but also new algorithms tailored to these systems, demonstrated in a “Department of Defense-relevant application.”

The Tools
Right off the bat, the usual tools of the BMI trade, including microelectrodes, MRI, and transcranial magnetic stimulation (TMS), are off the table. These popular technologies rely on surgery or heavy machinery, or require the user to sit very still—conditions unlikely to hold in the real world.

The six teams will tap into three different kinds of natural phenomena for communication: magnetism, light beams, and acoustic waves.

Dr. Jacob Robinson at Rice University, for example, is combining genetic engineering, infrared laser beams, and nanomagnets for a bidirectional system. The $18 million project, MOANA (Magnetic, Optical and Acoustic Neural Access device), uses viruses to deliver two extra genes into the brain. One encodes a protein that sits on top of neurons and emits infrared light when the cell activates. Red and infrared light can penetrate the skull, letting a skull cap embedded with light emitters and detectors pick up these signals for subsequent decoding. Ultra-fast and ultra-sensitive photodetectors will further allow the cap to ignore scattered light and tease out relevant signals emanating from targeted portions of the brain, the team explained.

The other new gene helps write commands into the brain. This protein tethers iron nanoparticles to the neurons’ activation mechanism. Using magnetic coils on the headset, the team can then remotely stimulate these magnetized “super-neurons” to fire while leaving others alone. Although the team plans to start in cell cultures and animals, its goal is to eventually transmit a visual image from one person to another. “In four years we hope to demonstrate direct, brain-to-brain communication at the speed of thought and without brain surgery,” said Robinson.

Other projects in N3 are just as ambitious.

The Carnegie Mellon team, for example, plans to use ultrasound waves to pinpoint light interaction in targeted brain regions, which can then be measured through a wearable “hat.” To write into the brain, they propose a flexible, wearable electrical mini-generator that counterbalances the noisy effect of the skull and scalp to target specific neural groups.

Similarly, a group at Johns Hopkins is measuring changes in the paths light takes through the brain and correlating them with regional brain activity to “read” wetware commands.

The Teledyne Scientific & Imaging group, in contrast, is turning to tiny light-powered “magnetometers” to detect small, localized magnetic fields that neurons generate when they fire, and match these signals to brain output.

The nonprofit Battelle team gets even fancier with its “BrainSTORMS” nanotransducers: magnetic nanoparticles wrapped in a piezoelectric shell. The shell can convert electrical signals from neurons into magnetic ones and vice versa. This allows external transceivers to wirelessly pick up the transformed signals and stimulate the brain through a bidirectional highway.

The nanotransducers can be delivered into the brain through a nasal spray or other non-invasive methods, and magnetically guided toward targeted brain regions. When no longer needed, they can once again be steered out of the brain and into the bloodstream, where the body can excrete them without harm.

Four-Year Miracle
Mind-blown? Yeah, same. However, the challenges facing the teams are enormous.

DARPA’s stated goal is to hook up at least 16 sites in the brain with the BMI, with a lag of less than 50 milliseconds—on the scale of average human visual perception. That’s crazy-high resolution, in both space and time, for devices sitting outside the brain. Brain tissue, blood vessels, and the scalp and skull are all barriers that scatter and dissipate neural signals. All six teams will need to figure out the least computationally intensive ways to fish relevant brain signals out of background noise, and triangulate them to the appropriate brain region to decipher intent.
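None of the teams has published its signal-processing pipeline, but the core difficulty is easy to sketch: recovering a weak, band-limited signal from broadband noise. The toy Python example below shows the simplest version of the idea using an off-the-shelf bandpass filter; the 12 Hz rhythm, noise level, and filter band are illustrative choices, not DARPA specifications.

```python
# Toy illustration (not any N3 team's actual method): pull a narrowband
# "neural" oscillation out of broadband noise with a bandpass filter.
# Real through-skull decoding is vastly harder than this.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000                                    # sample rate, Hz
t = np.arange(0, 2, 1 / fs)                  # two seconds of data
signal = np.sin(2 * np.pi * 12 * t)          # clean 12 Hz rhythm
noisy = signal + 2.0 * np.random.randn(t.size)  # buried in noise

# Fourth-order Butterworth bandpass around the band of interest (8-15 Hz)
b, a = butter(4, [8, 15], btype="band", fs=fs)
recovered = filtfilt(b, a, noisy)            # zero-phase filtering

# How closely does the recovered trace track the clean rhythm?
print(np.corrcoef(signal, recovered)[0, 1])
```

Scaling this from one synthetic channel to 16 brain sites, through skull and scalp, within a 50-millisecond budget, is exactly where the hard engineering lives.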

In the long run, four years and an average $20 million per project isn’t much to potentially transform our relationship with machines—for better or worse. DARPA, to its credit, is keenly aware of the potential misuse of remote brain control. The program is under the guidance of a panel of external advisors with expertise in bioethical issues. And although DARPA’s focus is on enabling able-bodied soldiers to better tackle combat challenges, it’s hard to deny that wireless, non-invasive BMIs would also benefit those most in need: veterans and other people with debilitating nerve damage. To this end, the program is heavily engaging the FDA to ensure it meets safety and efficacy regulations for human use.

Will we be there in just four years? I’m skeptical. But these electrical, optical, acoustic, magnetic, and genetic BMIs, as crazy as they sound, seem inevitable.

“DARPA is preparing for a future in which a combination of unmanned systems, AI, and cyber operations may cause conflicts to play out on timelines that are too short for humans to effectively manage with current technology alone,” said Al Emondi, the N3 program manager.

The question is, now that we know what’s in store, how should the rest of us prepare?

Image Credit: With permission from DARPA N3 project.

Posted in Human Robots

#435159 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
DeepMind Can Now Beat Us at Multiplayer Games Too
Cade Metz | The New York Times
“DeepMind’s project is part of a broad effort to build artificial intelligence that can play enormously complex, three-dimensional video games, including Quake III, Dota 2 and StarCraft II. Many researchers believe that success in the virtual arena will eventually lead to automated systems with improved abilities in the real world.”

ROBOTICS
Tiny Robots Carry Stem Cells Through a Mouse
Emily Waltz | IEEE Spectrum
“Engineers have built microrobots to perform all sorts of tasks in the body, and can now add to that list another key skill: delivering stem cells. In a paper, published [May 29] in Science Robotics, researchers describe propelling a magnetically-controlled, stem-cell-carrying bot through a live mouse.” [Video shows microbots navigating a microfluidic chip. MRI could not be used to image the mouse as the bots navigate magnetically.]

COMPUTING
How a Quantum Computer Could Break 2048-Bit RSA Encryption in 8 Hours
Emerging Technology From the arXiv | MIT Technology Review
“[Two researchers] have found a more efficient way for quantum computers to perform the code-breaking calculations, reducing the resources they require by orders of magnitude. Consequently, these machines are significantly closer to reality than anyone suspected.” [The arXiv is a pre-print server for research that has not yet been peer reviewed.]

AUTOMATION
Lyft Has Completed 55,000 Self-Driving Rides in Las Vegas
Christine Fisher | Engadget
“One year ago, Lyft launched its self-driving ride service in Las Vegas. Today, the company announced its 30-vehicle fleet has made 55,000 trips. That makes it the largest commercial program of its kind in the US.”

TRANSPORTATION
Flying Car Startup Alaka’i Bets Hydrogen Can Outdo Batteries
Eric Adams | Wired
“Alaka’i says the final product will be able to fly for up to four hours and cover 400 miles on a single load of fuel, which can be replenished in 10 minutes at a hydrogen fueling station. It has built a functional, full-scale prototype that will make its first flight ‘imminently,’ a spokesperson says.”

ETHICS
The World Economic Forum Wants to Develop Global Rules for AI
Will Knight | MIT Technology Review
“This week, AI experts, politicians, and CEOs will gather to ask an important question: Can the United States, China, or anyone else agree on how artificial intelligence should be used and controlled?”

SPACE
Building a Rocket in a Garage to Take on SpaceX and Blue Origin
Jackson Ryan | CNET
“While billionaire entrepreneurs like SpaceX’s Elon Musk and Blue Origin’s Jeff Bezos push the boundaries of human spaceflight and exploration, a legion of smaller private startups around the world have been developing their own rocket technology to launch lighter payloads into orbit.”

Image Credit: Kevin Crosby / Unsplash

Posted in Human Robots

#435140 This Week’s Awesome Stories From ...

GENETICS
Gene Therapy Might Have Its First Blockbuster
Antonio Regalado | MIT Technology Review
“…drug giant Novartis expects to win approval to launch what it says will be the first ‘blockbuster’ gene-replacement treatment. A blockbuster is any drug with more than $1 billion in sales each year. The treatment, called Zolgensma, is able to save infants born with spinal muscular atrophy (SMA) type 1, a degenerative disease that usually kills within two years.”

ARTIFICIAL INTELLIGENCE
AI Took a Test to Detect Lung Cancer. It Got an A.
Denise Grady | The New York Times
“Computers were as good or better than doctors at detecting tiny lung cancers on CT scans, in a study by researchers from Google and several medical centers. The technology is a work in progress, not ready for widespread use, but the new report, published Monday in the journal Nature Medicine, offers a glimpse of the future of artificial intelligence in medicine.”

ROBOTICS
The Rise and Reign of Starship, the World’s First Robotic Delivery Provider
Luke Dormehl | Digital Trends
“[Starship’s] delivery robots have travelled a combined 200,000 miles, carried out 50,000 deliveries, and been tested in over 100 cities in 20 countries. It is a regular fixture not just in multiple neighborhoods but also university campuses.”

SPACE
Elon Musk Just Ignited the Race to Build the Space Internet
Jonathan O’Callaghan | Wired
“It’s estimated that about 3.3 billion people lack access to the internet, but Elon Musk is trying to change that. On Thursday, May 23—after two cancelled launches the week before—SpaceX launched 60 Starlink satellites on a Falcon 9 rocket from Cape Canaveral, in Florida, as part of the firm’s mission to bring low-cost, high-speed internet to the world.”

VIRTUAL REALITY
The iPod of VR Is Here, and You Should Try It
Mark Wilson | Fast Company
“In nearly 15 years of writing about cutting-edge technology, I’ve never seen a single product line get so much better so fast. With [the Oculus] Quest, there are no PCs required. There are no wires to run. All you do is grab the cloth headset and pull it around your head.”

FUTURE OF FOOD
Impossible Foods’ Rising Empire of Almost Meat
Chris Ip | Engadget
“Impossible says it wants to ultimately create a parallel universe of ersatz animal products from steak to eggs. …Yet as Impossible ventures deeper into the culinary uncanny valley, it also needs society to discard a fundamental cultural idea that dates back millennia and accept a new truth: Meat doesn’t have to come from animals.”

LONGEVITY
Can We Live Longer but Stay Younger?
Adam Gopnik | The New Yorker
“With greater longevity, the quest to avoid the infirmities of aging is more urgent than ever.”

PRIVACY
Facial Recognition Has Already Reached Its Breaking Point
Lily Hay Newman | Wired
“As facial recognition technologies have evolved from fledgling projects into powerful software platforms, researchers and civil liberties advocates have been issuing warnings about the potential for privacy erosions. Those mounting fears came to a head Wednesday in Congress.”

Image Credit: Andrush / Shutterstock.com

Posted in Human Robots

#435127 Teaching AI the Concept of ‘Similar, ...

As a human you instinctively know that a leopard is closer to a cat than to a motorbike, but the way we train most AI systems makes them oblivious to these kinds of relations. Building the concept of similarity into our algorithms could make them far more capable, writes the author of a new paper in Science Robotics.

Convolutional neural networks have revolutionized the field of computer vision to the point that machines are now outperforming humans on some of the most challenging visual tasks. But the way we train them to analyze images is very different from the way humans learn, says Atsuto Maki, an associate professor at KTH Royal Institute of Technology.

“Imagine that you are two years old and being quizzed on what you see in a photo of a leopard,” he writes. “You might answer ‘a cat’ and your parents might say, ‘yeah, not quite but similar’.”

In contrast, the way we train neural networks rarely gives that kind of partial credit. They are typically trained to have very high confidence in the correct label and to consider all incorrect labels, whether “cat” or “motorbike,” equally wrong. That’s a mistake, says Maki, because ignoring the fact that something can be “less wrong” means you’re not exploiting all of the information in the training data.

Even when models are trained this way, there will be small differences in the probabilities assigned to incorrect labels that can tell you a lot about how well the model can generalize what it has learned to unseen data.

If you show a model a picture of a leopard and it gives “cat” a probability of five percent and “motorbike” one percent, that suggests it picked up on the fact that a cat is closer to a leopard than a motorbike. In contrast, if the figures are the other way around it means the model hasn’t learned the broad features that make cats and leopards similar, something that could potentially be helpful when analyzing new data.
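To make that concrete, here is a tiny numerical sketch (the probabilities are invented for illustration): two models give the same 94 percent to the correct “leopard” label, so the standard cross-entropy loss against the hard label cannot tell them apart, even though one has clearly learned more sensible class relations than the other.

```python
import numpy as np

classes = ["leopard", "cat", "motorbike"]  # index order for the vectors below

# Two hypothetical models shown the same leopard photo
model_a = np.array([0.94, 0.05, 0.01])  # cat > motorbike: sensible
model_b = np.array([0.94, 0.01, 0.05])  # motorbike > cat: suspicious

# Cross-entropy against the hard label "leopard" only looks at the
# first entry, so both models receive exactly the same loss.
hard_label = np.array([1.0, 0.0, 0.0])
for probs in (model_a, model_b):
    print(-np.sum(hard_label * np.log(probs)))  # ~0.062 both times
```

The ordering of the incorrect probabilities is the signal Maki cares about, and it is invisible to a loss that only rewards the correct class.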

If we could boost this ability to identify similarities between classes we should be able to create more flexible models better able to generalize, says Maki. And recent research has demonstrated how variations of an approach called regularization might help us achieve that goal.

Neural networks are prone to a problem called “overfitting,” which refers to a tendency to pay too much attention to tiny details and noise specific to their training set. When that happens, models will perform excellently on their training data but poorly when applied to unseen test data without these particular quirks.

Regularization is used to circumvent this problem, typically by reducing the network’s capacity to learn all this unnecessary information and therefore boost its ability to generalize to new data. Techniques are varied, but generally involve modifying the network’s structure or the strength of the weights between artificial neurons.
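As a concrete example, two of the most common regularizers, dropout (which randomly zeroes activations during training, modifying the effective structure) and weight decay (which shrinks the weights toward zero), each take a single line in PyTorch. The layer sizes and hyperparameters below are arbitrary placeholders.

```python
import torch
import torch.nn as nn

# A small classifier with dropout between its layers
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # randomly zero half the activations during training
    nn.Linear(256, 10),
)

# Weight decay adds an L2 penalty that pulls all weights toward zero
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)
```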

More recently, though, researchers have suggested new regularization approaches that work by encouraging a broader spread of probabilities across all classes. This essentially helps them capture more of the class similarities, says Maki, and therefore boosts their ability to generalize.

One such approach was devised in 2017 by Google Brain researchers, led by deep learning pioneer Geoffrey Hinton. They introduced a penalty to their training process that directly punished overconfident predictions in the model’s outputs, and a technique called label smoothing that prevents the largest probability from becoming much larger than all the others. This meant the probabilities were lower for correct labels and higher for incorrect ones, which was found to boost the performance of models on varied tasks from image classification to speech recognition.
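The paper’s exact training setup isn’t reproduced here, but the label smoothing idea itself fits in a few lines: instead of a one-hot target, the true class gets probability 1 − eps and the remainder is spread uniformly over all classes. The eps = 0.1 below is a common default, not necessarily the value the researchers used.

```python
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, targets, eps=0.1):
    """Cross-entropy against smoothed labels: the true class keeps
    probability 1 - eps, and eps is spread uniformly over all classes,
    so no single output probability can dominate."""
    n_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    one_hot = F.one_hot(targets, n_classes).float()
    smooth_targets = one_hot * (1 - eps) + eps / n_classes
    return -(smooth_targets * log_probs).sum(dim=-1).mean()

# Toy batch: 4 examples, 10 classes
logits = torch.randn(4, 10)
targets = torch.tensor([3, 1, 9, 0])
print(smoothed_cross_entropy(logits, targets))
```

Recent versions of PyTorch expose the same idea directly via the label_smoothing argument of nn.CrossEntropyLoss.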

Another came from Maki himself in 2017 and achieves the same goal, but by suppressing high values in the model’s feature vector—the mathematical construct that describes all of an object’s important characteristics. This has a knock-on effect on the spread of output probabilities and also helped boost performance on various image classification tasks.
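Maki’s paper isn’t quoted in enough detail here to reproduce his exact formulation, but a generic version of the idea, a penalty that bites only on unusually large feature values, might look something like the sketch below; the threshold and weighting are hypothetical.

```python
import torch

def high_feature_penalty(features, threshold=1.0, lam=1e-3):
    """Hypothetical sketch, not Maki's published method: penalize only
    the portion of each feature value that exceeds a threshold, which
    suppresses spikes while leaving moderate activations alone."""
    excess = torch.relu(features.abs() - threshold)
    return lam * (excess ** 2).sum(dim=-1).mean()

# Penultimate-layer features for a toy batch of 4 examples
features = torch.randn(4, 512, requires_grad=True)
penalty = high_feature_penalty(features)
penalty.backward()  # gradients flow only through the oversized values
```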

While it’s still early days for the approach, the fact that humans are able to exploit these kinds of similarities to learn more efficiently suggests that models that incorporate them hold promise. Maki points out that it could be particularly useful in applications such as robotic grasping, where distinguishing various similar objects is important.

Image Credit: Marianna Kalashnyk / Shutterstock.com

Posted in Human Robots

#434843 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
OpenAI’s Dota 2 AI Steamrolls World Champion e-Sports Team With Back-to-Back Victories
Nick Statt | The Verge
“…[OpenAI cofounder and CEO, Sam Altman] tells me there probably does not exist a video game out there right now that a system like OpenAI Five can’t eventually master at a level beyond human capability. For the broader AI industry, mastering video games may soon become passé, simple table stakes required to prove your system can learn fast and act in a way required to tackle tougher, real-world tasks with more meaningful benefits.”

ROBOTICS
Boston Dynamics Debuts the Production Version of SpotMini
Brian Heater, Catherine Shu | TechCrunch
“SpotMini is the first commercial robot Boston Dynamics is set to release, but as we learned earlier, it certainly won’t be the last. The company is looking to its wheeled Handle robot in an effort to push into the logistics space. It’s a super-hot category for robotics right now. Notably, Amazon recently acquired Colorado-based start up Canvas to add to its own arm of fulfillment center robots.”

NEUROSCIENCE
Scientists Restore Some Brain Cell Functions in Pigs Four Hours After Death
Joel Achenbach | The Washington Post
“The ethicists say this research can blur the line between life and death, and could complicate the protocols for organ donation, which rely on a clear determination of when a person is dead and beyond resuscitation.”

BIOTECH
How Scientists 3D Printed a Tiny Heart From Human Cells
Yasmin Saplakoglu | Live Science
“Though the heart is much smaller than a human’s (it’s only the size of a rabbit’s), and there’s still a long way to go until it functions like a normal heart, the proof-of-concept experiment could eventually lead to personalized organs or tissues that could be used in the human body…”

SPACE
The Next Clash of Silicon Valley Titans Will Take Place in Space
Luke Dormehl | Digital Trends
“With bold plans that call for thousands of new satellites being put into orbit and astronomical costs, it’s going to be fascinating to observe the next phase of the tech platform battle being fought not on our desktops or mobile devices in our pockets, but outside of Earth’s atmosphere.”

FUTURE HISTORY
The Images That Could Help Rebuild Notre-Dame Cathedral
Alexis C. Madrigal | The Atlantic
“…in 2010, [Andrew] Tallon, an art professor at Vassar, took a Leica ScanStation C10 to Notre-Dame and, with the assistance of Columbia’s Paul Blaer, began to painstakingly scan every piece of the structure, inside and out. …Over five days, they positioned the scanner again and again—50 times in all—to create an unmatched record of the reality of one of the world’s most awe-inspiring buildings, represented as a series of points in space.”

AUGMENTED REALITY
Mapping Our World in 3D Will Let Us Paint Streets With Augmented Reality
Charlotte Jee | MIT Technology Review
“Scape wants to use its location services to become the underlying infrastructure upon which driverless cars, robotics, and augmented-reality services sit. ‘Our end goal is a one-to-one map of the world covering everything,’ says Miller. ‘Our ambition is to be as invisible as GPS is today.’”

Image Credit: VAlex / Shutterstock.com

Posted in Human Robots