Tag Archives: parts

#435423 Moving Beyond Mind-Controlled Limbs to ...

Brain-machine interface enthusiasts often gush about “closing the loop.” It’s for good reason. On the implant level, it means engineering smarter probes that only activate when they detect faulty electrical signals in brain circuits. Elon Musk’s Neuralink—among other players—is actively pursuing these bi-directional implants that both measure and zap the brain.

But to scientists laboring to restore functionality to paralyzed patients or amputees, “closing the loop” has broader connotations. Building smart mind-controlled robotic limbs isn’t enough; the next frontier is restoring sensation in offline body parts. To truly meld biology with machine, the robotic appendage has to “feel as one” with the body.

This month, two studies from Science Robotics describe complementary ways forward. In one, scientists from the University of Utah paired a state-of-the-art robotic arm—the DEKA LUKE—with electrical stimulation of the remaining nerves above the attachment point. Using artificial zaps to mimic the skin’s natural response patterns to touch, the team dramatically increased the patient’s ability to identify objects. Without much training, he could easily discriminate between small and large objects, and between soft and hard ones, while blindfolded and wearing headphones.

In another, a team based at the National University of Singapore took inspiration from our largest organ, the skin. Mimicking the neural architecture of biological skin, the engineered “electronic skin” not only senses temperature, pressure, and humidity, but continues to function even when scraped or otherwise damaged. Thanks to its artificial nerves, the flexible e-skin transmits electrical data roughly 1,000 times faster than biological nerves can.

Together, the studies marry neuroscience and robotics. Representing the latest push towards closing the loop, they show that integrating biological sensibilities with robotic efficiency isn’t impossible (super-human touch, anyone?). But more immediately—and more importantly—they’re beacons of hope for patients who hope to regain their sense of touch.

For one of the participants, a late middle-aged man with speckled white hair who lost his forearm 13 years ago, superpowers, cyborgs, and razzle-dazzle brain implants are the last thing on his mind. After a barrage of emotionally neutral scientific tests, he grasped his wife’s hand and felt her warmth for the first time in over a decade. His face lit up in a blinding smile.

That’s what scientists are working towards.

Biomimetic Feedback
The human skin is a marvelous thing. Not only does it rapidly detect a multitude of sensations—pressure, temperature, itch, pain, humidity—its wiring “binds” disparate signals together into a sensory fingerprint that helps the brain identify what it’s feeling at any moment. Thanks to over 45 miles of nerves that connect the skin, muscles, and brain, you can pick up a half-full coffee cup, knowing that it’s hot and sloshing, while staring at your computer screen. Unfortunately, this complexity is also why restoring sensation is so hard.

The sensory electrode array implanted in the participant’s arm. Image Credit: George et al., Sci. Robot. 4, eaax2352 (2019).
However, complex neural patterns can also be a source of inspiration. Earlier cyborg arms were often paired with so-called “standard” sensory algorithms to induce a basic sense of touch in the missing limb. Here, electrodes zap residual nerves with intensities proportional to the contact force: the harder the grip, the stronger the electrical feedback. Although seemingly logical, that’s not how our skin works. Every time the skin touches or leaves an object, its nerves shoot strong bursts of activity to the brain; during sustained contact, the signal is much lower. The resulting curve of electrical strength over time resembles a “U.”
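
As a rough illustration of the difference, here is a minimal sketch (not the authors' algorithm) contrasting force-proportional feedback with a biomimetic scheme that emphasizes changes in contact force:

```python
# A minimal sketch (not the authors' algorithm) contrasting the two
# feedback styles. "Standard" feedback scales with instantaneous force;
# the biomimetic variant emphasizes changes in force, producing strong
# bursts at contact onset and release and a quiet plateau in between.

def standard_feedback(forces, gain=1.0):
    """Stimulation amplitude proportional to contact force."""
    return [gain * f for f in forces]

def biomimetic_feedback(forces, rate_gain=5.0, sustain_gain=0.2):
    """Strong response to force changes, weak response to sustained force."""
    out, prev = [], 0.0
    for f in forces:
        out.append(sustain_gain * f + rate_gain * abs(f - prev))
        prev = f
    return out

# A grip that ramps up, holds, then releases: the biomimetic output
# peaks at onset and release, tracing the "U" described above.
grip = [0, 0, 1, 2, 2, 2, 2, 1, 0, 0]
print(biomimetic_feedback(grip))
```

The gains are arbitrary; the point is only the shape of the response over time.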

The LUKE hand. Image Credit: George et al., Sci. Robot. 4, eaax2352 (2019).
The team decided to directly compare standard algorithms with one that better mimics the skin’s natural response. They fitted a volunteer with a robotic LUKE arm and implanted an array of electrodes into his forearm—right above the amputation—to stimulate the remaining nerves. When the team activated different combinations of electrodes, the man reported sensations of vibration, pressure, tapping, or a sort of “tightening” in his missing hand. Some combinations of zaps also made him feel as if he were moving the robotic arm’s joints.

In all, the team was able to carefully map nearly 120 sensations to different locations on the phantom hand, which they then overlapped with contact sensors embedded in the LUKE arm. For example, when the patient touched something with his robotic index finger, the relevant electrodes sent signals that made him feel as if he were brushing something with his own missing index fingertip.

Standard sensory feedback already helped: even with simple electrical stimulation, the man could tell apart size (golf versus lacrosse ball) and texture (foam versus plastic) while blindfolded and wearing noise-canceling headphones. But when the team implemented two types of neuromimetic feedback—electrical zaps that resembled the skin’s natural response—his performance dramatically improved. He was able to identify objects much faster and more accurately under their guidance. Outside the lab, he also found it easier to cook, feed, and dress himself. He could even text on his phone and complete routine chores that were previously too difficult, such as stuffing an insert into a pillowcase, hammering a nail, or eating hard-to-grab foods like eggs and grapes.

The study shows that the brain more readily accepts biologically-inspired electrical patterns, making it a relatively easy—but enormously powerful—upgrade that seamlessly integrates the robotic arms with the host. “The functional and emotional benefits…are likely to be further enhanced with long-term use, and efforts are underway to develop a portable take-home system,” the team said.

E-Skin Revolution: Asynchronous Coded Electronic Skin (ACES)
Flexible electronic skins also aren’t new, but the second team presented an upgrade in both speed and durability while retaining multiplexed sensory capabilities.

Starting from a combination of rubber, plastic, and silicon, the team embedded over 200 sensors onto the e-skin, each capable of discerning contact, pressure, temperature, and humidity. They then looked to the skin’s nervous system for inspiration. Our skin is embedded with a dense array of nerve endings that individually transmit different types of sensations, which are integrated inside hubs called ganglia. Compared to having every single nerve ending directly ping data to the brain, this “gather, process, and transmit” architecture rapidly speeds things up.

The team tapped into this biological architecture. Rather than pairing each sensor with a dedicated receiver, ACES sends all sensory data to a single receiver—an artificial ganglion. This setup lets the e-skin’s wiring work as a whole system, as opposed to individual electrodes. Every sensor transmits its data using a characteristic pulse, which allows it to be uniquely identified by the receiver.
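
The “characteristic pulse” idea can be sketched as a toy lookup scheme. The 8-bit binary signatures below are an assumed encoding for illustration, not the published ACES pulse scheme:

```python
# Toy sketch of the single-receiver idea. The 8-bit binary signatures
# are an assumed encoding for illustration, not the published ACES
# pulse scheme.

SIGNATURE_BITS = 8  # 2**8 = 256 codes, enough for the 240-sensor prototype

def make_signatures(n_sensors):
    """Assign each sensor a distinct pulse signature (here, a bit string)."""
    assert n_sensors <= 2 ** SIGNATURE_BITS
    return {s: format(s, f"0{SIGNATURE_BITS}b") for s in range(n_sensors)}

def transmit(sensor, signatures):
    """A sensor fires its characteristic pulse onto the shared line."""
    return signatures[sensor]

def receive(pulse, signatures):
    """The artificial ganglion maps a pulse back to the sensor that fired."""
    lookup = {sig: s for s, sig in signatures.items()}
    return lookup[pulse]

sigs = make_signatures(240)
# Any sensor's event can be attributed correctly at the single receiver.
print(receive(transmit(17, sigs), sigs))  # → 17
```

Because identity travels with the pulse itself, adding sensors means adding codes, not wires.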

The gains were immediate. First was speed. Normally, sensory data from multiple individual electrodes need to be periodically combined into a map of pressure points. Here, data from thousands of distributed sensors can independently go to a single receiver for further processing, massively increasing efficiency—the new e-skin’s transmission rate is roughly 1,000 times faster than that of human skin.

Second was redundancy. Because data from individual sensors are aggregated, the system still functions even when individual receptors are damaged, making it far more resilient than previous attempts. Finally, the setup scales easily. Although the team only tested the idea with 240 sensors, the system should theoretically work with up to 10,000.

The team is now exploring ways to combine their invention with other material layers to make it water-resistant and self-repairable. As you might’ve guessed, an immediate application is to give robots something similar to complex touch. A sensory upgrade not only lets robots more easily manipulate tools, doorknobs, and other objects in hectic real-world environments, it could also make it easier for machines to work collaboratively with humans in the future (hey Wall-E, care to pass the salt?).

Dexterous robots aside, the team also envisions engineering better prosthetics. When coated onto cyborg limbs, for example, ACES may give them a better sense of touch that begins to rival the human skin—or perhaps even exceed it.

Regardless, efforts that adapt the functionality of the human nervous system to machines are finally paying off, and more are sure to come. Neuromimetic ideas may very well be the link that finally closes the loop.

Image Credit: Dan Hixson/University of Utah College of Engineering.

Posted in Human Robots

#435313 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
Microsoft Invests $1 Billion in OpenAI to Pursue Holy Grail of Artificial Intelligence
James Vincent | The Verge
“‘The creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity,’ said [OpenAI cofounder] Sam Altman. ‘Our mission is to ensure that AGI technology benefits all of humanity, and we’re working with Microsoft to build the supercomputing foundation on which we’ll build AGI.’”

ROBOTICS
UPS Wants to Go Full-Scale With Its Drone Deliveries
Eric Adams | Wired
“If UPS gets its way, it’ll be known for vehicles other than its famous brown vans. The delivery giant is working to become the first commercial entity authorized by the Federal Aviation Administration to use autonomous delivery drones without any of the current restrictions that have governed the aerial testing it has done to date.”

SYNTHETIC BIOLOGY
Scientists Can Finally Build Feedback Circuits in Cells
Megan Molteni | Wired
“Network a few LOCKR-bound molecules together, and you’ve got a circuit that can control a cell’s functions the same way a PID computer program automatically adjusts the pitch of a plane. With the right key, you can make cells glow or blow themselves apart. You can send things to the cell’s trash heap or zoom them to another cellular zip code.”

ENERGY
Carbon Nanotubes Could Increase Solar Efficiency to 80 Percent
David Grossman | Popular Mechanics
“Obviously, that sort of efficiency rating is unheard of in the world of solar panels. But even though a proof of concept is a long way from being used in the real world, any further developments in the nanotubes could bolster solar panels in ways we haven’t seen yet.”

FUTURE
What Technology Is Most Likely to Become Obsolete During Your Lifetime?
Daniel Kolitz | Gizmodo
“Old technology seldom just goes away. Whiteboards and LED screens join chalk blackboards, but don’t eliminate them. Landline phones get scarce, but not phones. …And the technologies that seem to be the most outclassed may come back as the cult objects of aficionados—the vinyl record, for example. All this is to say that no one can tell us what will be obsolete in fifty years, but probably a lot less will be obsolete than we think.”

NEUROSCIENCE
The Human Brain Project Hasn’t Lived Up to Its Promise
Ed Yong | The Atlantic
“The HBP, then, is in a very odd position, criticized for being simultaneously too grandiose and too narrow. None of the skeptics I spoke with was dismissing the idea of simulating parts of the brain, but all of them felt that such efforts should be driven by actual research questions. …Countless such projects could have been funded with the money channeled into the HBP, which explains much of the furor around the project.”

Image Credit: Aron Van de Pol / Unsplash


#435260 How Tech Can Help Curb Emissions by ...

Trees are a low-tech, high-efficiency way to offset much of humankind’s negative impact on the climate. What’s even better, we have plenty of room for a lot more of them.

A new study conducted by researchers at Switzerland’s ETH-Zürich, published in Science, details how Earth could support almost an additional billion hectares of trees without the new forests pushing into existing urban or agricultural areas. Once the trees grow to maturity, they could store more than 200 billion metric tons of carbon.

Great news indeed, but it still leaves us with some huge unanswered questions. Where and how are we going to plant all the new trees? What kind of trees should we plant? How can we ensure that the new forests become a boon for people in those areas?

Answers to all of the above likely involve technology.

Math + Trees = Challenges
The ETH-Zürich research team combined Google Earth mapping software with a database of nearly 80,000 existing forests to create a predictive model for optimal planting locations. In total, 0.9 billion hectares of new, continuous forest could be planted. Once mature, the 500 billion new trees in these forests would be capable of storing about two-thirds of the carbon we have emitted since the industrial revolution.
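
A quick sanity check relating the study’s headline figures (all rounded):

```python
# Sanity check relating the study's headline numbers (rounded figures).

HECTARES = 0.9e9        # new continuous forest area, in hectares
TREES = 500e9           # new trees at maturity
CARBON_TONNES = 200e9   # metric tons of carbon stored once mature

print(f"{TREES / HECTARES:.0f} trees per hectare")
print(f"{CARBON_TONNES / TREES:.1f} tonnes of carbon per tree")
```

That works out to roughly 556 trees per hectare, each storing on the order of 0.4 tonnes of carbon over its lifetime.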

Other researchers have noted that the study may overestimate how efficient trees are at storing carbon, as well as underestimate how much carbon humans have emitted over time. However, all seem to agree that new forests would offset much of our cumulative carbon emissions—still an impressive feat as the target of keeping global warming this century at under 1.5 degrees Celsius becomes harder and harder to reach.

Recently, there was a story about a Brazilian couple who replanted trees in the valley where they live. The couple planted about 2.7 million trees in two decades. Back-of-the-napkin math shows that they planted an average of 370 trees a day, meaning planting 500 billion trees at that rate would take about 3.7 million years. While an over-simplification, the point is that planting trees by hand is not realistic: even a million people each planting 370 trees a day would need nearly four years of uninterrupted work. Current technologies are also not likely to be able to meet the challenge, especially in remote locations.
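
The back-of-the-napkin arithmetic can be checked directly:

```python
# Back-of-the-napkin check of the planting arithmetic (rounded figures
# from the article; the per-day rate is a long-run average).

TREES_PLANTED = 2_700_000   # the couple's total over two decades
YEARS = 20
DAYS_PER_YEAR = 365

per_day = TREES_PLANTED / (YEARS * DAYS_PER_YEAR)
print(f"Average rate: {per_day:.0f} trees/day")  # → 370

TARGET = 500_000_000_000    # 500 billion trees

years_solo = TARGET / per_day / DAYS_PER_YEAR
print(f"One couple working alone: {years_solo / 1e6:.1f} million years")  # → 3.7

PLANTERS = 1_000_000
years_crowd = TARGET / (PLANTERS * per_day) / DAYS_PER_YEAR
print(f"A million people at the same rate: {years_crowd:.1f} years")  # → 3.7
```

Even the most optimistic scenario assumes a million people planting full-time, every day, for years on end.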

Tree-Bombing Drones
Technology can speed up the planting process, including a new generation of drones that take tree planting to the skies. Drone planting generally involves dropping biodegradable seed pods at a designated area. The pods dissolve over time, and the tree seeds grow in the earth below. DroneSeed is one example; its 55-pound drones can plant up to 800 seeds an hour. Another startup, Biocarbon Engineering, has used various techniques, including drones, to plant 38 different species of trees across three continents.

Drone planting has distinct advantages when it comes to planting in hard-to-access areas—one example is mangrove forests, which are disappearing rapidly, increasing the risk of floods and storm surges.

Challenges include increasing the range and speed of drone planting and, perhaps most importantly, the success rate: automatic planting from a height makes it harder to control the depth at which seeds end up in the soil. Even so, drones are already showing impressive sapling survival rates.

AI, Sensors, and Eye-In-the-Sky
Planting the trees is the first step in a long road toward an actual forest. Companies are leveraging artificial intelligence and satellite imagery in a multitude of ways to increase protection and understanding of forested areas.

20tree.ai, a Portugal-based startup, uses AI to analyze satellite imagery and monitor the state of entire forests at a fraction of the cost of manual monitoring. The approach can lead to faster identification of threats like pest infestation and a better understanding of the state of forests.

AI can also play a pivotal role in protecting existing forest areas by predicting where deforestation is likely to occur.

Closer to the ground—and sometimes in it—new networks of sensors can provide detailed information about the state and needs of trees. One such project is Trace, where individual trees are equipped with a TreeTalker, an internet of things-based device that can provide real-time monitoring of the tree’s functions and well-being. The information can be used to, among other things, optimize the use of available resources, such as providing the exact amount of water a tree needs.

Budding Technologies Are Controversial
Trees are in many ways flora’s marathon runners—slow-growing and sturdy, but still susceptible to sickness and pests. Many deforested areas are likely not as rich in nutrients as they once were, which could slow down reforestation. Much of the positive impact that new trees could have on carbon levels in the atmosphere is likely decades away.

Bioengineering, for example through CRISPR, could provide solutions, making trees more resistant and faster-growing. Such technologies are being explored in relation to Ghana’s at-risk cocoa trees. Other exponential technologies could also hold much future potential—for instance micro-robots to assist the dwindling number of bees with pollination.

These technologies remain mired in controversy, and perhaps rightfully so. Bioengineering’s massive potential is for many offset by the inherent risks of engineered plants out-competing existing flora or growing beyond our control. Micro-robots for pollination may solve a problem, but don’t do much to address the root cause: that we seem to be disrupting and destroying integral parts of natural cycles.

Tech Not The Whole Answer
So, is it realistic to plant 500 billion new trees? The short answer would be that yes, it’s possible—with the help of technology.

However, there are many unanswered challenges. For example, many of the areas identified by the ETH-Zürich research team are not readily available for reforestation. Some are currently reserved for grazing, others are owned by private entities, and still others are located in remote areas or areas prone to political instability, beyond the reach of most replanting efforts.

If we do wish to plant 500 billion trees to offset some of the negative impacts we have had on the planet, we might well want to combine the best of exponential technology with reforestation as well as a move to other forms of agriculture.

Such an approach might also help address a major issue: that few of the proposed new forests will likely succeed without ensuring that people living in and around the areas where reforestation takes place become involved, and can reap rewards from turning arable land into forests.

Image Credit: Lillac/Shutterstock.com


#435110 5 Coming Breakthroughs in Energy and ...

The energy and transportation industries are being aggressively disrupted by converging exponential technologies.

In just five days, the sun provides Earth with an energy supply exceeding all proven reserves of oil, coal, and natural gas. Capturing just 1 part in 8,000 of this available solar energy would allow us to meet 100 percent of our energy needs.

As we leverage renewable energy supplied by the sun, wind, geothermal sources, and eventually fusion, we are rapidly heading towards a future where 100 percent of our energy needs will be met by clean tech in just 30 years.

During the past 40 years, solar prices have dropped 250-fold. And as these costs plummet, solar panel capacity continues to grow exponentially.
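
Taken at face value, a 250-fold drop over 40 years implies a steady compound rate of decline, which is easy to back out (an illustration only; actual prices fell unevenly):

```python
# What a 250-fold price drop over 40 years implies as a steady compound
# rate of decline. An illustration only; real prices fell unevenly.

fold_drop = 250
years = 40

annual_factor = (1 / fold_drop) ** (1 / years)  # price multiplier per year
annual_decline = 1 - annual_factor
print(f"Implied average decline: {annual_decline:.1%} per year")  # → 12.9% per year
```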

On the heels of energy abundance, we are additionally witnessing a new transportation revolution, which sets the stage for a future of seamlessly efficient travel at lower economic and environmental costs.

Top 5 Transportation Breakthroughs (2019-2024)
Entrepreneur and inventor Ramez Naam is my go-to expert on all things energy and environment. Currently serving as the Energy Co-Chair at Singularity University, Naam is the award-winning author of five books, including the Nexus series of science fiction novels. Having spent 13 years at Microsoft, his software has touched the lives of over a billion people. Naam holds over 20 patents, including several shared with co-inventor Bill Gates.

In the next five years, he forecasts five transportation trends and five energy trends, each poised to disrupt major players and birth entirely new business models.

Let’s dive in.

Autonomous cars drive 1 billion miles on US roads. Then 10 billion

Alphabet’s Waymo alone has already reached 10 million miles driven in the US. The 600 Waymo vehicles on public roads drive a total of 25,000 miles each day, and computer simulations provide an additional 25,000 virtual cars driving constantly. Since its launch in December, the Waymo One service has transported over 1,000 pre-vetted riders in the Phoenix area.

With more training miles, the accuracy of these cars continues to improve. GM Cruise has improved its disengagement rate by 321 percent since last year and trails close behind Waymo, with only one human intervention per 5,025 miles self-driven.

Autonomous taxis as a service in top 20 US metro areas

Along with its first quarterly earnings released last week, Lyft recently announced that it would expand its Waymo partnership with the upcoming deployment of 10 autonomous vehicles in the Phoenix area. While individuals previously had to partake in Waymo’s “early rider program” prior to trying Waymo One, the Lyft partnership will allow anyone to ride in a self-driving vehicle without a prior NDA.

Strategic partnerships will grow increasingly essential between automakers, self-driving tech companies, and rideshare services. Ford is currently working with Volkswagen, and Nvidia now collaborates with Daimler (Mercedes) and Toyota. Just last week, GM Cruise raised another $1.15 billion at a $19 billion valuation as the company aims to launch a ride-hailing service this year.

“They’re going to come to the Bay Area, Los Angeles, Houston, other cities with relatively good weather,” notes Naam. “In every major city within five years in the US and in some other parts of the world, you’re going to see the ability to hail an autonomous vehicle as a ride.”

Cambrian explosion of vehicle formats

Naam explains, “If you look today at the average ridership of a taxi, a Lyft, or an Uber, it’s about 1.1 passengers plus the driver. So, why do you need a large four-seater vehicle for that?”

Small electric, autonomous pods that seat as few as two people will begin to emerge, satisfying the majority of ride-hailing demands we see today. At the same time, larger communal vehicles will appear, such as Uber Express, that will undercut even the cheapest of transportation methods—buses, trams, and the like. Finally, last-mile scooter transit (or simply short-distance walks) might connect you to communal pick-up locations.

By 2024, an unimaginably diverse range of vehicles will arise to meet every possible need, regardless of distance or destination.

Drone delivery for lightweight packages in at least one US city

Wing, the Alphabet drone delivery startup, recently became the first company to gain approval from the Federal Aviation Administration (FAA) to make deliveries in the US. Having secured approval to deliver to 100 homes in Canberra, Australia, Wing additionally plans to begin delivering goods from local businesses in the suburbs of Virginia.

The current state of drone delivery is best suited for lightweight, urgent-demand payloads like pharmaceuticals, thumb drives, or connectors. And as Amazon continues to decrease its Prime delivery times—now as speedy as a one-day turnaround in many cities—the use of drones will become essential.

Robotic factories drive onshoring of US factories… but without new jobs

The supply chain will continue to shorten and become more agile with the re-onshoring of manufacturing jobs in the US and other countries. Naam reasons that new management and software jobs will drive this shift, as these roles develop the necessary robotics to manufacture goods. Equally as important, these robotic factories will provide a more humane setting than many of the current manufacturing practices overseas.

Top 5 Energy Breakthroughs (2019-2024)

First “1 cent per kWh” deals for solar and wind signed

Ten years ago, the lowest price of solar and wind power fell between 10 and 12 cents per kilowatt-hour (kWh), over twice the price of wholesale power from coal or natural gas.

Today, the gap between solar/wind power and fossil fuel-generated electricity is nearly negligible in many parts of the world. In G20 countries, fossil fuel electricity costs between 5 and 17 cents per kWh, while the average cost per kWh of solar power in the US stands at under 10 cents.

Spanish firm Solarpack Corp Technological recently won a bid in Chile for a 120 MW solar power plant supplying energy at 2.91 cents per kWh. This deal will result in an estimated 25 percent drop in energy costs for Chilean businesses by 2021.

Naam indicates, “We will see the first unsubsidized 1.0 cent solar deals in places like Chile, Mexico, the Southwest US, the Middle East, and North Africa, and we’ll see similar prices for wind in places like Mexico, Brazil, and the US Great Plains.”

Solar and wind will reach >15 percent of US electricity, and begin to drive all growth

Just over eight percent of energy in the US comes from solar and wind sources. In total, 17 percent of American energy is derived from renewable sources, while a whopping 63 percent is sourced from fossil fuels, and 17 percent from nuclear.

Last year in the U.K., twice as much energy was generated from wind as from coal. For over a week in May, the U.K. went completely coal-free, using wind and solar to supply 35 percent and 21 percent of power, respectively. While fossil fuels remain the primary electricity source, this week-long experiment highlights the disruptive potential of solar and wind power that major countries like the U.K. are beginning to emphasize.

“Solar and wind are still a relatively small part of the worldwide power mix, only about six percent. Within five years, it’s going to be 15 percent in the US and close to that worldwide,” Naam predicts. “We are nearing the point where we are not building any new fossil fuel power plants.”

It will be cheaper to build new solar/wind/batteries than to run on existing coal

Last October, Northern Indiana utility company NIPSCO announced its transition from a 65 percent coal-powered state to projected coal-free status by 2028. Importantly, this decision was made purely on the basis of financials, with an estimated $4 billion in cost savings for customers. The company has already begun several initiatives in solar, wind, and batteries.

NextEra, the largest power generator in the US, has taken on a similar goal, making a deal last year to purchase roughly seven million solar panels from JinkoSolar over four years. Leading power generators across the globe have vocalized a similar economic case for renewable energy.

ICE car sales have now peaked. All car sales growth will be electric

While electric vehicles (EV) have historically been more expensive for consumers than internal combustion engine-powered (ICE) cars, EVs are cheaper to operate and maintain. The yearly cost of operating an EV in the US is about $485, less than half the $1,117 cost of operating a gas-powered vehicle.
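
Those operating costs make the payback arithmetic simple. The upfront price premium below is a hypothetical figure for illustration, not from the article:

```python
# Illustrative payback calculation using the article's operating costs.
# The $8,000 upfront premium is a hypothetical figure for illustration,
# not a number from the article.

EV_ANNUAL_COST = 485       # USD/year to operate an EV (article figure)
ICE_ANNUAL_COST = 1_117    # USD/year to operate a gas car (article figure)
UPFRONT_PREMIUM = 8_000    # hypothetical extra sticker price of the EV

annual_savings = ICE_ANNUAL_COST - EV_ANNUAL_COST
payback_years = UPFRONT_PREMIUM / annual_savings
print(f"Annual savings: ${annual_savings}")          # → $632
print(f"Years to break even: {payback_years:.1f}")   # → 12.7
```

As battery prices shrink the premium, the break-even point moves toward zero, which is exactly the dynamic the next paragraph describes.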

And as battery prices continue to shrink, the upfront costs of EVs will decline until a long-term payoff calculation is no longer required to determine which type of car is the better investment. EVs will become the obvious choice.

Many experts, including Naam, believe that sales of ICE-powered vehicles peaked worldwide in 2018 and will begin to decline over the next five years, as the past five months of sales already suggest. At the same time, EVs are expected to quadruple their market share to 1.6 percent this year.

New storage technologies will displace Li-ion batteries for tomorrow’s most demanding applications

Lithium-ion batteries have dominated the battery market for decades, but Naam anticipates new storage technologies will take hold for different contexts. Flow batteries, which can collect and store solar and wind power at large scales, will supply city grids. California’s Independent System Operator, the nonprofit that maintains the majority of the state’s power grid, has already installed a flow battery system in San Diego.

Solid-state batteries, which use entirely solid electrolytes, will power mobile devices and cars. A growing roster of automakers, including Toyota, BMW, Honda, Hyundai, and Nissan, is already working on solid-state battery technology. Compared to lithium-ion batteries, these batteries offer up to six times faster charging, three times the energy density, and eight years of added lifespan.

Final Thoughts
Major advancements in transportation and energy technologies will continue to converge over the next five years. Case in point: Tesla’s recent announcement of its “robotaxi” fleet exemplifies the growing trend toward the joint pursuit of sustainability and autonomy.

On the connectivity front, 5G and next-generation mobile networks will continue to enable the growth of autonomous fleets, many of which will soon run on renewable energy sources. This growth demands important partnerships between energy storage manufacturers, automakers, self-driving tech companies, and ridesharing services.

In the eco-realm, increasingly obvious economic calculi will catalyze consumer adoption of autonomous electric vehicles. In just five years, Naam predicts that self-driving rideshare services will be cheaper than owning a private vehicle for urban residents. And by the same token, plummeting renewable energy costs will make these fuels far more attractive than fossil fuel-derived electricity.

As universally optimized AI systems cut down on traffic, aggregate time spent in vehicles will plummet, while the hours in your (or not your) car can go to any number of activities as autonomous systems steer the way. All the while, sharing an electric vehicle will cut down not only on your carbon footprint but also on the exorbitant costs swallowed by your previous SUV. How will you spend this extra time and money? What new natural resources will fuel your everyday life?

Join Me
Abundance-Digital Online Community: Stay ahead of technological advancements and turn your passion into action. Abundance Digital is now part of Singularity University. Learn more.

Image Credit: welcomia / Shutterstock.com


#435056 How Researchers Used AI to Better ...

A few years back, DeepMind’s Demis Hassabis famously prophesied that AI and neuroscience will positively feed into each other in a “virtuous circle.” If realized, this would fundamentally expand our insight into intelligence, both machine and human.

We’ve already seen some proofs of concept, at least in the brain-to-AI direction. For example, memory replay, a biological mechanism that fortifies our memories during sleep, also boosted AI learning when abstractly appropriated into deep learning models. Reinforcement learning, loosely based on our motivation circuits, is now behind some of AI’s most powerful tools.

Hassabis is about to be proven right again.

Last week, two studies independently tapped into the power of artificial neural networks (ANNs) to solve a 70-year-old neuroscience mystery: how does our visual system perceive reality?

The first, published in Cell, used generative networks to evolve DeepDream-like images that hyper-activate complex visual neurons in monkeys. These machine artworks are pure nightmare fuel to the human eye; but together, they revealed a fundamental “visual hieroglyph” that may form a basic rule for how we piece together visual stimuli to process sight into perception.

In the second study, a team used a deep ANN model—one thought to mimic biological vision—to synthesize new patterns tailored to control certain networks of visual neurons in the monkey brain. When directly shown to monkeys, the team found that the machine-generated artworks could reliably activate predicted populations of neurons. Future improved ANN models could allow even better control, giving neuroscientists a powerful noninvasive tool to study the brain. The work was published in Science.

The individual results, though fascinating, aren’t necessarily the point. Rather, they illustrate how scientists are now striving to complete the virtuous circle: tapping AI to probe natural intelligence. Vision is only the beginning—the tools can potentially be expanded into other sensory domains. And the more we understand about natural brains, the better we can engineer artificial ones.

It’s a “great example of leveraging artificial intelligence to study organic intelligence,” commented Dr. Roman Sandler at Kernel.co on Twitter.

Why Vision?
ANNs and biological vision have quite the history.

In the late 1950s, the legendary neuroscientist duo David Hubel and Torsten Wiesel became some of the first to use mathematical equations to understand how neurons in the brain work together.

In a series of experiments—many using cats—the team carefully dissected the structure and function of the visual cortex. Using myriad images, they revealed that vision is processed in a hierarchy: neurons in “earlier” brain regions, those closer to the eyes, tend to activate when they “see” simple patterns such as lines. As we move deeper into the brain, from the early V1 to a nub located slightly behind our ears, the inferotemporal (IT) cortex, neurons increasingly respond to more complex or abstract patterns, including faces, animals, and objects. The discovery led some scientists to call certain IT neurons “Jennifer Aniston cells,” which fire in response to pictures of the actress regardless of lighting, angle, or haircut. That is, IT neurons somehow extract visual information into the “gist” of things.
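The idea of stacked feature detectors can be sketched in code. Below is a minimal, purely illustrative Python toy (not from either study): a first layer of “simple cells” fires on local edges in a 1-D signal, and a second layer pools those responses over wider windows, gaining the position tolerance that deeper visual areas show.

```python
def layer1(signal):
    # "Simple cells": fire on local dark-to-light transitions (edges).
    return [max(0.0, signal[i + 1] - signal[i]) for i in range(len(signal) - 1)]

def layer2(features, width=3):
    # "Complex cells": pool over a wider window, so the response is
    # invariant to the exact position of the edge within that window.
    return [max(features[i:i + width]) for i in range(len(features) - width + 1)]

signal = [0, 0, 1, 1, 0, 0, 1, 0]
f1 = layer1(signal)   # local edge responses
f2 = layer2(f1)       # position-tolerant responses over wider receptive fields
```

Each layer sees a larger patch of the input than the one below it, which is the structural intuition Hubel and Wiesel uncovered and that feed-forward ANNs later borrowed.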

That’s not trivial. How complex neural connections increasingly abstract what we see into what we think we see—what we perceive—is also a central question in machine vision: how can we teach machines to transform numbers encoding stimuli into dots, lines, and angles that eventually form “perceptions” and “gists”? The answer could transform self-driving cars, facial recognition, and other computer vision applications as they learn to better generalize.

Hubel and Wiesel’s Nobel-prize-winning studies heavily influenced the birth of ANNs and deep learning. Many early “feed-forward” ANN architectures were based on our visual system; even today, the idea of increasing layers of abstraction, whether for perception or reasoning, guides computer scientists building AI that can better generalize. The early romance between vision and deep learning is perhaps the bond that kicked off our current AI revolution.

It only seems fair that AI would feed back into vision neuroscience.

Hieroglyphs and Controllers
In the Cell study, a team led by Dr. Margaret Livingstone at Harvard Medical School tapped into generative networks to unravel IT neurons’ complex visual alphabet.

Scientists have long known that neurons in earlier visual regions (V1) tend to fire in response to “grating patches” oriented in certain ways. Using a limited set of these patches like letters, V1 neurons can “express a visual sentence” and represent any image, said Dr. Arash Afraz at the National Institutes of Health, who was not involved in the study.

But how IT neurons operate remained a mystery. Here, the team used a combination of genetic algorithms and deep generative networks to “evolve” computer art for every studied neuron. In seven monkeys, the team implanted electrodes into various parts of the visual IT region so that they could monitor the activity of a single neuron.

The team showed each monkey an initial set of 40 images. They then picked the top 10 images that triggered the highest neural activity and combined them with 30 new images to “evolve” the next generation. After 250 generations, the technique, dubbed XDREAM, generated a slew of images that mashed up contorted face-like shapes with lines, gratings, and abstract forms.
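The evolutionary loop described above can be sketched as follows. Note that everything here is a hypothetical stand-in: the real XDREAM mutates latent codes of a deep generative network, and each image is scored by the firing rate recorded from the implanted electrode, not by any formula.

```python
import random

random.seed(0)

IMG_LEN = 64        # stand-in: an "image" is a flat list of pixel values
POP_SIZE = 40       # initial set of 40 images, as in the study
N_PARENTS = 10      # top 10 highest-activating images survive each round
N_CHILDREN = 30     # 30 new images generated each round
GENERATIONS = 250

def neuron_response(img):
    # Hypothetical stand-in for the recorded firing rate of one IT neuron;
    # in the experiment this value comes from the electrode.
    return sum(p * (i % 7 - 3) for i, p in enumerate(img))

def random_image():
    return [random.uniform(-1, 1) for _ in range(IMG_LEN)]

def mutate_crossover(a, b):
    # Naive pixel-level recombination plus noise; the real system operates
    # on generative-network latent codes, not raw pixels.
    return [random.choice(pair) + random.gauss(0, 0.05) for pair in zip(a, b)]

population = [random_image() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    ranked = sorted(population, key=neuron_response, reverse=True)
    parents = ranked[:N_PARENTS]
    children = [mutate_crossover(random.choice(parents), random.choice(parents))
                for _ in range(N_CHILDREN)]
    population = parents + children

best = max(population, key=neuron_response)  # the "evolved" optimal image
```

Because the top scorers are carried over unchanged each generation, the best image’s response can only climb, which is what lets the procedure converge on a neuron’s preferred stimulus without ever modeling the neuron itself.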

This image shows the evolution of an optimum image for stimulating a visual neuron in a monkey. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“The evolved images look quite counter-intuitive,” explained Afraz. Some clearly show detailed structures that resemble natural images, while others show complex structures that can’t be characterized by our puny human brains.

This figure shows natural images (right) and images evolved by neurons in the inferotemporal cortex of a monkey (left). Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“What started to emerge during each experiment were pictures that were reminiscent of shapes in the world but were not actual objects in the world,” said study author Carlos Ponce. “We were seeing something that was more like the language cells use with each other.”

This image was evolved by a neuron in the inferotemporal cortex of a monkey using AI. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
Although IT neurons don’t seem to use a simple letter-like alphabet, they do rely on a vast array of characters, akin to hieroglyphs or Chinese characters, “each loaded with more information,” said Afraz.

The adaptive nature of XDREAM turns it into a powerful tool to probe the inner workings of our brains—particularly for revealing discrepancies between biology and models.

The Science study, led by Dr. James DiCarlo at MIT, took a similar approach. Using ANNs to generate new patterns and images, the team was able to selectively predict and independently control neuron populations in a high-level visual region called V4.

“So far, what has been done with these models is predicting what the neural responses would be to other stimuli that they have not seen before,” said study author Dr. Pouya Bashivan. “The main difference here is that we are going one step further and using the models to drive the neurons into desired states.”
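Driving neurons “into desired states” amounts to optimizing a stimulus against the model’s predicted response. Below is a heavily simplified sketch, assuming a toy linear readout in place of the team’s deep ANN; with a real network, the per-pixel gradient would come from backpropagation rather than being read directly off the weights.

```python
import math

# Hypothetical stand-in for a trained ANN's prediction of one V4 site's
# response: a fixed linear readout over 64 pixels.
WEIGHTS = [math.sin(i) for i in range(64)]

def predicted_response(x):
    return sum(w * p for w, p in zip(WEIGHTS, x))

def synthesize(steps=100, lr=0.1, bound=1.0):
    # Gradient ascent on the image: nudge each pixel in the direction that
    # increases the predicted response, clipping to valid pixel range.
    x = [0.0] * len(WEIGHTS)
    for _ in range(steps):
        # d(response)/dx_i = WEIGHTS[i] for this linear model.
        x = [max(-bound, min(bound, p + lr * w)) for p, w in zip(x, WEIGHTS)]
    return x

stimulus = synthesize()  # the "controller" image for this model neuron
```

The synthesized image is then shown to the animal; how closely the recorded neurons match the model’s prediction measures how much visual knowledge the ANN has actually captured.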

It suggests that our current ANN models of visual computation “implicitly capture a great deal of visual knowledge” that we can’t easily articulate, but that the brain uses to turn visual information into perception, the authors said. By testing AI-generated images on biological vision, the team also showed that today’s ANNs generalize to a meaningful degree. The results could help engineer even more accurate ANN models of biological vision, which in turn could feed back into machine vision.

“One thing is clear already: Improved ANN models … have led to control of a high-level neural population that was previously out of reach,” the authors said. “The results presented here have likely only scratched the surface of what is possible with such implemented characterizations of the brain’s neural networks.”

To Afraz, the power of AI here is to find cracks in human perception—both our computational models of sensory processes, as well as our evolved biological software itself. AI can be used “as a perfect adversarial tool to discover design cracks” of IT, said Afraz, such as finding computer art that “fools” a neuron into thinking the object is something else.

“As artificial intelligence researchers develop models that work as well as the brain does—or even better—we will still need to understand which networks are more likely to behave safely and further human goals,” said Ponce. “More efficient AI can be grounded by knowledge of how the brain works.”

Image Credit: Sangoiri / Shutterstock.com

Posted in Human Robots