Tag Archives: is

#440046 3 Ways Artificial Intelligence Is ...

UPS drivers almost never turn left, and you probably shouldn’t either. That’s not because their CEO is superstitious. With the help of artificial intelligence, the company has calculated that turning left costs it 10 million more gallons of fuel and 20,000 more tonnes of carbon dioxide than turning right or going straight. Today, …

The post 3 Ways Artificial Intelligence Is Transforming Logistics appeared first on TFOT. Continue reading

Posted in Human Robots

#439908 Why Facebook (Or Meta) Is Making Tactile ...

Facebook, or Meta as it's now calling itself for some reason that I don't entirely understand, is today announcing some new tactile sensing hardware for robots. Or, new-ish, at least—there's a ruggedized and ultra low-cost GelSight-style fingertip sensor, plus a nifty new kind of tactile sensing skin based on suspended magnetic particles and machine learning. It's cool stuff, but why?
Obviously, Facebook Meta cares about AI, because it uses AI to try and do a whole bunch of the things that it's unwilling or unable to devote the time of actual humans to. And to be fair, there are some things that AI may be better at (or at least more efficient at) than humans are. AI is of course much worse than humans at many, many, many things as well, but that debate goes well beyond Facebook Meta and certainly well beyond the scope of this article, which is about tactile sensing for robots. So why does Facebook Meta care even a little bit about making robots better at touching stuff? Yann LeCun, the Chief AI Scientist at Facebook Meta, takes a crack at explaining it:
Before I joined Facebook, I was chatting with Mark Zuckerberg and I asked him, “is there any area related to AI that you think we shouldn't be working on?” And he said, “I can't find any good reason for us to work on robotics.” And so, that was kind of the start of Facebook AI Research—we were not going to work on robotics.

After a few years, it became clear that a lot of interesting progress in AI was happening in the context of robotics, because robotics is the nexus of where people in AI research are trying to get the full loop of perception, reasoning, planning, and action, and getting feedback from the environment. Doing it in the real world is where the problems are concentrated, and you can't play games if you want robots to learn quickly.

It was clear that four or five years ago, there was no business reason to work on robotics, but the business reasons have kind of popped up. Robotics could be used for telepresence, for maintaining data centers more automatically, but the more important aspect of it is making progress towards intelligent agents, the kinds of things that could be used in the metaverse, in augmented reality, and in virtual reality. That's really one of the raisons d'être of a research lab, to foresee the domains that will be important in the future. So that's the motivation.

Well, okay, but none of that seems like a good justification for research into tactile sensing specifically. But according to LeCun, it's all about putting together the pieces required for some level of fundamental world understanding, a problem that robotic systems are still bad at and that machine learning has so far not been able to tackle:
How to get machines to learn that model of the world that allows them to predict in advance and plan what's going to happen as a consequence of their actions is really the crux of the problem here. And this is something you have to confront if you work on robotics. But it's also something you have to confront if you want to have an intelligent agent acting in a virtual environment that can interact with humans in a natural way. And one of the long-term visions of augmented reality, for example, is virtual agents that basically are with you all the time, living in your augmented reality glasses or your smartphone or your laptop or whatever, helping you in your daily life as a human assistant would do, but also can answer any question you have. And that system will have to have some degree of understanding of how the world works—some degree of common sense, and be smart enough to not be frustrating to talk to. And that is where all of this research leads in the long run, whether the environment is real or virtual.

AI systems (robots included) are very, very dumb in very, very specific ways, quite often the ways in which humans are least understanding and forgiving of. This is such a well-established thing that there's a name for it: Moravec's paradox. Humans are great at subconscious levels of world understanding that we've built up over years and years of experience being, you know, alive. AI systems have none of this, and there isn't necessarily a clear path to getting them there, but one potential approach is to start with the fundamentals in the same way that a shiny new human does and build from there, a process that must necessarily include touch.

The DIGIT touch sensor is based on the GelSight style of sensor, which was first conceptualized at MIT over a decade ago. The basic concept of these kinds of tactile sensors is that they're able to essentially convert a touch problem into a vision problem: an array of LEDs illuminate a squishy finger pad from the back, and when the squishy finger pad pushes against something with texture, that texture squishes through to the other side of the finger pad where it's illuminated from many different angles by the LEDs. A camera up inside of the finger takes video of this, resulting in a very rainbow but very detailed picture of whatever the finger pad is squishing against.
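If you're curious how that very rainbow picture becomes geometry, the classic trick behind GelSight-style sensors is photometric stereo: with several known light directions and a roughly Lambertian (matte) gel, per-pixel surface normals fall out of a little linear algebra. Here's a minimal sketch; the LED directions and the one-LED-per-color-channel setup are my assumptions for illustration, not necessarily DIGIT's actual calibration.

```python
import numpy as np

# Assumed setup: three LEDs at known directions illuminate the gel, and
# each color channel of the camera image captures one illumination.
# Under a Lambertian model, intensity = albedo * (light_dir . normal),
# so with three known light directions the surface normal at every pixel
# comes from inverting a 3x3 system.

L = np.array([             # direction vectors toward each LED (assumed)
    [0.8, 0.0, 0.6],
    [-0.4, 0.7, 0.6],
    [-0.4, -0.7, 0.6],
])

def normals_from_rgb(img):
    """img: H x W x 3 float array, one channel per LED."""
    h, w, _ = img.shape
    intensities = img.reshape(-1, 3).T        # 3 x (H*W)
    g = np.linalg.solve(L, intensities)       # albedo-scaled normals
    albedo = np.linalg.norm(g, axis=0) + 1e-8
    n = (g / albedo).T.reshape(h, w, 3)       # unit surface normals
    return n, albedo.reshape(h, w)

# Integrating the recovered normal field then yields a depth map, which
# is how the squished texture turns into a detailed 3D contact image.
```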

The DIGIT paper published last year summarizes the differences between this new sensor and previous versions of GelSight:

DIGIT improves over existing GelSight sensors in several ways: by providing a more compact form factor that can be used on multi-finger hands, improving the durability of the elastomer gel, and making design changes that facilitate large-scale, repeatable production of the sensor hardware to facilitate tactile sensing research.
DIGIT is open source, so you can make one on your own, but that's a hassle. The really big news here is that GelSight itself (an MIT spinoff which commercialized the original technology) will be commercially manufacturing DIGIT sensors, providing a standardized and low-cost option for tactile sensing. The bill of materials for each DIGIT sensor is about US $15 if you were to make a thousand of them, so we're expecting that the commercial version won't cost much more than that.

The other hardware announcement is ReSkin, a tactile sensing skin developed in collaboration with Carnegie Mellon. Like DIGIT, the idea is to make an open source, robust, and very low cost system that will allow researchers to focus on developing the software to help robots make sense of touch rather than having to waste time on their own hardware.
ReSkin operates on a fairly simple concept: it's a flexible sheet of 2mm-thick silicone with magnetic particles randomly mixed in. The sheet sits on top of a magnetometer, and whenever the sheet deforms (like if something touches it), the magnetic particles embedded in the sheet get squooshed and the magnetic signal changes, which is picked up by the magnetometer. Crucially, the sheet doesn't have to be directly connected to the magnetometer for this to work. That makes the part of the ReSkin sensor that's most likely to get damaged super easy to replace—just peel it off, slap on another one, and you're good to go.
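The machine learning part is essentially a regression problem: map distortions of the magnetic field to where the skin was touched and how hard. Here's a rough sketch of that framing; the sensor count, architecture, and data below are placeholders of mine, not ReSkin's actual models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_mags = 5000, 5

# Fake training data: flux deltas from 5 three-axis magnetometers,
# labeled with contact position (x, y) and normal force.
X = rng.normal(size=(n_samples, n_mags * 3))
y = rng.normal(size=(n_samples, 3))             # [x, y, force]

# A small learned model maps magnetic readings to contact estimates.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200)
model.fit(X, y)

# At run time: read the magnetometers, subtract the no-contact baseline,
# and ask the model where (and how hard) the touch happened.
reading = rng.normal(size=(1, n_mags * 3))
x_pos, y_pos, force = model.predict(reading)[0]
```

Part of what the learning has to buy you is robustness to exactly the variation that peel-and-replace introduces: no two sheets have the same particle distribution, so the model needs to generalize across skins rather than memorize one.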

I get that touch is an integral part of the humanish world understanding that Facebook Meta is working towards, but for most of us, touch is much more nuanced than raw tactile data collection, because we experience everything that we touch within the world understanding that we've built up through the integration of all of our other senses as well. I asked Roberto Calandra, one of the authors of the paper on DIGIT, what he thought about this:
I believe that we certainly want to have multimodal sensing in the same way that humans do. Humans use cues from touch, cues from vision, and also cues from audio, and we are able to very smartly put these different sensor modalities together. And if I ask you to imagine how touching an object is going to feel, you can sort of imagine that. You can also tell me the shape of something that you are touching; you are able to somehow recognize it. So there is very clearly a multisensorial representation that we are learning and using as humans, and it's very likely that this is also going to be very important for the embodied agents that we want to develop and deploy.

Calandra also noted that they still have plenty of work to do to get DIGIT closer in form factor and capability to a human finger, which is an aspiration that I often hear from roboticists. But I always wonder: why bother? Like, why constrain robots (which can do all kinds of things that humans cannot) to do things in a human-like way, when we can instead leverage creative sensing and actuation to potentially give them superhuman capabilities? Here's what Calandra thinks:
I don't necessarily believe that a human hand is the way to go. I do believe that the human hand is possibly the golden standard that we should compare against. Can we do at least as good as a human hand? Beyond that, I actually do believe that over the years, the decades, or maybe the centuries, robots will have the possibility of developing superhuman hardware, in the same way that we can put infrared sensors or laser scanners on a robot, why shouldn't we also have mechanical hardware which is superior?
I think there has been a lot of really cool work on soft robotics, for example, on how to build tentacles that can imitate an octopus. So it's a very natural question—if we want to have a robot, why should it have hands and not tentacles? And the answer to this is, it depends on what the purpose is. Do we want robots that can perform the same functions as humans, or do we want robots which are specialized for doing particular tasks? We will see when we get there.

So there you have it—the future of manipulation is 100% sometimes probably tentacles. Continue reading

Posted in Human Robots

#439783 This Google-Funded Project Is Tracking ...

It’s crunch time on climate change. The IPCC’s latest report told the world just how bad it is, and…it’s bad. Companies, NGOs, and governments are scrambling for fixes, both short-term and long-term, from banning the sale of combustion-engine vehicles to pouring money into hydrogen to building direct air capture plants. And one initiative, launched last week, is taking an “if you can name it, you can tame it” approach by creating an independent database that measures and tracks emissions all over the world.

Climate TRACE, which stands for Tracking Real-time Atmospheric Carbon Emissions, is a collaboration between nonprofits, tech companies, and universities (including CarbonPlan, Earthrise Alliance, and the Johns Hopkins Applied Physics Laboratory), along with former US Vice President Al Gore and others. The organization started thanks to a grant from Google, which funded an effort to measure power plant emissions using satellites. A team of fellows from Google helped build algorithms to monitor the power plants (the Google.org Fellowship was created in 2019 to let Google employees do pro bono technical work for grant recipients).

Climate TRACE uses data from satellites and other remote sensing technologies to “see” emissions. Artificial intelligence algorithms combine this data with verifiable emissions measurements to produce estimates of the total emissions coming from various sources.
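In spirit, this is supervised learning: train on sources whose emissions are independently verified, then infer emissions for everything else the satellites can see. A toy sketch of that framing follows; the features, model choice, and numbers are illustrative inventions of mine, not Climate TRACE's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical features per power plant: thermal plume intensity, visible
# vapor, capacity-factor proxy, etc. Labels come from the subset of
# plants with verified ground-truth emissions measurements.
X_train = np.random.rand(1000, 4)          # satellite-derived features
y_train = np.random.rand(1000) * 500       # verified tons CO2 (fake data)

model = GradientBoostingRegressor().fit(X_train, y_train)

# Apply the learned mapping to plants that never self-report.
X_unmonitored = np.random.rand(200, 4)
estimated = model.predict(X_unmonitored)
print(f"estimated total: {estimated.sum():.0f} tons CO2")
```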

These sources are divided into ten sectors—like power, manufacturing, transportation, and agriculture—each with multiple subsectors (e.g., two subsectors of agriculture are rice cultivation and manure management). The total carbon emitted from January 2015 to December 2020, by the project’s estimation, was 303.96 billion tons. The biggest offender? Electricity generation. It’s no wonder, then, that states, companies, and countries are rushing to make (occasionally unrealistic) carbon-neutral pledges, and that the renewable energy industry is booming.

The founders of the initiative hope that, by increasing transparency, the database will increase accountability, thereby spurring action. Younger consumers care about climate change, and are likely to push companies and brands to do something about it.

The BBC reported that in a recent survey led by the UK’s Bath University, almost 60 percent of respondents said they were “very worried” or “extremely worried” about climate change, while more than 45 percent said feelings about the climate affected their daily lives. The survey, which received responses from 10,000 people aged 16 to 25, found that concern is highest among young people in the global south, while in the northern hemisphere the most worried respondents are in Portugal, which has grappled with severe wildfires. Many of the survey respondents, independent of location, reportedly feel that “humanity is doomed.”

Once this demographic reaches working age, they’ll be able to throw their weight around, and it seems likely they’ll do so in a way that puts the planet and its future at center stage. For all its sanctimoniousness, “naming and shaming” of emitters not doing their part may end up being both necessary and helpful.

Until now, Climate TRACE’s website points out, emissions inventories have been largely self-reported (I mean, what’s even the point?), and they’ve used outdated information and opaque measurement methods. Besides being independent, which is huge in itself, TRACE is using 59 trillion bytes of data from more than 300 satellites, more than 11,100 sensors, and other sources of emissions information.

“We’ve established a shared, open monitoring system capable of detecting essentially all forms of humanity’s greenhouse gas emissions,” said Gavin McCormick, executive director of coalition convening member WattTime. “This is a transformative step forward that puts timely information at the fingertips of all those who seek to drive significant emissions reductions on our path to net zero.”

Given the scale of the project, the parties involved, and how quickly it has all come together—the grant from Google was in May 2019—it seems Climate TRACE is well-positioned to make a difference.

Image Credit: NASA Continue reading

Posted in Human Robots

#439773 How the U.S. Army Is Turning Robots Into ...

This article is part of our special report on AI, “The Great AI Reckoning.”

“I should probably not be standing this close,” I think to myself, as the robot slowly approaches a large tree branch on the floor in front of me. It's not the size of the branch that makes me nervous—it's that the robot is operating autonomously, and that while I know what it's supposed to do, I'm not entirely sure what it will do. If everything works the way the roboticists at the U.S. Army Research Laboratory (ARL) in Adelphi, Md., expect, the robot will identify the branch, grasp it, and drag it out of the way. These folks know what they're doing, but I've spent enough time around robots that I take a small step backwards anyway.

The robot, named RoMan, for Robotic Manipulator, is about the size of a large lawn mower, with a tracked base that helps it handle most kinds of terrain. At the front, it has a squat torso equipped with cameras and depth sensors, as well as a pair of arms that were harvested from a prototype disaster-response robot originally developed at NASA's Jet Propulsion Laboratory for a DARPA robotics competition. RoMan's job today is roadway clearing, a multistep task that ARL wants the robot to complete as autonomously as possible. Instead of instructing the robot to grasp specific objects in specific ways and move them to specific places, the operators tell RoMan to “go clear a path.” It's then up to the robot to make all the decisions necessary to achieve that objective.

The ability to make decisions autonomously is not just what makes robots useful, it's what makes robots robots. We value robots for their ability to sense what's going on around them, make decisions based on that information, and then take useful actions without our input. In the past, robotic decision making followed highly structured rules—if you sense this, then do that. In structured environments like factories, this works well enough. But in chaotic, unfamiliar, or poorly defined settings, reliance on rules makes robots notoriously bad at dealing with anything that could not be precisely predicted and planned for in advance.

RoMan, along with many other robots including home vacuums, drones, and autonomous cars, handles the challenges of semistructured environments through artificial neural networks—a computing approach that loosely mimics the structure of neurons in biological brains. About a decade ago, artificial neural networks began to be applied to a wide variety of semistructured data that had previously been very difficult for computers running rules-based programming (generally referred to as symbolic reasoning) to interpret. Rather than recognizing specific data structures, an artificial neural network is able to recognize data patterns, identifying novel data that are similar (but not identical) to data that the network has encountered before. Indeed, part of the appeal of artificial neural networks is that they are trained by example, by letting the network ingest annotated data and learn its own system of pattern recognition. For neural networks with multiple layers of abstraction, this technique is called deep learning.
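To make "trained by example" concrete, here's a minimal sketch: a small network that is never given an explicit rule for its two classes, only labeled examples, and that then classifies new points it has never seen. The toy dataset and network here are mine, purely for illustration.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Two interleaved crescents of points: no simple rule separates them,
# but labeled examples are easy to come by.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of abstraction -- a (very shallow) "deep" network.
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000)
net.fit(X_train, y_train)       # ingest annotated data, learn the pattern

# The learned pattern recognizer handles novel points that are similar
# (but not identical) to the examples it was trained on.
print("accuracy on unseen data:", net.score(X_test, y_test))
```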

Even though humans are typically involved in the training process, and even though artificial neural networks were inspired by the neural networks in human brains, the kind of pattern recognition a deep learning system does is fundamentally different from the way humans see the world. It's often nearly impossible to understand the relationship between the data input into the system and the interpretation of the data that the system outputs. And that difference—the “black box” opacity of deep learning—poses a potential problem for robots like RoMan and for the Army Research Lab.

This opacity means that robots that rely on deep learning have to be used carefully. A deep-learning system is good at recognizing patterns, but lacks the world understanding that a human typically uses to make decisions, which is why such systems do best when their applications are well defined and narrow in scope. “When you have well-structured inputs and outputs, and you can encapsulate your problem in that kind of relationship, I think deep learning does very well,” says Tom Howard, who directs the University of Rochester's Robotics and Artificial Intelligence Laboratory and has developed natural-language interaction algorithms for RoMan and other ground robots. “The question when programming an intelligent robot is, at what practical size do those deep-learning building blocks exist?” Howard explains that when you apply deep learning to higher-level problems, the number of possible inputs becomes very large, and solving problems at that scale can be challenging. And the potential consequences of unexpected or unexplainable behavior are much more significant when that behavior is manifested through a 170-kilogram two-armed military robot.

After a couple of minutes, RoMan hasn't moved—it's still sitting there, pondering the tree branch, arms poised like a praying mantis. For the last 10 years, the Army Research Lab's Robotics Collaborative Technology Alliance (RCTA) has been working with roboticists from Carnegie Mellon University, Florida State University, General Dynamics Land Systems, JPL, MIT, QinetiQ North America, University of Central Florida, the University of Pennsylvania, and other top research institutions to develop robot autonomy for use in future ground-combat vehicles. RoMan is one part of that process.

The “go clear a path” task that RoMan is slowly thinking through is difficult for a robot because the task is so abstract. RoMan needs to identify objects that might be blocking the path, reason about the physical properties of those objects, figure out how to grasp them and what kind of manipulation technique might be best to apply (like pushing, pulling, or lifting), and then make it happen. That's a lot of steps and a lot of unknowns for a robot with a limited understanding of the world.

This limited understanding is where the ARL robots begin to differ from other robots that rely on deep learning, says Ethan Stump, chief scientist of the AI for Maneuver and Mobility program at ARL. “The Army can be called upon to operate basically anywhere in the world. We do not have a mechanism for collecting data in all the different domains in which we might be operating. We may be deployed to some unknown forest on the other side of the world, but we'll be expected to perform just as well as we would in our own backyard,” he says. Most deep-learning systems function reliably only within the domains and environments in which they've been trained. Even if the domain is something like “every drivable road in San Francisco,” the robot will do fine, because that's a data set that has already been collected. But, Stump says, that's not an option for the military. If an Army deep-learning system doesn't perform well, they can't simply solve the problem by collecting more data.

ARL's robots also need to have a broad awareness of what they're doing. “In a standard operations order for a mission, you have goals, constraints, a paragraph on the commander's intent—basically a narrative of the purpose of the mission—which provides contextual info that humans can interpret and gives them the structure for when they need to make decisions and when they need to improvise,” Stump explains. In other words, RoMan may need to clear a path quickly, or it may need to clear a path quietly, depending on the mission's broader objectives. That's a big ask for even the most advanced robot. “I can't think of a deep-learning approach that can deal with this kind of information,” Stump says.

Robots at the Army Research Lab test autonomous navigation techniques in rough terrain [top, middle] with the goal of being able to keep up with their human teammates. ARL is also developing robots with manipulation capabilities [bottom] that can interact with objects so that humans don't have to. Photos: Evan Ackerman

While I watch, RoMan is reset for a second try at branch removal. ARL's approach to autonomy is modular, where deep learning is combined with other techniques, and the robot is helping ARL figure out which tasks are appropriate for which techniques. At the moment, RoMan is testing two different ways of identifying objects from 3D sensor data: UPenn's approach is deep-learning-based, while Carnegie Mellon is using a method called perception through search, which relies on a more traditional database of 3D models. Perception through search works only if you know exactly which objects you're looking for in advance, but training is much faster since you need only a single model per object. It can also be more accurate when perception of the object is difficult—if the object is partially hidden or upside-down, for example. ARL is testing these strategies to determine which is the most versatile and effective, letting them run simultaneously and compete against each other.
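As a minimal sketch of what perception through search looks like in code, imagine a database of 3D point-cloud models and a coarse search over yaw rotations; the distance metric and search scheme below are my simplifications, and the real CMU system is considerably more sophisticated.

```python
import numpy as np

def chamfer(a, b):
    """Symmetric average nearest-neighbor distance between point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def recognize(cloud, model_db, yaw_steps=36):
    """Search over known models and poses; return the best explanation
    of the sensor point cloud."""
    best_name, best_score = None, np.inf
    for name, model in model_db.items():        # one 3D model per object
        for k in range(yaw_steps):              # coarse pose search
            t = 2 * np.pi * k / yaw_steps
            R = np.array([[np.cos(t), -np.sin(t), 0],
                          [np.sin(t),  np.cos(t), 0],
                          [0,          0,         1]])
            score = chamfer(cloud, model @ R.T)
            if score < best_score:
                best_name, best_score = name, score
    return best_name, best_score

# It only works if the object is already in the database -- but a single
# model per object is all the "training" this approach needs.
```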

Perception is one of the things that deep learning tends to excel at. “The computer vision community has made crazy progress using deep learning for this stuff,” says Maggie Wigness, a computer scientist at ARL. “We've had good success with some of these models that were trained in one environment generalizing to a new environment, and we intend to keep using deep learning for these sorts of tasks, because it's the state of the art.”

ARL's modular approach might combine several techniques in ways that leverage their particular strengths. For example, a perception system that uses deep-learning-based vision to classify terrain could work alongside an autonomous driving system based on an approach called inverse reinforcement learning, where the model can rapidly be created or refined by observations from human soldiers. Traditional reinforcement learning optimizes a solution based on established reward functions, and is often applied when you're not necessarily sure what optimal behavior looks like. This is less of a concern for the Army, which can generally assume that well-trained humans will be nearby to show a robot the right way to do things. “When we deploy these robots, things can change very quickly,” Wigness says. “So we wanted a technique where we could have a soldier intervene, and with just a few examples from a user in the field, we can update the system if we need a new behavior.” A deep-learning technique would require “a lot more data and time,” she says.
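As a rough illustration of the inverse-reinforcement-learning flavor Wigness describes, here's a feature-matching sketch: the robot scores behavior with a weighted sum of driving features, and each soldier demonstration nudges those weights toward the behavior the human actually showed. The features, update rule, and names are hypothetical stand-ins, not ARL's implementation.

```python
import numpy as np

features = ["on_path", "near_cover", "rough_terrain", "exposed"]
w = np.zeros(len(features))                  # learned reward weights

def feature_counts(trajectory):
    """Average feature vector phi(s) over the states in a trajectory."""
    return np.mean([phi for phi in trajectory], axis=0)

def update_from_demo(w, demo_traj, robot_traj, lr=0.1):
    """Move reward weights so the demonstrated behavior scores higher
    than the robot's current behavior (a projection-style IRL step)."""
    return w + lr * (feature_counts(demo_traj) - feature_counts(robot_traj))

# One corrective intervention in the field supplies a new demonstration:
demo = [np.array([1, 1, 0, 0])] * 5    # human: stayed on path, in cover
robot = [np.array([1, 0, 1, 1])] * 5   # robot: strayed, got exposed
w = update_from_demo(w, demo, robot)
print(dict(zip(features, np.round(w, 2))))

# A handful of such updates re-shapes the reward model without
# collecting the large dataset a deep-learning technique would need.
```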

It's not just data-sparse problems and fast adaptation that deep learning struggles with. There are also questions of robustness, explainability, and safety. “These questions aren't unique to the military,” says Stump, “but it's especially important when we're talking about systems that may incorporate lethality.” To be clear, ARL is not currently working on lethal autonomous weapons systems, but the lab is helping to lay the groundwork for autonomous systems in the U.S. military more broadly, which means considering ways in which such systems may be used in the future.

Safety is an obvious priority, and yet there isn't a clear way of making a deep-learning system verifiably safe, according to Stump. “Doing deep learning with safety constraints is a major research effort. It's hard to add those constraints into the system, because you don't know where the constraints already in the system came from. So when the mission changes, or the context changes, it's hard to deal with that. It's not even a data question; it's an architecture question.” ARL's modular architecture, whether it's a perception module that uses deep learning or an autonomous driving module that uses inverse reinforcement learning or something else, can form parts of a broader autonomous system that incorporates the kinds of safety and adaptability that the military requires. Other modules in the system can operate at a higher level, using different techniques that are more verifiable or explainable and that can step in to protect the overall system from adverse unpredictable behaviors. “If other information comes in and changes what we need to do, there's a hierarchy there,” Stump says. “It all happens in a rational way.”

Nicholas Roy, who leads the Robust Robotics Group at MIT and describes himself as “somewhat of a rabble-rouser” due to his skepticism of some of the claims made about the power of deep learning, agrees with the ARL roboticists that deep-learning approaches often can't handle the kinds of challenges that the Army has to be prepared for. “The Army is always entering new environments, and the adversary is always going to be trying to change the environment so that the training process the robots went through simply won't match what they're seeing,” Roy says. “So the requirements of a deep network are to a large extent misaligned with the requirements of an Army mission, and that's a problem.”

Roy, who has worked on abstract reasoning for ground robots as part of the RCTA, emphasizes that deep learning is a useful technology when applied to problems with clear functional relationships, but when you start looking at abstract concepts, it's not clear whether deep learning is a viable approach. “I'm very interested in finding how neural networks and deep learning could be assembled in a way that supports higher-level reasoning,” Roy says. “I think it comes down to the notion of combining multiple low-level neural networks to express higher level concepts, and I do not believe that we understand how to do that yet.” Roy gives the example of using two separate neural networks, one to detect objects that are cars and the other to detect objects that are red. It's harder to combine those two networks into one larger network that detects red cars than it would be if you were using a symbolic reasoning system based on structured rules with logical relationships. “Lots of people are working on this, but I haven't seen a real success that drives abstract reasoning of this kind.”
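Roy's point is easy to see in code. In a symbolic system, composition is a single logical operator; with two separately trained networks, there is no principled equivalent, only heuristics. The stubs below are purely illustrative.

```python
# Symbolic composition: two predicates and one AND, done.
def is_red(obj):
    return obj["color"] == "red"

def is_car(obj):
    return obj["category"] == "car"

def is_red_car(obj):
    return is_red(obj) and is_car(obj)

print(is_red_car({"color": "red", "category": "car"}))   # True

# Neural "composition": thresholding two networks' scores is a heuristic,
# not a principled combination. The two learned representations share no
# structure, so in general you end up collecting red-car examples and
# training a new network instead.
def is_red_car_neural(image, red_net, car_net, threshold=0.5):
    return red_net(image) > threshold and car_net(image) > threshold
```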

For the foreseeable future, ARL is making sure that its autonomous systems are safe and robust by keeping humans around for both higher-level reasoning and occasional low-level advice. Humans might not be directly in the loop at all times, but the idea is that humans and robots are more effective when working together as a team. When the most recent phase of the Robotics Collaborative Technology Alliance program began in 2009, Stump says, “we'd already had many years of being in Iraq and Afghanistan, where robots were often used as tools. We've been trying to figure out what we can do to transition robots from tools to acting more as teammates within the squad.”

RoMan gets a little bit of help when a human supervisor points out a region of the branch where grasping might be most effective. The robot doesn't have any fundamental knowledge about what a tree branch actually is, and this lack of world knowledge (what we think of as common sense) is a fundamental problem with autonomous systems of all kinds. Having a human leverage our vast experience into a small amount of guidance can make RoMan's job much easier. And indeed, this time RoMan manages to successfully grasp the branch and noisily haul it across the room.

Turning a robot into a good teammate can be difficult, because it can be tricky to find the right amount of autonomy. Too little and it would take most or all of the focus of one human to manage one robot, which may be appropriate in special situations like explosive-ordnance disposal but is otherwise not efficient. Too much autonomy and you'd start to have issues with trust, safety, and explainability.

“I think the level that we're looking for here is for robots to operate on the level of working dogs,” explains Stump. “They understand exactly what we need them to do in limited circumstances, they have a small amount of flexibility and creativity if they are faced with novel circumstances, but we don't expect them to do creative problem-solving. And if they need help, they fall back on us.”

RoMan is not likely to find itself out in the field on a mission anytime soon, even as part of a team with humans. It's very much a research platform. But the software being developed for RoMan and other robots at ARL, called Adaptive Planner Parameter Learning (APPL), will likely be used first in autonomous driving, and later in more complex robotic systems that could include mobile manipulators like RoMan. APPL combines different machine-learning techniques (including inverse reinforcement learning and deep learning) arranged hierarchically underneath classical autonomous navigation systems. That allows high-level goals and constraints to be applied on top of lower-level programming. Humans can use teleoperated demonstrations, corrective interventions, and evaluative feedback to help robots adjust to new environments, while the robots can use unsupervised reinforcement learning to adjust their behavior parameters on the fly. The result is an autonomy system that can enjoy many of the benefits of machine learning, while also providing the kind of safety and explainability that the Army needs. With APPL, a learning-based system like RoMan can operate in predictable ways even under uncertainty, falling back on human tuning or human demonstration if it ends up in an environment that's too different from what it trained on.
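The details of APPL live in ARL's papers, but the general shape of "humans tune a classical planner through demonstration and intervention" can be sketched simply. Everything below, from the parameter names to the update rule, is a hypothetical illustration of the idea, not ARL's code.

```python
# Classical navigation keeps running throughout; learned components only
# nudge its parameters, and a human intervention provides the training
# signal when the current parameters are a poor fit for the environment.

params = {"max_speed": 1.0, "obstacle_margin": 0.5}   # planner knobs

def update_from_intervention(params, human_params, lr=0.3):
    """Corrective intervention: the operator briefly takes over, and the
    parameters implied by their driving pull the planner's toward them."""
    return {k: (1 - lr) * params[k] + lr * human_params[k] for k in params}

# In a cluttered environment the human drives slowly and gives obstacles
# a wide berth; after a few interventions, the planner does too.
params = update_from_intervention(params, {"max_speed": 0.4,
                                           "obstacle_margin": 0.9})
print(params)
```

Because the learned part only adjusts parameters of a classical system, the planner's behavior stays bounded and inspectable, which is one way to get the safety and explainability discussed above.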

It's tempting to look at the rapid progress of commercial and industrial autonomous systems (autonomous cars being just one example) and wonder why the Army seems to be somewhat behind the state of the art. But as Stump finds himself having to explain to Army generals, when it comes to autonomous systems, “there are lots of hard problems, but industry's hard problems are different from the Army's hard problems.” The Army doesn't have the luxury of operating its robots in structured environments with lots of data, which is why ARL has put so much effort into APPL, and into maintaining a place for humans. Going forward, humans are likely to remain a key part of the autonomous framework that ARL is developing. “That's what we're trying to build with our robotics systems,” Stump says. “That's our bumper sticker: 'From tools to teammates.' ”

This article appears in the October 2021 print issue as “Deep Learning Goes to Boot Camp.”

Continue reading

Posted in Human Robots

#439721 New Study Finds a Single Neuron Is a ...

Comparing brains to computers is a long and dearly held analogy in both neuroscience and computer science.

It’s not hard to see why.

Our brains can perform many of the tasks we want computers to handle with an easy, mysterious grace. So, it goes, understanding the inner workings of our minds can help us build better computers; and those computers can help us better understand our own minds. Also, if brains are like computers, knowing how much computation it takes them to do what they do can help us predict when machines will match minds.

Indeed, there’s already a productive flow of knowledge between the fields.

Deep learning, a powerful form of artificial intelligence, for example, is loosely modeled on the brain’s vast, layered networks of neurons.

You can think of each “node” in a deep neural network as an artificial neuron. Like neurons, nodes receive signals from other nodes connected to them and perform mathematical operations to transform input into output.

Depending on the signals a node receives, it may opt to send its own signal to all the nodes in its network. In this way, signals cascade through layer upon layer of nodes, and during training the weights on those connections are progressively tuned, sharpening the algorithm's output.

The brain works like this too. But the keyword above is loosely.
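For a sense of just how loosely: the arithmetic inside a single artificial node is, in full, a weighted sum passed through a squashing nonlinearity. Here's a minimal sketch; the sigmoid choice and the numbers are just for illustration.

```python
import numpy as np

def node(inputs, weights, bias):
    """One artificial neuron: weigh the incoming signals, sum them, and
    decide (via a nonlinearity) how strongly to fire in turn."""
    return 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))

# Three incoming signals, each scaled by a learned weight:
out = node(np.array([0.2, 0.9, -0.4]),
           np.array([1.5, -0.8, 2.0]),
           bias=0.1)
print(out)   # a value in (0, 1), passed on to the next layer's nodes
```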

Scientists know biological neurons are more complex than the artificial neurons employed in deep learning algorithms, but it’s an open question just how much more complex.

In a fascinating paper published recently in the journal Neuron, a team of researchers from the Hebrew University of Jerusalem tried to get us a little closer to an answer. They expected the results to show that biological neurons are more complex, but they were surprised at just how much more complex they actually are.

In the study, the team found it took a five- to eight-layer neural network, or nearly 1,000 artificial neurons, to mimic the behavior of a single biological neuron from the brain’s cortex.

Though the researchers caution the results are an upper bound for complexity—as opposed to an exact measurement of it—they also believe their findings might help scientists further zero in on what exactly makes biological neurons so complex. And that knowledge, perhaps, can help engineers design even more capable neural networks and AI.

“[The result] forms a bridge from biological neurons to artificial neurons,” Andreas Tolias, a computational neuroscientist at Baylor College of Medicine, told Quanta last week.

Amazing Brains
Neurons are the cells that make up our brains. There are many different types of neurons, but generally, they have three parts: spindly, branching structures called dendrites, a cell body, and a root-like axon.

On one end, dendrites connect to a network of other neurons at junctures called synapses. At the other end, the axon forms synapses with a different population of neurons. Each cell receives electrochemical signals through its dendrites, filters those signals, and then selectively passes along its own signals (or spikes).

To computationally compare biological and artificial neurons, the team asked: How big of an artificial neural network would it take to simulate the behavior of a single biological neuron?

First, they built a model of a biological neuron (in this case, a pyramidal neuron from a rat’s cortex). The model used some 10,000 differential equations to simulate how and when the neuron would translate a series of input signals into a spike of its own.

They then fed inputs into their simulated neuron, recorded the outputs, and trained deep learning algorithms on all the data. Their goal? Find the algorithm that could most accurately approximate the model.

(Video: A model of a pyramidal neuron (left) receives signals through its dendritic branches. In this case, the signals provoke three spikes.)

They increased the number of layers in the algorithm until it was 99 percent accurate at predicting the simulated neuron’s output given a set of inputs. The sweet spot was at least five layers but no more than eight, or around 1,000 artificial neurons per biological neuron. The deep learning algorithm was much simpler than the original model—but still quite complex.
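The search procedure is easy to summarize in sketch form: grow the network one layer at a time until it clears the accuracy bar. The function names below are hypothetical stand-ins for the study's actual training and evaluation setup.

```python
def smallest_sufficient_depth(train, evaluate, max_depth=10, target=0.99):
    """Grow the network one hidden layer at a time until it predicts the
    simulated neuron's spikes accurately enough on held-out inputs."""
    for depth in range(1, max_depth + 1):
        net = train(num_layers=depth)      # fit on (input, spike) data
        if evaluate(net) >= target:        # accuracy on unseen inputs
            return depth, net
    return None

# The sweet spot reported in the study landed between five and eight
# layers for the full pyramidal-neuron model.
```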

From where does this complexity arise?

As it turns out, it’s mostly due to a type of chemical receptor in dendrites—the NMDA ion channel—and the branching of dendrites in space. “Take away one of those things, and a neuron turns [into] a simple device,” lead author David Beniaguev tweeted in 2019, describing an earlier version of the work published as a preprint.

Indeed, after removing these features, the team found they could match the simplified biological model with but a single-layer deep learning algorithm.

A Moving Benchmark
It’s tempting to extrapolate the team’s results to estimate the computational complexity of the whole brain. But we’re nowhere near such a measure.

For one, it’s possible the team didn’t find the most efficient algorithm.

It’s common for the developer community to rapidly improve upon the first version of an advanced deep learning algorithm. Given the intensive iteration in the study, the team is confident in the results, but they have also released the model, data, and algorithm to the scientific community to see if anyone can do better.

Also, the model neuron is from a rat’s brain, as opposed to a human’s, and it’s only one type of brain cell. Further, the study is comparing a model to a model—there is, as of yet, no way to make a direct comparison to a physical neuron in the brain. It’s entirely possible the real thing is more, not less, complex.

Still, the team believes their work can push neuroscience and AI forward.

In the former case, the study is further evidence dendrites are complicated critters worthy of more attention. In the latter, it may lead to radical new algorithmic architectures.

Idan Segev, a coauthor on the paper, suggests engineers should try replacing the simple artificial neurons in today’s algorithms with a mini five-layer network simulating a biological neuron. “We call for the replacement of the deep network technology to make it closer to how the brain works by replacing each simple unit in the deep network today with a unit that represents a neuron, which is already—on its own—deep,” Segev said.

Whether so much added complexity would pay off is uncertain. Experts debate how much of the brain’s detail algorithms need to capture to achieve similar or better results.

But it’s hard to argue with millions of years of evolutionary experimentation. So far, following the brain’s blueprint has been a rewarding strategy. And if this work is any indication, future neural networks may well dwarf today’s in size and complexity.

Image Credit: NICHD/S. Jeong Continue reading

Posted in Human Robots