Tag Archives: virtual

#431925 How the Science of Decision-Making Will ...

Neuroscientist Brie Linkenhoker believes that leaders must be better prepared for future strategic challenges by continually broadening their worldviews.
As the director of Worldview Stanford, Brie and her team produce multimedia content and immersive learning experiences that make academic research and insights accessible and usable for curious leaders. These future-focused topics are designed to help them understand the forces shaping the future.
Worldview Stanford has tackled such interdisciplinary topics as the power of minds, the science of decision-making, environmental risk and resilience, and trust and power in the age of big data.
We spoke with Brie about why understanding our biases is critical to making better decisions, particularly in a time of increasing change and complexity.

Lisa Kay Solomon: What is Worldview Stanford?
Brie Linkenhoker: Leaders and decision makers are trying to navigate this complex hairball of a planet that we live on and that requires keeping up on a lot of diverse topics across multiple fields of study and research. Universities like Stanford are where that new knowledge is being created, but it’s not getting out and used as readily as we would like, so that’s what we’re working on.
Worldview is designed to expand our individual and collective worldviews about important topics impacting our future. Your worldview is not a static thing; it’s constantly changing. We believe it should be informed by lots of different perspectives, different cultures, and knowledge from different domains and disciplines. This is more important now than ever.
At Worldview, we create learning experiences that are an amalgamation of all of those things.
LKS: One of your marquee programs is the Science of Decision Making. Can you tell us about that course and why it’s important?
BL: We tend to think about decision makers as being people in leadership positions, but every person who works in your organization, every member of your family, every member of the community is a decision maker. You have to decide what to buy, who to partner with, what government regulations to anticipate.
You have to think not just about your own decisions, but you have to anticipate how other people make decisions too. So, when we set out to create the Science of Decision Making, we wanted to help people improve their own decisions and be better able to predict, understand, anticipate the decisions of others.

“I think in another 10 or 15 years, we’re probably going to have really rich models of how we actually make decisions and what’s going on in the brain to support them.”

We realized that the only way to do that was to combine a lot of different perspectives, so we recruited experts from economics, psychology, neuroscience, philosophy, biology, and religion. We also brought in cutting-edge research on artificial intelligence and virtual reality and explored conversations about how technology is changing how we make decisions today and how it might support our decision-making in the future.
There’s no single set of answers. There are as many unanswered questions as there are answered questions.
LKS: One of the other things you explore in this course is the role of biases and heuristics. Can you explain the importance of both in decision-making?
BL: When I was a strategy consultant, executives would ask me, “How do I get rid of the biases in my decision-making or my organization’s decision-making?” And my response would be, “Good luck with that. It isn’t going to happen.”
As human beings we make, probably, thousands of decisions every single day. If we had to be actively thinking about each one of those decisions, we wouldn’t get out of our house in the morning, right?
We have to be able to do a lot of our decision-making essentially on autopilot to free up cognitive resources for more difficult decisions. So, we’ve evolved in the human brain a set of what we understand to be heuristics or rules of thumb.
And heuristics are great in, say, 95 percent of situations. It’s in that remaining five percent, or maybe even one percent, of situations that they’re really not so great. That’s when we have to become aware of them, because in some situations they can become biases.
For example, it doesn’t matter so much that we’re not aware of our rules of thumb when we’re driving to work or deciding what to make for dinner. But they can become absolutely critical in situations where a member of law enforcement is making an arrest or where you’re making a decision about a strategic investment or even when you’re deciding who to hire.
Let’s take hiring for a moment.
How many years is a hire going to impact your organization? You’re potentially looking at 5, 10, 15, 20 years. Having the right person in a role could change the future of your business entirely. That’s one of those areas where you really need to be aware of your own heuristics and biases—and we all have them. There’s no getting rid of them.
LKS: We seem to be at a time when the boundaries between different disciplines are starting to blend together. How has the advancement of neuroscience helped us become better leaders? What do you see happening next?
BL: Heuristics and biases are very topical these days, thanks in part to Michael Lewis’s fantastic book, The Undoing Project, which tells the story of the groundbreaking work that Nobel Prize winner Danny Kahneman and Amos Tversky did on the psychology and biases of human decision-making. Their work gave rise to the whole new field of behavioral economics.
In the last 10 to 15 years, neuroeconomics has really taken off. Neuroeconomics is the combination of behavioral economics with neuroscience. In behavioral economics, they use economic games and economic choices that have numbers associated with them and have real-world application.
For example, they ask, “How much would you spend to buy A versus B?” Or, “If I offered you X dollars for this thing that you have, would you take it or would you say no?” So, it’s trying to look at human decision-making in a format that’s easy to understand and quantify within a laboratory setting.
Now you bring neuroscience into that. You can have people doing those same kinds of tasks—making those kinds of semi-real-world decisions—in a brain scanner, and we can now start to understand what’s going on in the brain while people are making decisions. You can ask questions like, “Can I look at the signals in someone’s brain and predict what decision they’re going to make?” That can help us build a model of decision-making.
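As a rough illustration of the kind of decoding analysis Brie describes, a simple classifier can be trained to predict a choice from per-trial brain activity. The sketch below uses synthetic data and invented feature counts purely for illustration; a real study would use preprocessed fMRI or EEG features for each trial.

```python
# Sketch: predicting a binary choice ("accept"/"reject" an offer) from
# per-trial brain signals. The data is synthetic; a real study would use
# preprocessed fMRI or EEG features for each trial.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 200, 50

brain_signals = rng.normal(size=(n_trials, n_features))  # hypothetical activity per trial
choices = (brain_signals[:, :5].sum(axis=1)
           + rng.normal(scale=0.5, size=n_trials) > 0).astype(int)  # hypothetical decisions

decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, brain_signals, choices, cv=5).mean()
print(f"Cross-validated decoding accuracy: {accuracy:.2f}")
```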
I think in another 10 or 15 years, we’re probably going to have really rich models of how we actually make decisions and what’s going on in the brain to support them. That’s very exciting for a neuroscientist.
Image Credit: Black Salmon / Shutterstock.com


#431899 Darker Still: Black Mirror’s New ...

The key difference between science fiction and fantasy is that science fiction is grounded in scientific fact, and therefore possible, while fantasy is not. This is what makes Black Mirror both an entertaining and terrifying work of science fiction. Created by Charlie Brooker, the anthology series tells cautionary tales of emerging technology that could one day be an integral part of our everyday lives.
While watching the often alarming episodes, one can’t help but recognize the eerie similarities to some of the tech tools that are already abundant in our lives today. In fact, many previous Black Mirror predictions are already becoming reality.
The latest season of Black Mirror was arguably darker than ever. This time, Brooker seemed to focus on the ethical implications of one particular area: neurotechnology.
Emerging Neurotechnology
Warning: The remainder of this article may contain spoilers from Season 4 of Black Mirror.
Most of the storylines from season four revolve around neurotechnology and brain-machine interfaces. They are based in a world where people have the power to upload their consciousness onto machines, have fully immersive experiences in virtual reality, merge their minds with other minds, record others’ memories, and even track what others are thinking, feeling, and doing.
How can all this ever be possible? Well, these capabilities are already being developed by pioneers and researchers globally. Early last year, Elon Musk unveiled Neuralink, a company whose goal is to merge the human mind with AI through a neural lace. We’ve already connected two brains via the internet, allowing one brain to communicate with another. Various research teams have been able to develop mechanisms for “reading minds” or reconstructing memories of individuals via devices. The list goes on.
With many of the technologies we see in Black Mirror it’s not a question of if, but when. Futurist Ray Kurzweil has predicted that by the 2030s we will be able to upload our consciousness onto the cloud via nanobots that will “provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the internet, and otherwise greatly expand human intelligence.” While other experts continue to challenge Kurzweil on the exact year we’ll accomplish this feat, with the current exponential growth of our technological capabilities, we’re on track to get there eventually.
Ethical Questions
As always, technology is only half the conversation. Equally fascinating are the many ethical and moral questions this topic raises.
For instance, with the increasing convergence of artificial intelligence and virtual reality, we have to ask ourselves whether our morality from the physical world transfers equally into the virtual world. The first episode of season four, USS Callister, tells the story of a VR pioneer, Robert Daly, who creates breakthrough AI and VR to satisfy his personal frustrations and sexual urges. He uses the DNA of his coworkers (and their children) to re-create them digitally in his virtual world, which he escapes to in order to torture them, while their real-world counterparts remain oblivious.
Audiences are left asking themselves: should what happens in the digital world be considered any less “real” than the physical world? How do we know if the individuals in the virtual world (who are ultimately based on algorithms) have true feelings or sentiments? Have they been developed to exhibit characteristics associated with suffering, or can they really feel suffering? Fascinatingly, these questions point to the hard problem of consciousness—the question of if, why, and how a given physical process generates the specific experience it does—which remains a major mystery in neuroscience.
Towards the end of USS Callister, the hostages of Daly’s virtual world attempt to escape through suicide, committing an act that will delete the code that allows them to exist. This raises yet another mind-boggling ethical question: if we “delete” code that constitutes a digital being, should that be considered murder (or suicide, in this case)? Why shouldn’t it? When we murder someone we are, in essence, taking away their capacity to live and to be, without their consent. By unplugging a self-aware AI, wouldn’t we be violating its basic right to live in the same way? Does AI, as code, even have rights?
Brain implants can also have a radical impact on our self-identity and how we define the word “I”. In the episode Black Museum, instead of witnessing just one horror, we get a series of scares in little segments. One of those segments tells the story of a father who attempts to reincarnate the mother of his child by uploading her consciousness into his mind and allowing her to live in his head (essentially giving him multiple personality disorder). In this way, she can experience special moments with their son.
With “no privacy for him, and no agency for her” the good intention slowly goes very wrong. This story raises a critical question: should we be allowed to upload consciousness into limited bodies? Even more, if we are to upload our minds into “the cloud,” at what point do we lose our individuality to become one collective being?
These questions can form the basis of hours of debate, but we’re just getting started. There are no right or wrong answers with many of these moral dilemmas, but we need to start having such discussions.
The Downside of Dystopian Sci-Fi
Like last season’s San Junipero, one episode of the series, Hang the DJ, had an uplifting ending. Yet the overwhelming majority of the stories in Black Mirror continue to focus on the darkest side of human nature, feeding into the pre-existing paranoia of the general public. There is certainly some value in this; it’s important to be aware of the dangers of technology. After all, what better way to explore these dangers before they occur than through speculative fiction?
A big takeaway from every tale told in the series is that the greatest threat to humanity does not come from technology, but from ourselves. Technology itself is not inherently good or evil; it all comes down to how we choose to use it as a society. So for those of you who are techno-paranoid, beware, for it’s not the technology you should fear, but the humans who get their hands on it.
While we can paint negative visions for the future, though, it is also important to paint positive ones. The kind of visions we set for ourselves have the power to inspire and motivate generations. Many people are inherently pessimistic when thinking about the future, and that pessimism in turn can shape their contributions to humanity.
While utopia may not exist, the future of our species could and should be one of solving global challenges, abundance, prosperity, liberation, and cosmic transcendence. Now that would be a thrilling episode to watch.
Image Credit: Billion Photos / Shutterstock.com


#431872 AI Uses Titan Supercomputer to Create ...

You don’t have to dig too deeply into the archive of dystopian science fiction to uncover the horror that intelligent machines might unleash. The Matrix and The Terminator are probably the most well-known examples of self-replicating, intelligent machines attempting to enslave or destroy humanity in the process of building a brave new digital world.
The prospect of artificially intelligent machines creating other artificially intelligent machines took a big step forward in 2017. However, we’re far from the runaway technological singularity futurists are predicting by mid-century or earlier, let alone murderous cyborgs or AI avatar assassins.
The first big boost this year came from Google. The tech giant announced it was developing automated machine learning (AutoML), writing algorithms that can do some of the heavy lifting by identifying the right neural networks for a specific job. Now researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, have developed an AI system that can generate neural networks as good as, if not better than, any developed by a human in less than a day.
It can take months for the brainiest, best-paid data scientists to develop deep learning software, which sends data through a complex web of mathematical algorithms. The system is modeled after the human brain and known as an artificial neural network. Even Google’s AutoML took weeks to design a superior image recognition system, one of the more standard operations for AI systems today.
Computing Power
Of course, Google Brain project engineers only had access to 800 graphics processing units (GPUs), a type of computer hardware that works especially well for deep learning. Nvidia, which pioneered the development of GPUs, is considered the gold standard in today’s AI hardware architecture. Titan, the supercomputer at ORNL, boasts more than 18,000 GPUs.
The ORNL research team’s algorithm, called MENNDL for Multinode Evolutionary Neural Networks for Deep Learning, isn’t designed to create AI systems that cull cute cat photos from the internet. Instead, MENNDL is a tool for testing and training thousands of potential neural networks to work on unique science problems.
That requires a different approach from the Google and Facebook AI platforms of the world, notes Steven Young, a postdoctoral research associate at ORNL who is on the team that designed MENNDL.
“We’ve discovered that those [neural networks] are very often not the optimal network for a lot of our problems, because our data, while it can be thought of as images, is different,” he explains to Singularity Hub. “These images, and the problems, have very different characteristics from object detection.”
AI for Science
One application of the technology involved a particle physics experiment at the Fermi National Accelerator Laboratory. Fermilab researchers are interested in understanding neutrinos, high-energy subatomic particles that rarely interact with normal matter but could be a key to understanding the early formation of the universe. One Fermilab experiment involves taking a sort of “snapshot” of neutrino interactions.
The team wanted the help of an AI system that could analyze and classify Fermilab’s detector data. MENNDL evaluated 500,000 neural networks in 24 hours. Its final solution proved superior to custom models developed by human scientists.
In another case involving a collaboration with St. Jude Children’s Research Hospital in Memphis, MENNDL improved the error rate of a human-designed algorithm for identifying mitochondria inside 3D electron microscopy images of brain tissue by 30 percent.
“We are able to do better than humans in a fraction of the time at designing networks for these sort of very different datasets that we’re interested in,” Young says.
What makes MENNDL particularly adept is its ability to define the best or most optimal hyperparameters—the key variables—to tackle a particular dataset.
“You don’t always need a big, huge deep network. Sometimes you just need a small network with the right hyperparameters,” Young says.
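MENNDL’s own code isn’t shown here, but the evolutionary idea it embodies can be sketched roughly as follows: score a population of candidate network “genomes,” keep the fittest, and mutate them to form the next generation. The parameter ranges and fitness function below are invented stand-ins; in practice, fitness would mean training each candidate network on the target dataset.

```python
# Toy evolutionary search over network hyperparameters, in the spirit of
# MENNDL (not its actual code): score a population of candidate "genomes",
# keep the fittest, and mutate them to form the next generation.
import random

def random_genome():
    return {
        "layers": random.randint(1, 6),
        "units": random.choice([16, 32, 64, 128]),
        "learning_rate": 10 ** random.uniform(-4, -1),
    }

def mutate(genome):
    child = dict(genome)
    key = random.choice(list(child))
    child[key] = random_genome()[key]  # re-sample one hyperparameter
    return child

def fitness(genome):
    # Stand-in for "train this architecture and return validation accuracy".
    return -abs(genome["layers"] - 3) - 10 * abs(genome["learning_rate"] - 0.01)

population = [random_genome() for _ in range(20)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print("Best genome found:", max(population, key=fitness))
```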
A Virtual Data Scientist
That’s not dissimilar to the approach of a company called H2O.ai, a startup out of Silicon Valley that uses open source machine learning platforms to “democratize” AI. It applies machine learning to create business solutions for Fortune 500 companies, including some of the world’s biggest banks and healthcare companies.
“Our software is more [about] pattern detection, let’s say anti-money laundering or fraud detection or which customer is most likely to churn,” Dr. Arno Candel, chief technology officer at H2O.ai, tells Singularity Hub. “And that kind of insight-generating software is what we call AI here.”
The company’s latest product, Driverless AI, promises to deliver the data scientist equivalent of a chessmaster to its customers (the company claims several such grandmasters in its employ and advisory board). In other words, the system can analyze a raw dataset and, like MENNDL, automatically identify what features should be included in the computer model to make the most of the data based on the best “chess moves” of its grandmasters.
“So we’re using those algorithms, but we’re giving them the human insights from those data scientists, and we automate their thinking,” he explains. “So we created a virtual data scientist that is relentless at trying these ideas.”
Inside the Black Box
Not unlike how the human brain reaches a conclusion, it’s not always possible to understand how a machine, despite being designed by humans, reaches its own solutions. The lack of transparency is often referred to as the AI “black box.” Experts like Young say we can learn something about the evolutionary process of machine learning by generating millions of neural networks and seeing what works well and what doesn’t.
“You’re never going to be able to completely explain what happened, but maybe we can better explain it than we currently can today,” Young says.
Transparency is built into the “thought process” of each particular model generated by Driverless AI, according to Candel.
The computer even explains itself to the user in plain English at each decision point. There is also real-time feedback that allows users to prioritize features, or parameters, to see how the changes improve the accuracy of the model. For example, the system may include data from people in the same zip code as it creates a model to describe customer turnover.
“That’s one of the advantages of our automatic feature engineering: it’s basically mimicking human thinking,” Candel says. “It’s not just neural nets that magically come up with some kind of number, but we’re trying to make it statistically significant.”
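This isn’t the product’s actual pipeline, but the general pattern Candel describes can be sketched roughly like this: propose a candidate engineered feature, such as a zip-code-level aggregate, and keep it only if it improves a cross-validated score. The data, column names, and model below are illustrative assumptions.

```python
# Sketch of automatic feature engineering: propose a candidate feature
# (an average-spend-by-zip-code aggregate) and keep it only if it improves
# a cross-validated churn model. Data and column names are illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "zip_code": rng.choice(["94103", "10001", "60601"], size=n),
    "monthly_spend": rng.gamma(2.0, 50.0, size=n),
    "support_calls": rng.poisson(1.5, size=n),
})
df["churned"] = (df["support_calls"] + rng.normal(size=n) > 2).astype(int)

def score(feature_cols):
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, df[feature_cols], df["churned"], cv=5).mean()

base_features = ["monthly_spend", "support_calls"]
baseline = score(base_features)

# Candidate engineered feature: average spend of customers in the same zip code.
df["zip_avg_spend"] = df.groupby("zip_code")["monthly_spend"].transform("mean")
with_candidate = score(base_features + ["zip_avg_spend"])

print(f"baseline={baseline:.3f}  with zip feature={with_candidate:.3f}  "
      f"keep={with_candidate > baseline}")
```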
Moving Forward
Much digital ink has been spilled over the dearth of skilled data scientists, so automating certain design aspects for developing artificial neural networks makes sense. Experts agree that automation alone won’t solve that particular problem. However, it will free computer scientists to tackle more difficult issues, such as parsing the inherent biases that exist within the data used by machine learning today.
“I think the world has an opportunity to focus more on the meaning of things and not on the laborious tasks of just fitting a model and finding the best features to make that model,” Candel notes. “By automating, we are pushing the burden back for the data scientists to actually do something more meaningful, which is think about the problem and see how you can address it differently to make an even bigger impact.”
The team at ORNL expects it can also make bigger impacts beginning next year when the lab’s next supercomputer, Summit, comes online. While Summit will boast only 4,600 nodes, it will sport the latest and greatest GPU technology from Nvidia and CPUs from IBM. That means it will deliver more than five times the computational performance of Titan, the world’s fifth-most powerful supercomputer today.
“We’ll be able to look at much larger problems on Summit than we were able to with Titan and hopefully get to a solution much faster,” Young says.
It’s all in a day’s work.
Image Credit: Gennady Danilkin / Shutterstock.com


#431866 The Technologies We’ll Have Our Eyes ...

It’s that time of year again when our team has a little fun and throws on our futurist glasses to look ahead at some of the technologies and trends we’re most anticipating next year.
Whether the implications of a technology are vast or it resonates with one of us personally, here’s the list from some of the Singularity Hub team of what we have our eyes on as we enter the new year.
For a little refresher, these were the technologies our team was fired up about at the start of 2017.
Tweet us the technology you’re excited to watch in 2018 at @SingularityHub.
Cryptocurrency and Blockchain
“Given all the noise Bitcoin is making globally in the media, it is driving droves of main street investors to dabble in and learn more about cryptocurrencies. This will continue to raise valuations and drive adoption of blockchain. From Bank of America recently getting a blockchain-based patent approved to the Australian Securities Exchange’s plan to use blockchain, next year is going to be chock-full of these stories. Coindesk even recently spotted a patent filing from Apple involving blockchain. From ‘China’s Ethereum’, NEO, to IOTA to Golem to Qtum, there are a lot of interesting cryptos to follow given the immense numbers of potential applications. Hang on, it’s going to be a bumpy ride in 2018!”
–Kirk Nankivell, Website Manager
There Is No One Technology to Watch
“Next year may be remembered for advances in gene editing, blockchain, AI—or most likely all these and more. There is no single technology to watch. A number of consequential trends are advancing and converging. This general pace of change is exciting, and it also contributes to spiking anxiety. Technology’s invisible lines of force are extending further and faster into our lives and subtly subverting how we view the world and each other in unanticipated ways. Still, all the near-term messiness and volatility, the little and not-so-little dramas, the hype and disillusion, the controversies and conflict, all that smooths out a bit when you take a deep breath and a step back, and it’s my sincere hope and belief the net result will be more beneficial than harmful.”
–Jason Dorrier, Managing Editor
‘Fake News’ Fighting Technology
“It’s been a wild ride for the media this year, with the term ‘fake news’ moving from the public’s periphery into mainstream vocabulary. The spread of ‘fake news’ is often blamed on media outlets, but social media platforms and search engines are often responsible too. (Facebook still won’t identify as a media company—maybe next year?) Yes, technology can contribute to spreading false information, but it can also help stop it. From technologists who are building in-article ‘trust indicator’ features, to artificial intelligence systems that can both spot and shut down fake news early on, I’m hopeful we can create new solutions to this huge problem. One step further: if publishers step up to fix this we might see some faith restored in the media.”
–Alison E. Berman, Digital Producer
Pay-as-You-Go Home Solar Power
“People in rural African communities are increasingly bypassing electrical grids (which aren’t even an option in many cases) and installing pay-as-you-go solar panels on their homes. The companies offering these services are currently not subject to any regulations, though they’re essentially acting as a utility. As demand for power grows, they’ll have to come up with ways to efficiently scale, and to balance the humanitarian and capitalistic aspects of their work. It’s fascinating to think traditional grids may never be necessary in many areas of the continent thanks to this technology.”
–Vanessa Bates Ramirez, Associate Editor
Virtual Personal Assistants
“AI is clearly going to rule our lives, and in many ways it already makes us look like clumsy apes. Alexa, Siri, and Google Assistant are promising first steps toward a world of computers that understand us and relate to us on an emotional level. I crave the day when my Apple Watch coaches me into healthier habits, lets me know about new concerts nearby, speaks to my self-driving Lyft on my behalf, and can help me respond effectively to aggravating emails based on communication patterns. But let’s not brush aside privacy concerns and the implications of handing over our personal data to megacorporations. The scariest thing here is that privacy laws and advertising ethics do not accommodate this level of intrusive data hoarding.”
–Matthew Straub, Director of Digital Engagement (Hub social media)
Solve for Learning: Educational Apps for Children in Conflict Zones
“I am most excited by exponential technology when it is used to help solve a global grand challenge. Educational apps are currently being developed to help solve for learning by increasing accessibility to learning opportunities for children living in conflict zones. Many children in these areas are not receiving an education, with girls being 2.5 times more likely than boys to be out of school. The EduApp4Syria project is developing apps to help children in Syria and Kashmir learn in their native languages. Mobile phones are increasingly available in these areas, and the apps are available offline for children who do not have consistent access to mobile networks. The apps are low-cost, easily accessible, and scalable educational opportunities.”
–Paige Wilcoxson, Director, Curriculum & Learning Design
Image Credit: Triff / Shutterstock.com


#431836 Do Our Brains Use Deep Learning to Make ...

The first time Dr. Blake Richards heard about deep learning, he was convinced that he wasn’t just looking at a technique that would revolutionize artificial intelligence. He also knew he was looking at something fundamental about the human brain.
That was the early 2000s, and Richards was taking a course with Dr. Geoff Hinton at the University of Toronto. Hinton, a pioneer architect of the algorithm that would later take the world by storm, was offering an introductory course on his learning method inspired by the human brain.
The key words here are “inspired by.” Despite Richards’ conviction, the odds were stacked against him. The human brain, as it happens, seems to lack a critical function that’s programmed into deep learning algorithms. On the surface, the algorithms were violating basic biological facts already proven by neuroscientists.
But what if, superficial differences aside, deep learning and the brain are actually compatible?
Now, in a new study published in eLife, Richards, working with DeepMind, proposed a new algorithm based on the biological structure of neurons in the neocortex. Also known as the cortex, this outermost region of the brain is home to higher cognitive functions such as reasoning, prediction, and flexible thought.
The team networked their artificial neurons together into a multi-layered network and challenged it with a classic computer vision task—identifying hand-written numbers.
The new algorithm performed well. But the kicker is that it analyzed the learning examples in a way that’s characteristic of deep learning algorithms, even though it was completely based on the brain’s fundamental biology.
“Deep learning is possible in a biological framework,” concludes the team.
Because the model is only a computer simulation at this point, Richards hopes to pass the baton to experimental neuroscientists, who could actively test whether the algorithm operates in an actual brain.
If so, the data could then be passed back to computer scientists to work out the next generation of massively parallel and low-energy algorithms to power our machines.
It’s a first step towards merging the two fields back into a “virtuous circle” of discovery and innovation.
The blame game
While you’ve probably heard of deep learning’s recent wins against humans in the game of Go, you might not know the nitty-gritty behind the algorithm’s operations.
In a nutshell, deep learning relies on an artificial neural network with virtual “neurons.” Like a towering skyscraper, the network is structured into hierarchies: lower-level neurons process aspects of an input—for example, a horizontal or vertical stroke that eventually forms the number four—whereas higher-level neurons extract more abstract aspects of the number four.
To teach the network, you give it examples of what you’re looking for. The signal propagates forward in the network (like climbing up a building), where each neuron works to fish out something fundamental about the number four.
Like children trying to learn a skill the first time, initially the network doesn’t do so well. It spits out what it thinks a universal number four should look like—think a Picasso-esque rendition.
But here’s where the learning occurs: the algorithm compares the output with the ideal output, and computes the difference between the two (dubbed “error”). This error is then “backpropagated” throughout the entire network, telling each neuron: hey, this is how far off you were, so try adjusting your computation closer to the ideal.
Millions of examples and tweakings later, the network inches closer to the desired output and becomes highly proficient at the trained task.
This error signal is crucial for learning. Without efficient “backprop,” the network doesn’t know which of its neurons are off kilter. By assigning blame, the AI can better itself.
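For readers who want to see the blame game in code, here is a minimal two-layer network in NumPy: a forward pass, an error computed against the ideal output, and backpropagated weight updates. It’s a toy illustration of the mechanism described above, not any particular production framework.

```python
# Minimal backprop sketch: a two-layer network learns a toy mapping.
# Forward pass -> compare output to target -> propagate the error backwards
# -> nudge each weight in the direction that reduces the error.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(64, 8))                  # toy inputs
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # toy targets

W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))
lr = 0.5

for step in range(500):
    # Forward pass: the signal climbs up the network.
    h = np.tanh(X @ W1)
    out = 1 / (1 + np.exp(-(h @ W2)))

    # Error: how far the output is from the ideal output.
    error = out - y
    delta = error * out * (1 - out)

    # Backpropagation: each layer is told how far off it was.
    grad_W2 = h.T @ delta
    grad_W1 = X.T @ ((delta @ W2.T) * (1 - h ** 2))

    W2 -= lr * grad_W2 / len(X)
    W1 -= lr * grad_W1 / len(X)

print("final mean error:", float(np.abs(error).mean()))
```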
The brain does this too. How? We have no clue.
Biological No-Go
What’s clear, though, is that the deep learning solution can’t work as-is in the brain.
Backprop is a pretty needy function. It requires a very specific infrastructure for it to work as expected.
For one, each neuron in the network has to receive the error feedback. But in the brain, neurons are only connected to a few downstream partners (if that). For backprop to work in the brain, early-level neurons need to be able to receive information from billions of connections in their downstream circuits—a biological impossibility.
And while certain deep learning algorithms adopt a more local form of backprop—essentially between pairs of neurons—it requires the connections running forwards and backwards between them to be symmetric. This hardly ever occurs in the brain’s synapses.
More recent algorithms adopt a slightly different strategy, implementing a separate feedback pathway that helps the neurons figure out errors locally. While it’s more biologically plausible, the brain doesn’t have a separate computational network dedicated to the blame game.
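One well-studied variant of this idea, often called feedback alignment, carries the error backwards through a fixed random matrix instead of the transpose of the forward weights, so no neuron needs precise knowledge of its downstream connections. Here is a minimal sketch, reusing the toy setup from the backprop example above:

```python
# Feedback alignment sketch: the backward pass uses a fixed random matrix B
# instead of the transpose of the forward weights W2, so no neuron needs to
# "know" the exact forward weights of its downstream partners.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0

W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))
B = rng.normal(scale=0.1, size=(1, 16))  # fixed random feedback pathway
lr = 0.5

for step in range(500):
    h = np.tanh(X @ W1)
    out = 1 / (1 + np.exp(-(h @ W2)))
    delta = (out - y) * out * (1 - out)

    W2 -= lr * (h.T @ delta) / len(X)
    # The error travels back through B, not through W2.T:
    W1 -= lr * (X.T @ ((delta @ B) * (1 - h ** 2))) / len(X)

print("final mean error:", float(np.abs(out - y).mean()))
```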
What it does have are neurons with intricate structures, unlike the uniform “balls” that are currently applied in deep learning.
Branching Networks
The team took inspiration from pyramidal cells that populate the human cortex.
“Most of these neurons are shaped like trees, with ‘roots’ deep in the brain and ‘branches’ close to the surface,” says Richards. “What’s interesting is that these roots receive a different set of inputs than the branches that are way up at the top of the tree.”
This is an illustration of a multi-compartment neural network model for deep learning. Left: Reconstruction of pyramidal neurons from mouse primary visual cortex. Right: Illustration of simplified pyramidal neuron models. Image Credit: CIFAR
Curiously, the structure of neurons often turns out to be “just right” for efficiently cracking a computational problem. Take the processing of sensations: the bottoms of pyramidal neurons are right smack where they need to be to receive sensory input, whereas the tops are conveniently placed to transmit feedback errors.
Could this intricate structure be evolution’s solution to channeling the error signal?
The team set up a multi-layered neural network based on previous algorithms. But rather than having uniform neurons, they gave those in middle layers—sandwiched between the input and output—compartments, just like real neurons.
When trained with hand-written digits, the algorithm performed much better than a single-layered network, despite lacking a way to perform classical backprop. The cell-like structure itself was sufficient to assign error: the error signals at one end of the neuron are naturally kept separate from input at the other end.
Then, at the right moment, the neuron brings both sources of information together to find the best solution.
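The sketch below is not the paper’s exact learning rule, but it illustrates the compartment idea with a single toy hidden unit: a basal compartment integrates feedforward input, an apical compartment receives feedback from above, and the difference between the two drives a local weight change. All values are invented for illustration.

```python
# Conceptual sketch of a two-compartment hidden unit: the basal compartment
# integrates feedforward input, the apical compartment receives feedback from
# the layer above, and the difference between the two drives a local weight
# update. An illustration of the idea, not the paper's exact learning rule.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=8)                    # feedforward input to the unit
W_ff = rng.normal(scale=0.1, size=8)      # forward (basal) weights
w_out = 0.5                               # weight to a single output unit
target, lr = 0.3, 0.1

for step in range(200):
    basal = W_ff @ x                      # drive arriving at basal dendrites
    hidden = np.tanh(basal)
    output = hidden * w_out

    feedback = (target - output) * w_out  # error arriving at the apical tuft
    apical = np.tanh(basal + feedback)    # activity the unit "should" have had

    # Local update: nudge the forward weights toward the apical-driven target.
    W_ff += lr * (apical - hidden) * (1 - hidden ** 2) * x

print("final output:", round(float(np.tanh(W_ff @ x) * w_out), 3), "target:", target)
```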
There’s some biological evidence for this: neuroscientists have long known that the neuron’s input branches perform local computations, which can be integrated with signals that propagate backwards from the so-called output branch.
However, we don’t yet know if this is the brain’s way of dealing blame—a question that Richards urges neuroscientists to test out.
What’s more, the network parsed the problem in a way eerily similar to traditional deep learning algorithms: it took advantage of its multi-layered structure to extract progressively more abstract “ideas” about each number.
“[This is] the hallmark of deep learning,” the authors explain.
The Deep Learning Brain
Without doubt, there will be more twists and turns to the story as computer scientists incorporate more biological details into AI algorithms.
One aspect that Richards and team are already eyeing is a top-down predictive function, in which signals from higher levels directly influence how lower levels respond to input.
Feedback from upper levels doesn’t just provide error signals; it could also be nudging lower processing neurons towards a “better” activity pattern in real-time, says Richards.
The network doesn’t yet outperform other non-biologically derived (but “brain-inspired”) deep networks. But that’s not the point.
“Deep learning has had a huge impact on AI, but, to date, its impact on neuroscience has been limited,” the authors say.
Now neuroscientists have a lead they can experimentally test: that the structure of neurons underlies nature’s own deep learning algorithm.
“What we might see in the next decade or so is a real virtuous cycle of research between neuroscience and AI, where neuroscience discoveries help us to develop new AI and AI can help us interpret and understand our experimental data in neuroscience,” says Richards.
Image Credit: christitzeimaging.com / Shutterstock.com
