Tag Archives: test

#431851 Bend it like Kengoro and Kenshiro

These Japanese humanoids can replicate flexible human-like movement during physical workouts like push-ups, crunches, stretches and other whole-body exercises, to help researchers better understand how humans move during athletic sports, aid in the development of artificial limbs and whole bodies, …

Posted in Human Robots

#432021 Unleashing Some of the Most Ambitious ...

At Singularity University, we are unleashing a generation of women who are smashing through barriers and starting some of the most ambitious technology companies on the planet.

Singularity University was founded in 2008 to empower leaders to use exponential technologies to solve our world’s biggest challenges. Our flagship program, the Global Solutions Program, has historically brought 80 entrepreneurs from around the world to Silicon Valley for 10 weeks to learn about exponential technologies and create moonshot startups that improve the lives of a billion people within a decade.

After nearly 10 years of running this program, we can say that about 70 percent of our successful startups have been founded or co-founded by female entrepreneurs (see below for inspiring examples of their work). This is in sharp contrast to the typical 10–20 percent of venture-backed tech companies that have a female founder, as reported by TechCrunch.

How are we so dramatically changing the game? While 100 percent of the credit goes to these courageous women, as both an alumna of the Global Solutions Program and our current vice chair of Global Grand Challenges, I want to share my reflections on what has worked.

At the most basic level, it is essential to deeply believe in the inherent worth, intellectual genius, and profound entrepreneurial caliber of women. While this may seem obvious, this is not the way our world currently thinks—we live in a world that sees women’s ideas, contributions, work, and existence as inherently less valuable than men’s.

For example, a 2017 Harvard Business Review article noted that even when women engage in the same behaviors and work as men, their work is considered less valuable simply because a woman did the job. An additional 2017 Harvard Business Review article showed that venture capitalists are significantly less likely to invest in female entrepreneurs and are more likely to ask men questions about the potential success of their companies while grilling women about the potential downfalls of their companies.

This doubt and lack of recognition of the genius and caliber of women is also why women are still paid less than men for completing identical work. Further, it’s why women’s work often gets buried in “number two” support roles to men in leadership roles and why women are expected to take on second shifts at home managing tedious household chores in addition to their careers. I would also argue that these views, as well as the rampant sexual harassment, assault, and violence against women that exist today, stem from stubborn, historical, patriarchal views of women as living for the benefit of men, rather than for their own sovereignty and inherent value.

As with any other business, Singularity University has not been immune to these biases, but it is resolutely focused on helping women demonstrate their intellectual genius and global entrepreneurial caliber by harnessing powerful exponential technologies.

We create an environment where women can physically and intellectually thrive free of harassment to reach their full potential, and we are building a broader ecosystem of alumni and partners around the world who not only support our female entrepreneurs throughout their entrepreneurial journeys, but who are also sparking and leading systemic change in their own countries and communities.

Respecting the Intellectual Genius and Entrepreneurial Caliber of Women
The entrepreneurial legends of our time—Steve Jobs, Elon Musk, Mark Zuckerberg, Bill Gates, Jeff Bezos, Larry Page, Sergey Brin—are men who have all built their empires using exponential technologies. Exponential technologies helped these men succeed faster and with greater impact due to Moore’s Law and the Law of Accelerating Returns, which states that any digital technology (such as computing, software, artificial intelligence, robotics, quantum computing, biotechnology, nanotechnology, etc.) will become more sophisticated while dramatically falling in price, enabling rapid scaling.
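
The compounding these laws describe is easy to underestimate. As a back-of-the-envelope illustration (the function and the numbers are my own, not from the article), a technology whose price-performance doubles every two years improves roughly 32-fold in a decade and over 1,000-fold in two:

```python
# Sketch of Moore's-Law-style compounding: price-performance that doubles
# every `doubling_period` years grows exponentially, not linearly.
def price_performance(years, doubling_period=2.0, start=1.0):
    """Relative computations per dollar after `years` years."""
    return start * 2 ** (years / doubling_period)

print(price_performance(10))  # 32.0   (~32x after one decade)
print(price_performance(20))  # 1024.0 (~1,000x after two decades)
```

This is the kind of curve an entrepreneur can use to time a product launch for the moment the underlying technology becomes cheap enough to scale.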

Knowing this, an entrepreneur can plot her way to an ambitious global solution over time, releasing new applications just as the technology and market are ready. Furthermore, these rapidly advancing technologies often converge, creating new tools and opportunities for innovators to devise novel solutions to challenges that were previously impossible to solve.

For various reasons, women have not pursued exponential technologies as aggressively as men (or were prevented or discouraged from doing so).

While women in wealthy countries like the United States are founding firms at a higher rate than ever, the majority are small businesses in linear industries that have been around for hundreds of years, such as social assistance, health, education, administrative, or consulting services. In lower-income countries, international aid agencies and nonprofits often encourage women to pursue careers in traditional handicrafts, micro-enterprise, and micro-finance. While these jobs have historically helped women escape poverty and gain financial independence, they have done little to help women realize the enormous power, influence, wealth, and ability to transform the world for the better that come from building companies, nonprofits, and solutions grounded in exponential technologies.

We need women to be working with exponential technologies today in order to be powerful leaders in the future.

Participants who enroll in our Global Solutions Program spend the first few weeks of the program learning about exponential technologies from the world’s experts and the final weeks launching new companies or nonprofits in their area of interest. We require that women (as well as men) utilize exponential technologies as a condition of the program.

In this sense, at Singularity University women start their endeavors with all of us believing and behaving in a way that assumes they can achieve global impact at the level of our world’s most legendary entrepreneurs.

Creating an Environment Where Women Can Thrive
While challenging women to embrace exponential technologies is essential, it is also important to create an environment where women can thrive. In particular, this means ensuring women feel at home on our campus by ensuring gender diversity, aggressively addressing sexual harassment, and flipping the traditional culture from one that penalizes women, to one that values and supports them.

While women were initially only a small minority of our Global Solutions Program, in 2014 we achieved around 50 percent female attendance—a statistic that has held in the years since.

This is not due to a quota—every year we turn away extremely qualified women from our program (and we are working on reformulating the program to allow more people to participate in the future). While part of our recruiting success is due to the efforts of our marketing team, we have also benefited from the efforts of some of our early female founders, staff, faculty, and alumnae, including Susan Fonseca, Emeline Paat-Dahlstrom, Kathryn Myronuk, Lajuanda Asemota, Chiara Giovenzana, and Barbara Silva Tronseca.

As early champions of Singularity University, these women not only launched diversity initiatives and personally reached out to women, but were also crucial role models holding leadership roles in our community. In addition, Fonseca and Silva both created multiple organizations and initiatives outside of (or in conjunction with) the university that produced additional pipelines of female candidates. In particular, Fonseca founded Women@TheFrontier as well as other organizations focusing on women, technology, and innovation, while Silva founded BestInnovation (a women’s accelerator in Latin America), led Singularity University’s Chilean Chapter, and founded the first SingularityU Summit in Latin America.

These women’s efforts in globally scaling Singularity University have been critical in ensuring women around the world now see Singularity University as a place where they can lead and shape the future.

Also, thanks to Google (Alphabet) and many of our alumni and partners, we were able to provide full scholarships to any woman (or man) to attend our program regardless of economic status. Google committed significant funding for full scholarships, while our partners around the world also hosted numerous Global Impact Competitions, where entrepreneurs pitched their solutions to their local communities, with the winners earning a partner-funded full scholarship to attend the Global Solutions Program.

Google and our partners’ support helped individuals attend our program and created a wider buzz around exponential technology and social change in local communities around the world. It led to the founding of 110 SU chapters in 55 countries.

Another vital aspect of our work in supporting women has been trying to create a harassment-free environment. Throughout Silicon Valley, more than 60 percent of women report that while they are trying to build their companies or get their work done, they are also dealing with physical and sexual harassment while being demeaned and excluded in other ways in the workplace. We have taken actions to educate and train our staff on how to deal with situations should they occur. All staff receive training on harassment when they join Singularity University, and all Global Solutions Program participants attend mandatory training on sexual harassment when they first arrive on campus. We also have male and female wellness counselors available who can offer support to both individuals and teams of entrepreneurs throughout the entire program.

While at a minimum our campus must be physically safe for women, we also strive to create a culture that values women and supports them in the additional challenges and expectations they face. For example, one of our 2016 female participants, Van Duesterberg, was pregnant during the program and said that instead of having people doubt her commitment to her startup or make her prove she could handle having a child and running a start-up at the same time, people went out of their way to help her.

“I was the epitome of a person not supposed to be doing a startup,” she said. “I was pregnant and would need to take care of my child. But Singularity University was supportive and encouraging. They made me feel super-included and that it was possible to do both. I continue to come back to campus even though the program is over because the network welcomes me and supports me rather than shuts me out because of my physical limitations. Rather than making me feel I had to prove myself, everyone just understood me and supported me, whether it was bringing me healthy food or recommending funders.”

Another strength that we have in supporting women is that after the Global Solutions Program, entrepreneurs have access to a much larger ecosystem.

Many entrepreneurs partake in SU Ventures, which can provide further support to startups as they develop, and we now have a larger community of over 200,000 people in almost every country. These members have often attended other Singularity University programs and events and are committed to our vision of the future. They include business executives, Fortune 500 companies, investors, nonprofit and government leaders, technologists, members of the media, and other movers and shakers in the world. They have made introductions for our founders, collaborated with them on business ventures, invested in them, and showcased their work at high-profile events around the world.

Building for the Future
While our Global Solutions Program is making great strides in supporting female entrepreneurs, there is always more work to do. We are now focused on achieving the same degree of female participation across all of our programs and actively working to recruit and feature more female faculty and speakers on stage. As our community grows and scales around the world, we are also focused on how best to uphold our values and policies around sexual harassment across diverse locations and cultures. And like all businesses everywhere, we are working to recruit more women to serve at senior leadership levels within SU. As we make our way forward, we hope that you will join us in boldly leading this change and recognizing the genius and power of female entrepreneurs.

Meet Some of Our Female Moonshots
While we have many remarkable female entrepreneurs in the Singularity University community, the list below features a few of the women who have founded or co-founded companies at the Global Solutions Program that have launched new industries and are on their way to changing the way our world works for millions if not billions of people.

Jessica Scorpio co-founded Getaround in 2009. Getaround was one of the first car-sharing platforms, allowing anyone to rent out their car using a smartphone app. Getaround was a revolutionary idea in 2009, not only because smartphones and apps were still in their infancy, but because it was unthinkable that a technology startup could disrupt the major entrenched car, transport, and logistics companies. Scorpio’s early insights and pioneering entrepreneurial work brought to life new ways that humans relate to car sharing and the future self-driving car industry. Scorpio and Getaround have won numerous awards, and Getaround now serves over 200,000 members.

Paola Santana co-founded Matternet in 2011, which pioneered the commercial drone transport industry. In 2011, only the military, hobbyists, or the film industry used drones. Matternet demonstrated that drones could be used for commercial transport in short point-to-point deliveries of high-value goods, laying the groundwork for drone transport around the world as well as some of the early thinking behind the future flying car industry. Santana was also instrumental in shaping regulations for the use of commercial drones around the world, making the industry possible.

Sara Naseri co-founded Qurasense in 2014, a life sciences start-up that analyzes women’s health through menstrual blood, allowing women to track their health every month. Naseri is shifting our understanding of menstrual blood from a waste product and something “not to be talked about” to a rich, non-invasive, abundant source of information about women’s health.

Abi Ramanan co-founded ImpactVision in 2015, a software company that rapidly analyzes the quality and characteristics of food through hyperspectral images. Her long-term vision is to digitize food supply chains to reduce waste and fraud, given that one-third of all food is currently wasted before it reaches our plates. Ramanan is also helping the world understand that hyperspectral technology can be used in many industries to help us “see the unseen” and augment our ability to sense and understand what is happening around us in a much more sophisticated way.

Anita Schjøll Brede and Maria Ritola co-founded Iris AI in 2015, an artificial intelligence company that is building an AI research assistant to drastically improve the efficiency of R&D and break down silos between different industries. Their long-term vision is for Iris AI to become smart enough to be a scientist herself. Fast Company named Iris AI one of the 10 most innovative artificial intelligence companies of 2017.

Hla Hla Win co-founded 360ed in 2016, a startup that conducts teacher training and student education through virtual reality and augmented reality in Myanmar. They have already connected teachers from 128 private schools in Myanmar with schools teaching 21st-century skills in Silicon Valley and around the world. Their moonshot is to build a platform where any teacher in the world can share best practices in teacher training. As they succeed, millions of children in some of the poorest parts of the world will have access to a 21st-century education.

Min FitzGerald and Van Duesterberg co-founded Nutrigene in 2017, a startup that ships freshly formulated, tailor-made supplement elixirs directly to consumers. Their long-term vision is to help people optimize their health using actionable data insights, so people can take a guided, tailored approach to thriving into longevity.

Anna Skaya co-founded Basepaws in 2016, which created the first genetic test for cats and is building a community of citizen scientist pet owners. They are creating personalized pet products such as supplements, therapeutics, treats, and toys while also developing a database of genetic data for future research that will help both humans and pets over the long term.

Olivia Ramos co-founded Deep Blocks in 2016, a startup using artificial intelligence to integrate and streamline the processes of architecture, pre-construction, and real estate. As digital technologies, artificial intelligence, and robotics advance, it no longer makes sense for these industries to exist separately. Ramos recognized the tremendous value and efficiency that it is now possible to unlock by using exponential technologies to create an integrated industry.

Please also visit our website to learn more about other female entrepreneurs, staff and faculty who are pioneering the future through exponential technologies.

Posted in Human Robots

#431928 How Fast Is AI Progressing? Stanford’s ...

When? This is probably the question that futurists, AI experts, and even people with a keen interest in technology dread the most. It has proved famously difficult to predict when new developments in AI will take place. The scientists at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 thought that perhaps two months would be enough to make “significant advances” in a whole range of complex problems, including computers that can understand language, improve themselves, and even understand abstract concepts.
Sixty years later, these problems are not yet solved. The AI Index, from Stanford, is an attempt to measure how much progress has been made in artificial intelligence.
The index adopts a unique approach, trying to aggregate data across many regimes. It contains Volume of Activity metrics, which measure things like venture capital investment, attendance at academic conferences, published papers, and so on. The results are what you might expect: tenfold increases in academic activity since 1996, explosive growth in startups focused on AI, and corresponding venture capital investment. The issue with this metric is that it measures AI hype as much as AI progress. The two might be correlated, but then again, they may not be.
The index also scrapes data from the popular coding website GitHub, which hosts more source code than any other site in the world. From this, it can track the amount of AI-related software people are creating, as well as interest levels in popular machine learning packages like TensorFlow and Keras. The index also keeps track of the sentiment of news articles that mention AI: surprisingly, given concerns about the apocalypse and an employment crisis, articles considered “positive” outweigh the “negative” by three to one.
But again, this could all just be a measure of AI enthusiasm in general.
No one would dispute that we’re in an age of considerable AI hype, but the progress of AI is littered with booms and busts in hype, growth spurts that alternate with AI winters. So the AI Index attempts to track the progress of algorithms against a series of tasks. How well does computer vision perform at the Large Scale Visual Recognition Challenge? (Superhuman at annotating images since 2015, though systems still can’t answer questions about images very well, a task that combines natural language processing and image recognition.) Speech recognition on phone calls is almost at parity with humans.
In other narrow fields, AIs are still catching up to humans. Translation might be good enough that you can usually get the gist of what’s being said, but systems still score poorly on the BLEU metric for translation accuracy. The AI Index even keeps track of how well programs can do on the SAT, so if you’ve taken it, you can compare your score to an AI’s.
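
For readers curious what a metric like BLEU actually measures: at its core, it counts n-gram overlap between a machine translation and a human reference. The sketch below is my own simplification, not the full metric (real BLEU combines 1- to 4-gram precisions geometrically and adds a brevity penalty), but it shows the basic idea:

```python
# Simplified n-gram precision, the building block of BLEU: what fraction
# of the candidate's n-grams also appear in the reference (counts clipped
# so a repeated n-gram can't be credited more times than it occurs).
from collections import Counter

def ngram_precision(candidate, reference, n=2):
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum(min(count, ref[g]) for g, count in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

ref = "the cat sat on the mat".split()
good = "the cat sat on a mat".split()
bad = "mat the on sat cat".split()
print(ngram_precision(good, ref))  # 0.6
print(ngram_precision(bad, ref))   # 0.0
```

Note that the scrambled sentence scores zero even though every individual word matches the reference, which is why n-gram metrics reward fluency as well as word choice.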
Measuring the performance of state-of-the-art AI systems on narrow tasks is useful and fairly easy to do. You can define a metric that’s simple to calculate, or devise a competition with a scoring system, and compare new software with old in a standardized way. Academics can always debate about the best method of assessing translation or natural language understanding. The Loebner prize, a simplified question-and-answer Turing Test, recently adopted Winograd Schema type questions, which rely on contextual understanding. AI has more difficulty with these.
Where the assessment really becomes difficult, though, is in trying to map these narrow-task performances onto general intelligence. This is hard because of a lack of understanding of our own intelligence. Computers are superhuman at chess, and now even at a more complex game like Go. The braver predictors who came up with timelines found AlphaGo’s success faster than expected, but does this necessarily mean we’re closer to general intelligence than they thought?
Here is where it’s harder to track progress.
We can note the specialized performance of algorithms on tasks previously reserved for humans—for example, the index cites a Nature paper that shows AI can now predict skin cancer with more accuracy than dermatologists. We could even try to track one specific approach to general AI; for example, how many regions of the brain have been successfully simulated by a computer? Alternatively, we could simply keep track of the number of professions and professional tasks that can now be performed to an acceptable standard by AI.

“We are running a race, but we don’t know how to get to the endpoint, or how far we have to go.”

Progress in AI over the next few years is far more likely to resemble a gradual rising tide—as more and more tasks can be turned into algorithms and accomplished by software—rather than the tsunami of a sudden intelligence explosion or general intelligence breakthrough. Perhaps we could measure the ability of an AI system to learn and adapt to the work routines of humans in office-based tasks.
The AI Index doesn’t attempt to offer a timeline for general intelligence, as this is still too nebulous and confused a concept.
Michael Wooldridge, head of Computer Science at the University of Oxford, notes, “The main reason general AI is not captured in the report is that neither I nor anyone else would know how to measure progress.” He is concerned about another AI winter, and about overhyped “charlatans and snake-oil salesmen” exaggerating the progress that has been made.
A key concern that all the experts bring up is the ethics of artificial intelligence.
Of course, you don’t need general intelligence to have an impact on society; algorithms are already transforming our lives and the world around us. After all, why are Amazon, Google, and Facebook worth any money? The experts agree on the need for an index to measure the benefits of AI, the interactions between humans and AIs, and our ability to program values, ethics, and oversight into these systems.
Barbara Grosz of Harvard champions this view, saying, “It is important to take on the challenge of identifying success measures for AI systems by their impact on people’s lives.”
For those concerned about the AI employment apocalypse, tracking the use of AI in the fields considered most vulnerable (say, self-driving cars replacing taxi drivers) would be a good idea. Society’s flexibility for adapting to AI trends should be measured, too; are we providing people with enough educational opportunities to retrain? How about teaching them to work alongside the algorithms, treating them as tools rather than replacements? The experts also note that the data suffers from being US-centric.
We are running a race, but we don’t know how to get to the endpoint, or how far we have to go. We are judging by the scenery, and how far we’ve run already. For this reason, measuring progress is a daunting task that starts with defining progress. But the AI Index, as an annual collection of relevant information, is a good start.
Image Credit: Photobank gallery / Shutterstock.com

Posted in Human Robots

#431869 When Will We Finally Achieve True ...

The field of artificial intelligence goes back a long way, but many consider it to have been officially born when a group of scientists at Dartmouth College got together for a summer back in 1956. Computers had, over the preceding decades, come on in incredible leaps and bounds; they could now perform calculations far faster than humans. Optimism, given the incredible progress that had been made, was rational. Genius computer scientist Alan Turing had already mooted the idea of thinking machines just a few years before. The scientists had a fairly simple idea: intelligence is, after all, just a mathematical process. The human brain is a type of machine. Pick apart that process, and you can make a machine simulate it.
The problem didn’t seem too hard: the Dartmouth scientists wrote, “We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.” This research proposal, by the way, contains one of the earliest uses of the term “artificial intelligence.” They had a number of ideas—maybe simulating the human brain’s pattern of neurons could work, and teaching machines the abstract rules of human language would be important.
The scientists were optimistic, and their efforts were rewarded. Before too long, they had computer programs that seemed to understand human language and could solve algebra problems. People were confidently predicting there would be a human-level intelligent machine built within, oh, let’s say, the next twenty years.
It’s fitting that the industry of predicting when we’d have human-level intelligent AI was born at around the same time as the AI industry itself. In fact, it goes all the way back to Turing’s first paper on “thinking machines,” where he predicted that the Turing Test—machines that could convince humans they were human—would be passed in 50 years, by 2000. Nowadays, of course, people are still predicting it will happen within the next 20 years, perhaps most famously Ray Kurzweil. There are so many different surveys of experts and analyses that you almost wonder if AI researchers aren’t tempted to come up with an auto reply: “I’ve already predicted what your question will be, and no, I can’t really predict that.”
The issue with trying to predict the exact date of human-level AI is that we don’t know how far there is left to go. This is unlike Moore’s Law. Moore’s Law, the doubling of processing power roughly every couple of years, makes a very concrete prediction about a very specific phenomenon. We understand roughly how to get there—improved engineering of silicon wafers—and we know we’re not at the fundamental limits of our current approach (at least, not until you’re trying to work on chips at the atomic scale). You cannot say the same about artificial intelligence.
Common Mistakes
Stuart Armstrong’s survey looked for trends in these predictions. Specifically, there were two major cognitive biases he was looking for. The first was the idea that AI experts predict true AI will arrive (and make them immortal) conveniently just before they’d be due to die. This is the “Rapture of the Nerds” criticism people have leveled at Kurzweil—his predictions are motivated by fear of death, desire for immortality, and are fundamentally irrational. The ability to create a superintelligence is taken as an article of faith. There are also criticisms by people working in the AI field who know first-hand the frustrations and limitations of today’s AI.
The second was the idea that people always pick a time span of 15 to 20 years. That’s long enough to convince people the predictor is working on something that could prove revolutionary very soon (people are less impressed by efforts that will lead to tangible results centuries down the line), but not so long that the predictor risks being embarrassingly proved wrong. Of the two, Armstrong found more evidence for the second one—people were perfectly happy to predict AI after they died, although most didn’t, but there was a clear bias towards “15–20 years from now” in predictions throughout history.
Measuring Progress
Armstrong points out that, if you want to assess the validity of a specific prediction, there are plenty of parameters you can look at. For example, the idea that human-level intelligence will be developed by simulating the human brain does at least give you a clear pathway that allows you to assess progress. Every time we get a more detailed map of the brain, or successfully simulate another part of it, we can tell that we are progressing towards this eventual goal, which will presumably end in human-level AI. We may not be 20 years away on that path, but at least you can scientifically evaluate the progress.
Compare this to those who say AI, or else consciousness, will “emerge” if a network is sufficiently complex, given enough processing power. This might be how we imagine human intelligence and consciousness emerged during evolution—although evolution had billions of years, not just decades. The issue with this is that we have no empirical evidence: we have never seen consciousness manifest itself out of a complex network. Not only do we not know whether this is possible, we cannot know how far away we are from reaching it, as we can’t even measure progress along the way.
There is an immense difficulty in understanding which tasks are hard, which has continued from the birth of AI to the present day. Just look at that original research proposal, where understanding human language, randomness and creativity, and self-improvement are all mentioned in the same breath. We have great natural language processing, but do our computers understand what they’re processing? We have AI that can randomly vary to be “creative,” but is it creative? Exponential self-improvement of the kind the singularity often relies on seems far away.
We also struggle to understand what’s meant by intelligence. For example, AI experts consistently underestimated the ability of AI to play Go. Many thought, in 2015, it would take until 2027. In the end, it took two years, not twelve. But does that mean AI is any closer to being able to write the Great American Novel, say? Does it mean it’s any closer to conceptually understanding the world around it? Does it mean that it’s any closer to human-level intelligence? That’s not necessarily clear.
Not Human, But Smarter Than Humans
But perhaps we’ve been looking at the wrong problem. For example, the Turing Test has not yet been passed in the sense that AI cannot convince people it’s human in conversation; but machines’ calculating ability, and perhaps soon their ability to perform other tasks like pattern recognition and driving cars, far exceeds human levels. As “weak” AI algorithms make more decisions, and Internet of Things evangelists and tech optimists seek to find more ways to feed more data into more algorithms, the impact on society of this “artificial intelligence” can only grow.
It may be that we don’t yet have the mechanism for human-level intelligence, but it’s also true that we don’t know how far we can go with the current generation of algorithms. Those scary surveys that state automation will disrupt society and change it in fundamental ways don’t rely on nearly as many assumptions about some nebulous superintelligence.
Then there are those who point out we should be worried about AI for other reasons. Just because we can’t say for sure whether human-level AI will arrive this century, or ever, doesn’t mean we shouldn’t prepare for the possibility that the optimistic predictors are correct. We need to ensure that human values are programmed into these algorithms, so that they understand the value of human life and can act in “moral, responsible” ways.
Phil Torres, at the Project for Future Human Flourishing, expressed it well in an interview with me. He points out that if we suddenly decided, as a society, that we had to solve the problem of morality—determine what was right and wrong and feed it into a machine—in the next twenty years…would we even be able to do it?
So, we should take predictions with a grain of salt. Remember, it turned out the problems the AI pioneers foresaw were far more complicated than they anticipated. The same could be true today. At the same time, we cannot be unprepared. We should understand the risks and take our precautions. When those scientists met in Dartmouth in 1956, they had no idea of the vast, foggy terrain before them. Sixty years later, we still don’t know how much further there is to go, or how far we can go. But we’re going somewhere.
Image Credit: Ico Maker / Shutterstock.com Continue reading

Posted in Human Robots

#431836 Do Our Brains Use Deep Learning to Make ...

The first time Dr. Blake Richards heard about deep learning, he was convinced that he wasn’t just looking at a technique that would revolutionize artificial intelligence. He also knew he was looking at something fundamental about the human brain.
That was the early 2000s, and Richards was taking a course with Dr. Geoff Hinton at the University of Toronto. Hinton, a pioneer architect of the algorithm that would later take the world by storm, was offering an introductory course on his learning method inspired by the human brain.
The key words here are “inspired by.” Despite Richards’ conviction, the odds were stacked against him. The human brain, as it happens, seems to lack a critical function that’s programmed into deep learning algorithms. On the surface, the algorithms were violating basic biological facts already proven by neuroscientists.
But what if, superficial differences aside, deep learning and the brain are actually compatible?
Now, in a new study published in eLife, Richards, working with DeepMind, proposed a new algorithm based on the biological structure of neurons in the neocortex. Also known as the cortex, this outermost region of the brain is home to higher cognitive functions such as reasoning, prediction, and flexible thought.
The team networked their artificial neurons together into a multi-layered network and challenged it with a classic computer vision task—identifying hand-written numbers.
The new algorithm performed well. But the kicker is that it analyzed the learning examples in a way that’s characteristic of deep learning algorithms, even though it was completely based on the brain’s fundamental biology.
“Deep learning is possible in a biological framework,” concludes the team.
Because the model is only a computer simulation at this point, Richards hopes to pass the baton to experimental neuroscientists, who could actively test whether the algorithm operates in an actual brain.
If so, the data could then be passed back to computer scientists to work out the next generation of massively parallel and low-energy algorithms to power our machines.
It’s a first step towards merging the two fields back into a “virtuous circle” of discovery and innovation.
The blame game
While you’ve probably heard of deep learning’s recent wins against humans in the game of Go, you might not know the nitty-gritty behind the algorithm’s operations.
In a nutshell, deep learning relies on an artificial neural network with virtual “neurons.” Like a towering skyscraper, the network is structured into hierarchies: lower-level neurons process aspects of an input—for example, a horizontal or vertical stroke that eventually forms the number four—whereas higher-level neurons extract more abstract aspects of the number four.
To teach the network, you give it examples of what you’re looking for. The signal propagates forward in the network (like climbing up a building), where each neuron works to fish out something fundamental about the number four.
Like a child trying a skill for the first time, the network initially doesn’t do so well. It spits out what it thinks a universal number four should look like—think a Picasso-esque rendition.
But here’s where the learning occurs: the algorithm compares the output with the ideal output, and computes the difference between the two (dubbed “error”). This error is then “backpropagated” throughout the entire network, telling each neuron: hey, this is how far off you were, so try adjusting your computation closer to the ideal.
Millions of examples and tweakings later, the network inches closer to the desired output and becomes highly proficient at the trained task.
This error signal is crucial for learning. Without efficient “backprop,” the network doesn’t know which of its neurons are off kilter. By assigning blame, the AI can better itself.
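The loop described above fits in a few lines of plain numpy. Everything in this sketch (the toy XOR task, the network size, the learning rate) is an illustrative choice, not taken from any system mentioned in the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR, a classic pattern no single-layer network can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 virtual "neurons".
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # Forward pass: the signal climbs the network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Error: how far the output is from the ideal output.
    err = out - y
    losses.append(float(np.mean(err ** 2)))

    # Backpropagation: blame flows back down, layer by layer.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Each weight is nudged against its share of the blame.
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)
```

Each pass computes the error at the output, converts it into per-neuron blame via the transposed weights, and nudges every weight a little; over many iterations the loss shrinks.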
The brain does this too. How? We have no clue.
Biological No-Go
What’s clear, though, is that deep learning’s solution can’t be how the brain does it, at least not directly.
Backprop is a pretty needy function. It requires a very specific infrastructure for it to work as expected.
For one, each neuron in the network has to receive the error feedback. But in the brain, neurons are only connected to a few downstream partners (if that). For backprop to work in the brain, early-level neurons need to be able to receive information from billions of connections in their downstream circuits—a biological impossibility.
And while certain deep learning algorithms adopt a more local form of backprop, passing error only between neighboring neurons, they require the forward and backward connections between those neurons to be symmetric. That symmetry hardly ever occurs in the brain’s synapses.
More recent algorithms adopt a slightly different strategy, implementing a separate feedback pathway that helps the neurons figure out errors locally. While that’s more biologically plausible, the brain doesn’t have a separate computational network dedicated to the blame game.
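One concrete relaxation of the symmetry requirement is known in the literature as feedback alignment: the error travels backwards through fixed random weights instead of the transposed forward weights, so no synapse needs a symmetric twin. A minimal numpy sketch, with a toy task and dimensions that are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8))
W2 = rng.normal(0, 1, (8, 1))
B = rng.normal(0, 1, (1, 8))  # fixed random feedback weights: never updated,
                              # and never required to mirror W2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    err = out - y
    losses.append(float(np.mean(err ** 2)))

    d_out = err * out * (1 - out)
    # Blame travels back through B instead of W2.T, so the backward
    # pathway need not be a symmetric copy of the forward one.
    d_h = (d_out @ B) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h
```

Remarkably, the forward weights tend to adjust themselves to make the random feedback useful, so learning still proceeds.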
What it does have are neurons with intricate structures, unlike the uniform “balls” that are currently applied in deep learning.
Branching Networks
The team took inspiration from pyramidal cells that populate the human cortex.
“Most of these neurons are shaped like trees, with ‘roots’ deep in the brain and ‘branches’ close to the surface,” says Richards. “What’s interesting is that these roots receive a different set of inputs than the branches that are way up at the top of the tree.”
This is an illustration of a multi-compartment neural network model for deep learning. Left: Reconstruction of pyramidal neurons from mouse primary visual cortex. Right: Illustration of simplified pyramidal neuron models. Image Credit: CIFAR
Curiously, the structure of a neuron often turns out to be “just right” for efficiently cracking a computational problem. Take the processing of sensations: the bottoms of pyramidal neurons are right smack where they need to be to receive sensory input, whereas the tops are conveniently placed to transmit feedback errors.
Could this intricate structure be evolution’s solution to channeling the error signal?
The team set up a multi-layered neural network based on previous algorithms. But rather than having uniform neurons, they gave those in middle layers—sandwiched between the input and output—compartments, just like real neurons.
When trained with hand-written digits, the algorithm performed much better than a single-layered network, despite lacking a way to perform classical backprop. The cell-like structure itself was sufficient to assign error: the error signals at one end of the neuron are naturally kept separate from input at the other end.
Then, at the right moment, the neuron brings both sources of information together to find the best solution.
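A cartoon of that separation (emphatically not the study’s actual model, which uses far more realistic dynamics and learning rules; every name and number here is illustrative) keeps the bottom-up and top-down streams in separate compartments and combines them only when it’s time to update the weights:

```python
import numpy as np

class TwoCompartmentNeuron:
    """Caricature of a pyramidal cell: feedforward input arrives at the
    basal compartment, top-down feedback at the apical compartment, and
    the two are combined only when a learning signal is needed."""

    def __init__(self, n_inputs, n_feedback, rng):
        self.w_basal = rng.normal(0, 0.1, n_inputs)     # bottom-up weights
        self.w_apical = rng.normal(0, 0.1, n_feedback)  # top-down weights

    def forward(self, x):
        # Ordinary activity depends only on the basal compartment.
        self.x = x
        self.basal = self.w_basal @ x
        return np.tanh(self.basal)

    def feedback(self, top_down):
        # Feedback accumulates in the apical compartment, kept
        # separate from the ongoing bottom-up drive.
        self.apical = self.w_apical @ top_down

    def plasticity_update(self, lr=0.01):
        # "At the right moment" the two sources meet: the apical
        # signal acts as a target for the basal drive.
        delta = self.apical - self.basal
        self.w_basal += lr * delta * self.x
        return delta

# Illustrative usage: one forward pass, one feedback signal, one update.
rng = np.random.default_rng(1)
cell = TwoCompartmentNeuron(n_inputs=4, n_feedback=3, rng=rng)
act = cell.forward(np.ones(4))
cell.feedback(np.ones(3))
d = cell.plasticity_update()
```

The point of the design is that no separate error network is needed: the cell’s own geometry keeps the two streams apart until they are compared.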
There’s some biological evidence for this: neuroscientists have long known that the neuron’s input branches perform local computations, which can be integrated with signals that propagate backwards from the so-called output branch.
However, we don’t yet know if this is the brain’s way of dealing blame—a question that Richards urges neuroscientists to test out.
What’s more, the network parsed the problem in a way eerily similar to traditional deep learning algorithms: it took advantage of its multi-layered structure to extract progressively more abstract “ideas” about each number.
“[This is] the hallmark of deep learning,” the authors explain.
The Deep Learning Brain
Without doubt, there will be more twists and turns to the story as computer scientists incorporate more biological details into AI algorithms.
One aspect that Richards and team are already eyeing is a top-down predictive function, in which signals from higher levels directly influence how lower levels respond to input.
Feedback from upper levels doesn’t just provide error signals; it could also nudge lower processing neurons toward a “better” activity pattern in real time, says Richards.
The network doesn’t yet outperform other non-biologically derived (but “brain-inspired”) deep networks. But that’s not the point.
“Deep learning has had a huge impact on AI, but, to date, its impact on neuroscience has been limited,” the authors say.
Now neuroscientists have a lead they can test experimentally: that the structure of neurons underlies nature’s own deep learning algorithm.
“What we might see in the next decade or so is a real virtuous cycle of research between neuroscience and AI, where neuroscience discoveries help us to develop new AI and AI can help us interpret and understand our experimental data in neuroscience,” says Richards.
Image Credit: christitzeimaging.com / Shutterstock.com Continue reading

Posted in Human Robots