#434303 Making Superhumans Through Radical ...
Imagine trying to read War and Peace one letter at a time. The thought alone feels excruciating. But in many ways, this painful idea parallels how human-machine interfaces (HMIs) force us to interact with and process data today.
Designed back in the 1970s at Xerox PARC and later refined during the 1980s by Apple, today’s HMI was conceived in a fundamentally different era—specifically, before people and machines were generating so much data. Fast forward to 2019, when humans are estimated to produce 44 zettabytes of data—equal to two stacks of books reaching from here to Pluto—and we are still using the same HMI from the 1970s.
These dated interfaces are not equipped to handle today’s exponential rise in data, which has been ushered in by the rapid dematerialization of many physical products into computers and software.
Breakthroughs in perceptual and cognitive computing, especially machine learning algorithms, are enabling technology to process vast volumes of data, and in doing so, they are dramatically amplifying our brain’s abilities. Yet even with these powerful technologies that at times make us feel superhuman, the interfaces remain hampered by poor ergonomics.
Many interfaces are still designed around the concept that human interaction with technology is secondary, not instantaneous. This means that any time someone uses technology, they are inevitably multitasking, because they must simultaneously perform a task and operate the technology.
If our aim, however, is to create technology that truly extends and amplifies our mental abilities so that we can offload important tasks, the technology that helps us must not also overwhelm us in the process. We must reimagine interfaces to work in coherence with how our minds function in the world so that our brains and these tools can work together seamlessly.
Embodied Cognition
Most technology is designed to serve either the mind or the body. It is a problematic divide, because our brains use our entire body to process the world around us. Said differently, our minds and bodies do not operate distinctly. Our minds are embodied.
Studies using MRI scans have shown that when a person feels an emotion in their gut, blood actually moves to that area of the body. The body and the mind are linked in this way, sharing information back and forth continuously.
Current technology presents data to the brain differently from how the brain processes data. Our brains, for example, use sensory data to continually encode and decipher patterns within the neocortex. Our brains do not create a linguistic label for each item, which is how the majority of machine learning systems operate, nor do our brains have an image associated with each of these labels.
Our bodies move information through us instantaneously, in a sense “computing” at the speed of thought. What if our technology could do the same?
Using Cognitive Ergonomics to Design Better Interfaces
Well-designed physical tools, as philosopher Martin Heidegger observed in his meditation on the hammer, seem to disappear into the hand. They are designed to amplify a human ability without getting in the way.
The aim of physical ergonomics is to understand the mechanical movement of the human body and then adapt a physical system accordingly to amplify human output. By understanding the movement of the body, physical ergonomics enables ergonomically sound physical affordances—or conditions—so that the mechanical movement of the body and the mechanical movement of the machine can work together harmoniously.
Cognitive ergonomics applied to HMI design uses this same idea of amplifying output, but rather than focusing on physical output, the focus is on mental output. By understanding the raw materials the brain uses to comprehend information and form an output, cognitive ergonomics allows technologists and designers to create technological affordances so that the brain can work seamlessly with interfaces and remove the interruption costs of our current devices. In doing so, the technology itself “disappears,” and a person’s interaction with technology becomes fluid and primary.
By leveraging cognitive ergonomics in HMI design, we can create a generation of interfaces that process and present data the same way humans process real-world information: through fully sensory interfaces.
Several brain-machine interfaces are already on the path to achieving this. AlterEgo, a wearable device developed by MIT researchers, uses electrodes to detect and understand nonverbal prompts, which enables the device to read the user’s mind and act as an extension of the user’s cognition.
Another notable example is the BrainGate neural device, created by researchers at Stanford University. Just two months ago, a study was released showing that this brain implant system allowed paralyzed patients to navigate an Android tablet with their thoughts alone.
These are two extraordinary examples of what is possible for the future of HMI, but there is still a long way to go to bring cognitive ergonomics front and center in interface design.
Disruptive Innovation Happens When You Step Outside Your Existing Users
Most of today’s interfaces are designed by a narrow population, made up predominantly of white, non-disabled men who are highly proficient users of technology (you may recall the viral 2016 New York Times article, Artificial Intelligence’s White Guy Problem). If you ask this population whether there is a problem with today’s HMIs, most will say no, because the technology has been designed to serve them.
This lack of diversity means a limited perspective is being brought to interface design, which is problematic if we want HMI to evolve and work seamlessly with the brain. To use cognitive ergonomics in interface design, we must first gain a more holistic understanding of how people with different abilities understand the world and how they interact with technology.
Underserved groups, such as people with physical disabilities, occupy what Clayton Christensen, in The Innovator’s Dilemma, called the fringe segment of a market. Developing solutions that cater to fringe groups can in fact disrupt the larger market by opening up a much larger market from below.
Learning From Underserved Populations
When technology fails to serve a group of people, that group must adapt the technology to meet their needs.
The workarounds they create are often ingenious, precisely because they are born not of preference but of necessity, which forces underserved users to approach the technology from a very different vantage point.
When a designer or technologist begins learning from this new viewpoint and understanding challenges through a different lens, they can bring new perspectives to design—perspectives that otherwise can go unseen.
Designers and technologists can also learn from people with physical disabilities who interact with the world by leveraging other senses that help them compensate for one they may lack. For example, some blind people use echolocation to detect objects in their environments.
The BrainPort device developed by Wicab is an incredible example of technology leveraging one human sense to serve or complement another. The BrainPort device captures environmental information with a wearable video camera and converts this data into soft electrical stimulation sequences that are sent to a device on the user’s tongue—the most sensitive touch receptor in the body. The user learns how to interpret the patterns felt on their tongue, and in doing so, becomes able to “see” with their tongue.
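To make that pipeline concrete, here is a hypothetical sketch of the kind of transformation such a device performs: downsampling a grayscale camera frame to a coarse grid and quantizing brightness into discrete stimulation levels. The grid size, number of levels, and mapping below are illustrative assumptions, not Wicab’s actual specification.

```python
# Hypothetical sensory-substitution sketch (not Wicab's implementation):
# map a grayscale camera frame to a coarse grid of stimulation intensities.
import numpy as np

GRID = 20    # electrode grid size: an illustrative assumption
LEVELS = 8   # number of discrete stimulation intensities: also assumed

def frame_to_stimulation(frame: np.ndarray) -> np.ndarray:
    """Convert a grayscale frame (H x W, values 0-255) to a GRID x GRID
    array of stimulation levels in [0, LEVELS - 1]."""
    h, w = frame.shape
    bh, bw = h // GRID, w // GRID
    # Average the brightness inside each block of pixels.
    blocks = frame[:bh * GRID, :bw * GRID].reshape(GRID, bh, GRID, bw).mean(axis=(1, 3))
    # Quantize average brightness into discrete stimulation levels.
    return np.clip((blocks / 256 * LEVELS).astype(int), 0, LEVELS - 1)

# Stand-in for a real camera frame.
frame = np.random.randint(0, 256, size=(240, 320), dtype=np.uint8)
print(frame_to_stimulation(frame))
```

Each cell of the output grid would drive one electrode, and the user’s brain gradually learns to interpret the resulting spatial patterns.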
Key to the future of HMI design is learning how different user groups navigate the world through senses beyond sight. To make cognitive ergonomics work, we must understand how to leverage the senses so we’re not always solely relying on our visual or verbal interactions.
Radical Inclusion for the Future of HMI
Bringing radical inclusion into HMI design is about gaining a broader lens on technology design at large, so that technology can serve everyone better.
Interestingly, cognitive ergonomics and radical inclusion go hand in hand. We cannot design our interfaces around cognitive ergonomics without bringing radical inclusion into the picture, nor will we arrive at radical inclusion in technology so long as cognitive ergonomics goes unconsidered.
This new mindset is the only way to usher in an era of technology design that amplifies the collective human ability to create a more inclusive future for all.
Image Credit: jamesteohart / Shutterstock.com
#434260 The Most Surprising Tech Breakthroughs ...
Development across the entire information technology landscape certainly didn’t slow down this year. From CRISPR babies to the rapid decline of the crypto markets, a new robot on Mars, and the possible discovery of subatomic particles that could change modern physics as we know it, there was no shortage of headline-grabbing breakthroughs and discoveries.
As 2018 comes to a close, we can pause and reflect on some of the biggest technology breakthroughs and scientific discoveries that occurred this year.
I reached out to a few Singularity University speakers and faculty across the various technology domains we cover, asking what they thought the biggest breakthrough was in their area of expertise. The question posed was:
“What, in your opinion, was the biggest development in your area of focus this year? Or, what was the breakthrough you were most surprised by in 2018?”
I can share that for me, hands down, the most surprising development I came across in 2018 was learning that a publicly traded company that was briefly valued at over $1 billion, and has over 12,000 employees and contractors spread around the world, has no physical office space; the entire business is run and operated from inside an online virtual world. This is Ready Player One stuff happening now.
For the rest, here’s what our experts had to say.
DIGITAL BIOLOGY
Dr. Tiffany Vora | Faculty Director and Vice Chair, Digital Biology and Medicine, Singularity University
“That’s easy: CRISPR babies. I knew it was technically possible, and I’ve spent two years predicting it would happen first in China. I knew it was just a matter of time, but I failed to predict the lack of oversight, the dubious consent process, the paucity of publicly available data, and the targeting of a disease that we already know how to prevent and treat, and that the children were at low risk of contracting anyway.
I’m not convinced that this counts as a technical breakthrough, since one of the girls probably isn’t immune to HIV, but it sure was a surprise.”
For more, read Dr. Vora’s summary of this recent stunning news from China regarding CRISPR-editing human embryos.
QUANTUM COMPUTING
Andrew Fursman | Co-Founder/CEO 1Qbit, Faculty, Quantum Computing, Singularity University
“Two last-minute holiday season surprises stood out in quantum computing, one in funding and one in technology:
First, right before the government shutdown, a priority piece of legislation was passed that will provide $1.2 billion for quantum computing research over the next five years. Second, there’s the rise of ions as a truly viable, scalable quantum computing architecture.”
*Read this Gizmodo profile on an exciting startup in the space to learn more about this type of quantum computing.
ENERGY
Ramez Naam | Chair, Energy and Environmental Systems, Singularity University
“2018 had plenty of energy surprises. In solar, we saw unsubsidized prices in the sunny parts of the world at just over two cents per kWh, or less than half the price of new coal or gas electricity. In the US Southwest and Texas, new solar is also now cheaper than new coal or gas. But even more shockingly, in Germany, one of the least sunny countries on earth (it gets less sunlight than Canada), the average bid for new solar in a 2018 auction was less than 5 US cents per kWh. That’s as cheap as new natural gas in the US, and far cheaper than coal, gas, or any other new electricity source in most of Europe.
In fact, it’s now cheaper in some parts of the world to build new solar or wind than to run existing coal plants. Think tank Carbon Tracker calculates that, over the next 10 years, it will become cheaper to build new wind or solar than to operate coal power in most of the world, including specifically the US, most of Europe, and—most importantly—India and the world’s dominant burner of coal, China.
Here comes the sun.”
GLOBAL GRAND CHALLENGES
Darlene Damm | Vice Chair, Faculty, Global Grand Challenges, Singularity University
“In 2018 we saw a lot of areas in the Global Grand Challenges move forward—advancements in robotic farming technology and cultured meat, low-cost 3D printed housing, more sophisticated types of online education expanding to every corner of the world, and governments creating new policies to deal with the ethics of the digital world. These were the areas we were watching and had predicted there would be change.
What most surprised me was to see young people, especially teenagers, start to harness technology in powerful ways and use it as a platform to make their voices heard and drive meaningful change in the world. In 2018 we saw teenagers speak out on a number of issues related to their well-being and launch digital movements around gun and school safety, global warming, and the environment. We often talk about the harm technology can cause young people, but on the flip side it can be a very powerful tool for youth to start changing the world today, and that is something I hope we see more of in the future.”
BUSINESS STRATEGY
Pascal Finette | Chair, Entrepreneurship and Open Innovation, Singularity University
“Without a doubt, the rapid and massive adoption of AI, specifically deep learning, across industries, sectors, and organizations. What was a curiosity for most companies at the beginning of the year has quickly made its way into the boardroom and leadership meetings, and all the way down to the agendas of the innovation and IT departments. You are hard-pressed to find a mid- to large-sized company today that is not experimenting with or implementing AI in various aspects of its business.
On the slightly snarkier side of answering this question: the very rapid decline in interest in blockchain (and cryptocurrencies). The blockchain party was short and ferocious, and it ended earlier than most would have anticipated, with a huge hangover for some. The good news: with the hot air dissipated, we can now focus on exploring the unique use cases where blockchain does indeed offer real advantages over centralized approaches.”
*Author note: snark is welcome and appreciated
ROBOTICS
Hod Lipson | Director, Creative Machines Lab, Columbia University
“The biggest surprise for me this year in robotics was learning dexterity. For decades, roboticists have been trying to understand and imitate dexterous manipulation. We humans seem to be able to manipulate objects with our fingers with incredible ease—imagine sifting through a bunch of keys in the dark, or tossing and catching a cube. And while there has been much progress in machine perception, dexterous manipulation has remained elusive.
There seemed to be something almost magical in how we humans physically manipulate the world around us. Decades of research in grasping and manipulation, and millions of dollars spent on robot-hand hardware development, have brought us little progress. But in late 2018, OpenAI demonstrated that this hurdle may finally succumb to machine learning as well. Given the equivalent of 200 years’ worth of practice, its machines learned to manipulate a physical object with amazing fluidity. This might be the beginning of a new age for dexterous robotics.”
MACHINE LEARNING
Jeremy Howard | Founding Researcher, fast.ai, Founder/CEO, Enlitic, Faculty Data Science, Singularity University
“The biggest development in machine learning this year has been the development of effective natural language processing (NLP).
The New York Times published an article last month titled “Finally, a Machine That Can Finish Your Sentence,” which argued that NLP neural networks have reached a significant milestone in capability and speed of development. The “finishing your sentence” capability mentioned in the title refers to a type of neural network called a “language model,” which is literally a model that learns how to finish your sentences.
Earlier this year, two systems (one, called ELMo, from the Allen Institute for AI, and the other, called ULMFiT, developed by me and Sebastian Ruder) showed that such a model could be fine-tuned to dramatically improve the state of the art in nearly every NLP task that researchers study. This work was further developed by OpenAI and in turn greatly scaled up by Google, which created a system called BERT that reached human-level performance on some of NLP’s toughest challenges.
Over the next year, expect to see fine-tuned language models used for everything from understanding medical texts to building disruptive social media troll armies.”
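*Author note: for readers new to the term, the toy sketch below shows a language model’s core objective, predicting the next word and repeating that to “finish a sentence.” It uses simple bigram counts over a tiny made-up corpus purely for illustration; systems like ELMo, ULMFiT, and BERT are neural networks trained on vast corpora, and this is not their method.

```python
# Toy "language model": learn which word tends to follow which from a tiny
# corpus, then finish a sentence by repeatedly predicting the next word.
from collections import Counter, defaultdict

corpus = "the machine can finish your sentence . the machine can learn .".split()

# Bigram statistics: how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def finish(prompt: str, steps: int = 4) -> str:
    words = prompt.split()
    for _ in range(steps):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no observed continuation for this word
        words.append(candidates.most_common(1)[0][0])  # greedy next-word pick
    return " ".join(words)

print(finish("the machine"))  # -> "the machine can finish your sentence"
```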
DIGITAL MANUFACTURING
Andre Wegner | Founder/CEO Authentise, Chair, Digital Manufacturing, Singularity University
“Most surprising to me was the extent and speed at which the industry finally opened up.
While previously only a few 3D printing suppliers had APIs and knew what to do with them, 2018 saw nearly every OEM (original equipment manufacturer) enabling data access and, even more surprisingly, shying away from proprietary standards and adopting MTConnect, as stalwarts such as 3D Systems and Stratasys have done. This means that in two to three years, data access to machines will be easy, commonplace, and free. The value will be in what is done with that data.
Another example of this openness is the seemingly endless stream of announcements of integrated workflows: GE’s announcement with most major software players to enable integrated solutions, EOS’s announcement with Siemens, and many more. It’s clear that all actors in the additive ecosystem have taken a step forward in terms of openness. The result is a faster pace of innovation, particularly in the software and data domains that are crucial to enabling the comprehensive digital workflows that drive agile and resilient manufacturing.
I’m more optimistic we’ll achieve that now than I was at the end of 2017.”
SCIENCE AND DISCOVERY
Paul Saffo | Chair, Future Studies, Singularity University, Distinguished Visiting Scholar, Stanford Media-X Research Network
“The most important development in technology this year isn’t a technology, but rather the astonishing science surprises made possible by recent technology innovations. My short list includes the discovery of the “neptmoon,” a Neptune-scale moon circling a Jupiter-scale planet 8,000 light-years from us; the successful deployment of the Mars InSight lander a month ago; and the tantalizing ANITA detection (what could be a new subatomic particle that would in turn blow the standard model wide open). The highest use of invention is to support science discovery, because those discoveries in turn lead us to the future innovations that will improve the state of the world—and fire up our imaginations.”
ROBOTICS
Pablos Holman | Inventor, Hacker, Faculty, Singularity University
“Just five or ten years ago, if you’d asked any of us technologists, “What is harder for robots: eyes, or fingers?” we’d have all said eyes. Robots have extraordinary eyes now, but even in a surgical robot, the fingers are numb and don’t feel anything. Stanford robotics researchers have invented fingertips that can feel, and this will be a linchpin that allows robots to go everywhere they haven’t been yet.”
BLOCKCHAIN
Nathana Sharma | Blockchain, Policy, Law, and Ethics, Faculty, Singularity University
“2017 was the year of peak blockchain hype. 2018 has been a year of resetting expectations and of technological development, even as the broader cryptocurrency markets have faced a winter. The focus now is on seeing the rise of adoption and of applications that people actually want and need to use. An incredible piece of news from December 2018 is that Facebook is developing a cryptocurrency for users to make payments through WhatsApp. That’s surprisingly fast mainstream adoption of this new technology, and it indicates how powerful it is.”
ARTIFICIAL INTELLIGENCE
Neil Jacobstein | Chair, Artificial Intelligence and Robotics, Singularity University
“I think one of the most visible improvements in AI was illustrated by the Boston Dynamics Parkour video. This was not due to an improvement in brushless motors, accelerometers, or gears. It was due to improvements in AI algorithms and training data. To be fair, the video released was cherry-picked from numerous attempts, many of which ended with a crash. However, the fact that it could be accomplished at all in 2018 was a real win for both AI and robotics.”
NEUROSCIENCE
Divya Chander | Chair, Neuroscience, Singularity University
“2018 ushered in a new era of exponential trends in non-invasive brain modulation. Changing behavior or restoring function takes on a new meaning when invasive interfaces are no longer needed to manipulate neural circuitry. The end of 2018 saw two amazing announcements: the ability to grow neural organoids (mini-brains) in a dish from neural stem cells that began expressing electrical activity, mimicking the brain function of premature babies, and the first (known) application of CRISPR to genetically alter two embryos grown through IVF. Although this was ostensibly to provide genetic resilience against HIV infection, imagine what would happen if we started tinkering with neural circuitry and intelligence.”
Image Credit: Yurchanka Siarhei / Shutterstock.com
#434235 The Milestones of Human Progress We ...
When you look back at 2018, do you see a good or a bad year? Chances are, your perception of the year involves fixating on all the global and personal challenges it brought. In fact, every year, we tend to look back at the previous year as “one of the most difficult” and hope that the following year is more exciting and fruitful.
But in the grander context of human history, 2018 was an extraordinarily positive year. In fact, every year has been getting progressively better.
Before we dive into some of the highlights of human progress from 2018, let’s make one thing clear. There is no doubt that there are many overwhelming global challenges facing our species. From climate change to growing wealth inequality, we are far from living in a utopia.
Yet it’s important to recognize that both our news outlets and audiences have been disproportionately fixated on negative news. This emphasis on bad news is detrimental to our sense of empowerment as a species.
So let’s take a break from all the disproportionate negativity and have a look back on how humanity pushed boundaries in 2018.
On Track to Becoming an Interplanetary Species
We often forget how far we’ve come since the very first humans left the African savanna, populated the entire planet, and developed powerful technological capabilities. Our desire to explore the unknown has shaped the course of human evolution and will continue to do so.
This year, we continued to push the boundaries of space exploration. As depicted in the enchanting short film Wanderers, humanity’s destiny is the stars. We are born to be wanderers of the cosmos and the everlasting unknown.
SpaceX had 21 successful launches in 2018 and closed the year with a successful GPS launch. The latest test flight by Virgin Galactic was also an incredible milestone, as SpaceShipTwo was welcomed into space. Richard Branson and his team expect that space tourism will be a reality within the next 18 months.
Our understanding of the cosmos is also moving forward with continuous breakthroughs in astrophysics and astronomy. One notable example is the Mars InSight mission, which uses cutting-edge instruments to study Mars’ interior structure and has even given us the first recordings of sound on Mars.
Understanding and Tackling Disease
Thanks to advancements in science and medicine, we are currently living longer, healthier, and wealthier lives than at any other point in human history. In fact, for most of human history, life expectancy at birth was around 30. Today it is more than 70 worldwide, and in the developed parts of the world, more than 80.
Brilliant researchers around the world are pushing for even better health outcomes. This year, we saw promising treatments emerge against Alzheimer’s disease, rheumatoid arthritis, multiple sclerosis, and even the flu.
The deadliest disease of them all, cancer, is also being tackled. According to the American Association for Cancer Research, 22 revolutionary cancer treatments were approved in the last year, and the death rate in adults is also in decline. Advancements in immunotherapy, genetic engineering, stem cells, and nanotechnology are all powerful resources for tackling killer diseases.
Breakthrough Mental Health Therapy
While cleaner energy, access to education, and higher employment rates can improve quality of life, they do not guarantee happiness and inner peace. According to the World Economic Forum, mental health disorders affect one in four people globally, and in many places they are significantly under-reported. More people are beginning to realize that our mental health is just as important as our physical health, and that we ought to take care of our minds just as much as our bodies.
We are seeing the rise of applications that put mental well-being at their center. Breakthrough advancements in genetics are allowing us to better understand the genetic makeup of disorders like clinical depression and schizophrenia, paving the way for personalized medical treatment. We are also seeing the rise of increasingly effective therapeutic treatments for anxiety.
This year saw many milestones for a whole new revolutionary area in mental health: psychedelic therapy. Earlier this summer, the FDA granted breakthrough therapy designation to MDMA for the treatment of PTSD, after several phases of successful trials. Similar research has found that psilocybin (the psychoactive compound in “magic mushrooms”) combined with therapy is far more effective than traditional forms of treatment for depression and anxiety.
Moral and Social Progress
Innovation is often associated with economic and technological progress. However, we also need leaps of progress in our morality, values, and policies. Throughout the 21st century, we’ve made massive strides in rights for women and children, civil rights, LGBT rights, animal rights, and beyond. However, with rising nationalism and xenophobia in many parts of the developed world, there is significant work to be done on this front.
All hope is not lost, as we saw many noteworthy milestones this year. In January 2018, Iceland’s equal pay law took effect, requiring companies to prove that they pay men and women equally for the same work. On September 6, the Indian Supreme Court decriminalized homosexuality, marking a historic moment. Earlier in December, the European Commission released a draft of ethics guidelines for trustworthy artificial intelligence. These are just a few examples of positive progress in social justice, ethics, and policy.
We are also seeing a global rise in social impact entrepreneurship. Emerging startups are no longer valued simply based on their profits and revenue, but also on the level of positive impact they are having on the world at large. The world’s leading innovators are not asking themselves “How can I become rich?” but rather “How can I solve this global challenge?”
Intelligently Optimistic for 2019
It’s becoming more and more clear that we are living in the most exciting time in human history. What’s more, we mustn’t be afraid to be optimistic about 2019.
An optimistic mindset can be grounded in rationality and evidence. Intelligent optimism is all about being excited about the future in an informed and rational way. This mindset is critical if we are to get everyone excited about the future, both by highlighting the rapid progress we have made and by recognizing the tremendous potential humans have to find solutions to our problems.
In his latest TED talk, Steven Pinker points out, “Progress does not mean that everything becomes better for everyone everywhere all the time. That would be a miracle, and progress is not a miracle but problem-solving. Problems are inevitable and solutions create new problems which have to be solved in their turn.”
Let us not forget that in cosmic time scales, our entire species’ lifetime, including all of human history, is the equivalent of the blink of an eye. The probability of us existing both as an intelligent species and as individuals is so astoundingly low that it’s practically non-existent. We are the products of 14 billion years of cosmic evolution and extraordinarily good fortune. Let’s recognize and leverage this wondrous opportunity, and pave an exciting way forward.
Image Credit: Virgin Galactic / Virgin Galactic 2018.
#433901 The SpiNNaker Supercomputer, Modeled ...
We’ve long used the brain as inspiration for computers, but the SpiNNaker supercomputer, switched on this month, is probably the closest we’ve come to recreating it in silicon. Now scientists hope to use the supercomputer to model the very thing that inspired its design.
The brain is the most complex machine in the known universe, but that complexity comes primarily from its architecture rather than the individual components that make it up. Its highly interconnected structure means that relatively simple messages exchanged between billions of individual neurons add up to carry out highly complex computations.
That’s the paradigm that inspired the “Spiking Neural Network Architecture” (SpiNNaker) supercomputer at the University of Manchester in the UK. The project is the brainchild of Steve Furber, the designer of the original ARM processor. After a decade of development, a million-core version of the machine, which will eventually be able to simulate up to a billion neurons, was switched on earlier this month.
The idea of splitting computation into very small chunks and spreading them over many processors is already the leading approach to supercomputing. But even the most parallel systems require a lot of communication, and messages may have to pack in a lot of information, such as the task that needs to be completed or the data that needs to be processed.
In contrast, messages in the brain consist of simple electrochemical impulses, or spikes, passed between neurons, with information encoded primarily in the timing or rate of those spikes (which is more important is a topic of debate among neuroscientists). Each neuron is connected to thousands of others via synapses, and complex computation relies on how spikes cascade through these highly-connected networks.
The SpiNNaker machine attempts to replicate this using a model called Address Event Representation. Each of the million cores can simulate roughly a million synapses, so depending on the model, each core can host 1,000 neurons with 1,000 connections each, or 100 neurons with 10,000 connections each. Information is encoded in the timing of spikes and the identity of the neuron sending them. When a neuron is activated, it broadcasts a tiny packet of data that contains its address, and spike timing is conveyed implicitly.
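As a rough illustration of this event-driven scheme (a simplified sketch, not SpiNNaker’s actual software), the code below models leaky integrate-and-fire neurons in which a spike is broadcast as nothing more than the firing neuron’s address, with timing implicit in when the event occurs. The network size, weights, and constants are arbitrary illustrative choices.

```python
# Simplified Address Event Representation sketch: an "event" is just the
# address of the neuron that fired; timing is implicit in when it is sent.
import random

NUM_NEURONS = 100   # illustrative network size
THRESHOLD = 1.0     # firing threshold
LEAK = 0.9          # per-step decay of membrane potential

# Each neuron projects to a handful of random targets with random weights,
# standing in for SpiNNaker's routing tables.
synapses = {
    src: {random.randrange(NUM_NEURONS): random.uniform(0.1, 0.5) for _ in range(10)}
    for src in range(NUM_NEURONS)
}

potential = [0.0] * NUM_NEURONS
events = [random.randrange(NUM_NEURONS) for _ in range(5)]  # seed spikes

for t in range(20):  # discrete time steps
    next_events = []
    # Deliver each broadcast address to that neuron's listed targets.
    for addr in events:
        for target, weight in synapses[addr].items():
            potential[target] += weight
    # Update every neuron: leak, then fire and reset if over threshold.
    for n in range(NUM_NEURONS):
        potential[n] *= LEAK
        if potential[n] >= THRESHOLD:
            potential[n] = 0.0
            next_events.append(n)  # broadcast only this neuron's address
    events = next_events
    print(f"t={t}: {len(events)} spikes")
```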
By modeling their machine on the architecture of the brain, the researchers hope to be able to simulate more biological neurons in real time than any other machine on the planet. The project is funded by the European Human Brain Project, a ten-year science mega-project aimed at bringing together neuroscientists and computer scientists to understand the brain, and researchers will be able to apply for time on the machine to run their simulations.
Importantly, it’s possible to implement various different neuronal models on the machine. The operation of neurons involves a variety of complex biological processes, and it’s still unclear whether this complexity is an artefact of evolution or central to the brain’s ability to process information. The ability to simulate up to a billion simple neurons or millions of more complex ones on the same machine should help to slowly tease out the answer.
Even at a billion neurons, that still only represents about one percent of the human brain, so the machine will be limited to investigating isolated networks of neurons. But the previous 500,000-core machine has already been used to run useful simulations of the basal ganglia—an area affected in Parkinson’s disease—and an outer layer of the brain that processes sensory information.
The full-scale supercomputer will make it possible to study even larger networks previously out of reach, which could lead to breakthroughs in our understanding of both the healthy and unhealthy functioning of the brain.
And while neurological simulation is the main goal for the machine, it could also provide a useful research tool for roboticists. Previous research has already shown a small board of SpiNNaker chips can be used to control a simple wheeled robot, but Furber thinks the SpiNNaker supercomputer could also be used to run large-scale networks that can process sensory input and generate motor output in real time and at low power.
That low power operation is of particular promise for robotics. The brain is dramatically more power-efficient than conventional supercomputers, and by borrowing from its principles SpiNNaker has managed to capture some of that efficiency. That could be important for running mobile robotic platforms that need to carry their own juice around.
This ability to run complex neural networks at low power has been one of the main commercial drivers for so-called neuromorphic computing devices that are physically modeled on the brain, such as IBM’s TrueNorth chip and Intel’s Loihi. The hope is that complex artificial intelligence applications normally run in massive data centers could be run on edge devices like smartphones, cars, and robots.
But these devices, including SpiNNaker, operate very differently from the leading AI approaches, and it’s not clear how easy it would be to translate models between the two. The need to adopt an entirely new programming paradigm is likely to limit widespread adoption, and the lack of commercial traction for the aforementioned devices seems to bear that out.
At the same time, though, this new paradigm could potentially lead to dramatic breakthroughs in massively parallel computing. SpiNNaker overturns many of the foundational principles of how supercomputers work, which makes it much more flexible and error-tolerant.
For now, the machine is likely to be firmly focused on accelerating our understanding of how the brain works. But its designers also hope those findings could in turn point the way to more efficient and powerful approaches to computing.
Image Credit: Adrian Grosu / Shutterstock.com