#435474 Watch China’s New Hybrid AI Chip Power ...
When I lived in Beijing back in the 90s, a man walking his bike was nothing to look at. But today, I did a serious double-take at a video of a bike walking his man.
No kidding.
The bike itself looks overloaded but otherwise completely normal. Underneath its simplicity, however, is a hybrid computer chip that combines brain-inspired circuits with machine learning processes into a computing behemoth. Thanks to its smart chip, the bike self-balances as it gingerly rolls down a paved track before smoothly gaining speed into a jogging pace while navigating dexterously around obstacles. It can even respond to simple voice commands such as “speed up,” “left,” or “straight.”
Far from a circus trick, the bike is a real-world demo of the AI community’s latest attempt at fashioning specialized hardware to keep up with the challenges of machine learning algorithms. The Tianjic (天机*) chip isn’t just your standard neuromorphic chip. Rather, it has the architecture of a brain-like chip, but can also run deep learning algorithms—a match made in heaven that basically mashes together neuro-inspired hardware and software.
The study shows that China is readily nipping at the heels of Google, Facebook, NVIDIA, and other tech behemoths investing in developing new AI chip designs—hell, with billions in government investment it may have already had a head start. A sweeping AI plan from 2017 looks to catch up with the US on AI technology and application by 2020. By 2030, China’s aiming to be the global leader—and a champion for building general AI that matches humans in intellectual competence.
The country’s ambition is reflected in the team’s parting words.
“Our study is expected to stimulate AGI [artificial general intelligence] development by paving the way to more generalized hardware platforms,” said the authors, led by Dr. Luping Shi at Tsinghua University.
A Hardware Conundrum
Shi’s autonomous bike isn’t the first robotic two-wheeler. Back in 2015, the famed research nonprofit SRI International in Menlo Park, California teamed up with Yamaha to engineer MOTOBOT, a humanoid robot capable of driving a motorcycle. Powered by state-of-the-art robotic hardware and machine learning, MOTOBOT eventually raced MotoGP world champion Valentino Rossi in a nail-biting match-off.
However, the technological core of MOTOBOT and Shi’s bike vastly differ, and that difference reflects two pathways towards more powerful AI. One, exemplified by MOTOBOT, is software—developing brain-like algorithms with increasingly efficient architecture, efficacy, and speed. That sounds great, but deep neural nets demand so many computational resources that general-purpose chips can’t keep up.
As Shi told China Science Daily: “CPUs and other chips are driven by miniaturization technologies based on physics. Transistors might shrink to nanoscale-level in 10, 20 years. But what then?” As more transistors are squeezed onto these chips, efficient cooling becomes a limiting factor in computational speed. Tax them too much, and they melt.
For AI processes to continue, we need better hardware. An increasingly popular idea is to build neuromorphic chips, which resemble the brain from the ground up. IBM’s TrueNorth, for example, contains a massively parallel architecture nothing like the traditional Von Neumann structure of classic CPUs and GPUs. Similar to biological brains, TrueNorth’s memory is stored within “synapses” between physical “neurons” etched onto the chip, which dramatically cuts down on energy consumption.
But even these chips are limited. Because computation is tethered to hardware architecture, most chips resemble just one specific type of brain-inspired network called spiking neural networks (SNNs). Without doubt, neuromorphic chips are highly efficient setups with dynamics similar to biological networks. But they don’t play nicely with deep learning and other software-based AI.
Brain-AI Hybrid Core
Shi’s new Tianjic chip brought these two incompatible approaches together onto a single piece of brainy hardware.
The first step was to bridge the deep learning and SNN divide. The two have very different computation philosophies and memory organizations, the team said. The biggest difference, however, is that artificial neural networks transform multidimensional data (image pixels, for example) into streams of continuous, multi-bit values. In contrast, neurons in SNNs activate using something called “binary spikes” that code for specific activation events in time.
Confused? Yeah, it’s hard to wrap my head around it too. That’s because SNNs act very similarly to our neural networks and nothing like computers. A particular neuron needs to generate an electrical signal (a “spike”) large enough to transfer down to the next one; little blips in signals don’t count. The way they transmit data also heavily depends on how they’re connected, or the network topology. The takeaway: SNNs work pretty differently than deep learning.
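To make the contrast concrete, here is a minimal, illustrative Python sketch (not the Tianjic design, and with all parameters invented) comparing a deep-learning-style continuous activation with a toy leaky integrate-and-fire spiking neuron. The spiking neuron only emits a binary event once enough input has accumulated to cross its threshold; anything below that is silence.

```python
import numpy as np

def relu_activation(inputs, weights):
    """Deep-learning style: a continuous, multi-bit value on every step."""
    return max(0.0, float(np.dot(inputs, weights)))

class LeakyIntegrateAndFireNeuron:
    """SNN style: inputs accumulate over time; the output is a binary spike."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # membrane potential needed to fire
        self.leak = leak            # fraction of potential kept each step
        self.potential = 0.0

    def step(self, inputs, weights):
        # Integrate the weighted input while letting the potential leak away.
        self.potential = self.leak * self.potential + float(np.dot(inputs, weights))
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1                # a spike: a single binary event in time
        return 0                    # small blips below threshold don't count

weights = np.array([0.4, 0.3])
neuron = LeakyIntegrateAndFireNeuron()
for t, inputs in enumerate([np.array([0.5, 0.2]), np.array([0.6, 0.9]), np.array([0.1, 0.1])]):
    print(t, relu_activation(inputs, weights), neuron.step(inputs, weights))
```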
Shi’s team first recreated this firing quirk in the language of computers—0s and 1s—so that the coding mechanism would become compatible with deep learning algorithms. They then carefully aligned the step-by-step building blocks of the two models, which allowed them to tease out similarities into a common ground to further build on. “On the basis of this unified abstraction, we built a cross-paradigm neuron scheme,” they said.
In general, the design allowed both computational approaches to share the synapses, where neurons connect and store data, and the dendrites, the branched extensions that gather and integrate incoming signals. In contrast, the neuron body, where signals are combined into an output, was left reconfigurable for each type of computation, as were the input channels that route signals into each core. Each building block was combined into a single unified functional core (FCore), which acts like a deep learning/SNN converter depending on its specific setup. Translation: the chip can do both types of previously incompatible computation.
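As a loose sketch of that idea (nothing here is taken from the chip’s actual circuitry), you can picture a single unit whose synaptic weights and input integration are shared, while a mode switch decides whether it emits a continuous value or a binary spike:

```python
import numpy as np

class HybridNeuronCore:
    """Toy reconfigurable unit: shared synapses, switchable output mode."""
    def __init__(self, weights, mode="ann", threshold=1.0):
        self.weights = np.asarray(weights)  # shared "synaptic" storage
        self.mode = mode                    # "ann" or "snn": the reconfigurable part
        self.threshold = threshold
        self.potential = 0.0

    def forward(self, inputs):
        drive = float(np.dot(inputs, self.weights))  # shared input integration
        if self.mode == "ann":
            return max(0.0, drive)          # continuous activation value
        self.potential += drive             # accumulate toward a spike
        if self.potential >= self.threshold:
            self.potential = 0.0
            return 1                        # binary spike
        return 0

core = HybridNeuronCore([0.5, 0.5], mode="ann")
print(core.forward(np.array([0.8, 0.4])))   # continuous output
core.mode = "snn"
print(core.forward(np.array([0.8, 0.4])))   # spike (1) or silence (0)
```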
The Chip
Using nanoscale fabrication, the team arranged 156 FCores, containing roughly 40,000 neurons and 10 million synapses, onto a chip less than a fifth of an inch in length and width. Initial tests showcased the chip’s versatility, in that it can run both SNNs and deep learning algorithms such as the popular convolutional neural networks (CNNs) often used in machine vision.
Compared to IBM TrueNorth, the density of Tianjic’s cores increased by 20 percent, speeding up performance ten times and increasing bandwidth at least 100-fold, the team said. When pitted against GPUs, the current hardware darling of machine learning, the chip increased processing throughput up to 100 times, while using just a sliver (1/10,000) of energy.
Although these stats are impressive, a real-life demo makes the case even better. Here’s where the authors gave their Tianjic brain a body. The team combined one chip with multiple specialized networks to process vision, balance, voice commands, and decision-making in real time. Object detection and target tracking, for example, relied on a deep-learning CNN, whereas voice commands and balance data were recognized using an SNN. The inputs were then integrated inside a neural state machine, which churned out decisions to downstream output modules; for example, controlling the handlebar to turn left.
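Purely as a hypothetical illustration of that pipeline (every function and data structure below is invented for this sketch, not taken from the paper), the demo’s control loop might be stubbed out like this, with a deep-learning detector, spiking voice and balance modules, and a decision stage feeding the outputs:

```python
from collections import namedtuple

Decision = namedtuple("Decision", ["steering_angle", "speed"])

# Stub modules standing in for the real networks.
def cnn_detect_and_track(frame):        # deep-learning vision module
    return {"offset": frame["target_offset"]}

def snn_recognize_voice(audio):         # spiking voice module
    return audio                        # e.g. "left", "straight", "speed up"

def snn_estimate_balance(imu):          # spiking balance module
    return imu["tilt"]

def neural_state_machine(target, command, tilt):
    # Integrate all inputs and emit one steering/speed decision.
    angle = -0.5 * tilt + 0.2 * target["offset"]
    if command == "left":
        angle -= 10.0
    speed = 2.0 if command == "speed up" else 1.0
    return Decision(steering_angle=angle, speed=speed)

def control_step(frame, audio, imu):
    return neural_state_machine(
        cnn_detect_and_track(frame),
        snn_recognize_voice(audio),
        snn_estimate_balance(imu),
    )

print(control_step({"target_offset": 3.0}, "left", {"tilt": 1.5}))
```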
Thanks to the chip’s brain-like architecture and bilingual ability, Tianjic “allowed all of the neural network models to operate in parallel and realized seamless communication across the models,” the team said. The result is an autonomous bike that rolls after its human, balances across speed bumps, avoids crashing into roadblocks, and answers to voice commands.
General AI?
“It’s a wonderful demonstration and quite impressive,” said the editorial team at Nature, which published the study on its cover last week.
However, they cautioned that when Tianjic goes toe-to-toe with a state-of-the-art chip designed for a single problem, on that particular problem Tianjic falls behind. But building these jack-of-all-trades hybrid chips is definitely worth the effort. Compared to today’s limited AI, what people really want is artificial general intelligence, which will require new architectures that aren’t designed to solve one particular problem.
Until people start to explore, innovate, and play around with different designs, it’s not clear how we can further progress in the pursuit of general AI. A self-driving bike might not be much to look at, but its hybrid brain is a pretty neat place to start.
*The name, in Chinese, means “heavenly machine,” “unknowable mystery of nature,” or “confidentiality.” Go figure.
Image Credit: Alexander Ryabintsev / Shutterstock.com
#435046 The Challenge of Abundance: Boredom, ...
As technology continues to progress, the possibility of an abundant future seems more likely. Artificial intelligence is expected to drive down the cost of labor, infrastructure, and transport. Alternative energy systems are reducing the cost of a wide variety of goods. Poverty rates are falling around the world as more people are able to make a living, and resources that were once inaccessible to millions are becoming widely available.
But such a life presents fuel for the most common complaint against abundance: if robots take all the jobs, basic income provides us livable welfare for doing nothing, and healthcare is a guarantee free of charge, then what is the point of our lives? What would motivate us to work and excel if there are no real risks or rewards? If everything is simply given to us, how would we feel like we’ve ever earned anything?
Time has proven that humans inherently yearn to overcome challenges—in fact, this very desire likely exists as the root of most technological innovation. And the idea that struggling makes us stronger isn’t just anecdotal, it’s scientifically validated.
For instance, kids who use antibacterial soaps and sanitizers too often tend to develop weaker immune systems, causing them to get sick more frequently and more severely. People who work out deliberately stress their muscles, creating tiny tears that, after a few days of healing, leave those muscles stronger. And when patients visit a psychologist to handle a fear that is derailing their lives, one of the most common treatments is exposure therapy: a gradual increase in exposure to the source of the fear, so that the patient grows stronger and braver each time, able to take on an incrementally more potent manifestation of it.
Different Kinds of Struggle
It’s not hard to understand why people might fear an abundant future as a terribly mundane one. But there is one crucial mistake made in this assumption, and it was well summarized by Indian mystic and author Sadhguru, who said during a recent talk at Google:
Stomach empty, only one problem. Stomach full—one hundred problems; because what we refer to as human really begins only after survival is taken care of.
This idea is backed up by Maslow’s hierarchy of needs, which was first presented in his 1943 paper “A Theory of Human Motivation.” Maslow shows the steps required to build to higher and higher levels of the human experience. Not surprisingly, the first two levels deal with physiological needs and the need for safety—in other words, with the body. You need to have food, water, and sleep, or you die. After that, you need to be protected from threats, from the elements, from dangerous people, and from disease and pain.
Maslow’s Hierarchy of Needs. Photo by Wikimedia User:Factoryjoe / CC BY-SA 3.0
The beauty of these first two levels is that they’re clear-cut problems with clear-cut solutions: if you’re hungry, then you eat; if you’re thirsty, then you drink; if you’re tired, then you sleep.
But what about the next tiers of the hierarchy? What of love and belonging, of self-esteem and self-actualization? If we’re lonely, can we just summon up an authentic friend or lover? If we feel neglected by society, can we demand it validate us? If we feel discouraged and disappointed in ourselves, can we simply dial up some confidence and self-esteem?
Of course not, and that’s because these psychological needs are nebulous; they don’t contain clear problems with clear solutions. They involve the external world and other people, and are complicated by the infinite flavors of nuance and compromise that are required to navigate human relationships and personal meaning.
These psychological difficulties are where we grow our personalities, outlooks, and beliefs. The truly defining characteristics of a person are dictated not by the physical situations they were forced into—like birth, socioeconomic class, or physical ailment—but instead by the things they choose. So a future of abundance helps to free us from the physical limitations so that we can truly commit to a life of purpose and meaning, rather than just feel like survival is our purpose.
The Greatest Challenge
And that’s the plot twist. This challenge to come to grips with our own individuality and freedom could actually be the greatest challenge our species has ever faced. Can you imagine waking up every day with infinite possibility? Every choice you make says no to the rest of reality, and so every decision carries with it truly life-defining purpose and meaning. That sounds overwhelming. And that’s probably because in our current socio-economic systems, it is.
Studies have shown that people in wealthier nations tend to experience more anxiety and depression. Ron Kessler, professor of health care policy at Harvard and World Health Organization (WHO) researcher, summarized his findings of global mental health by saying, “When you’re literally trying to survive, who has time for depression? Americans, on the other hand, many of whom lead relatively comfortable lives, blow other nations away in the depression factor, leading some to suggest that depression is a ‘luxury disorder.’”
This might explain why America ranks among the most depressed and anxious countries on the planet. We surpassed our survival needs, and instead became depressed because our jobs and relationships don’t fulfill our expectations for the next three levels of Maslow’s hierarchy (belonging, esteem, and self-actualization).
But a future of abundance would mean we’d have to deal with these levels. This is the challenge for the future; this is what keeps things from being mundane.
As a society, we would be forced to come to grips with our emotional intelligence, to reckon with philosophy rather than simply contemplate it. Nearly every person you meet will be passionately on their own customized life journey, not following a routine simply because of financial limitations. Such a world seems far more vibrant and interesting than one where most wander sleep-deprived and numb while attempting to survive the rat race.
We can already see the forceful hand of this paradigm shift as self-driving cars become ubiquitous. For example, consider the famous psychological and philosophical “trolley problem.” In this thought experiment, a person sees a trolley car heading towards five people on the train tracks; they see a lever that will allow them to switch the trolley car to a track that instead only has one person on it. Do you switch the lever and have a hand in killing one person, or do you let fate continue and kill five people instead?
For the longest time, this was just an interesting quandary to consider. But now, massive corporations have to have an answer, so they can program their self-driving cars with the ability to choose between hitting a kid who runs into the road or swerving into an oncoming car carrying a family of five. When companies need philosophers to make business decisions, it’s a good sign of what’s to come.
Luckily, it’s possible this forceful reckoning with philosophy and our own consciousness may be exactly what humanity needs. Perhaps our great failure as a species has been a result of advanced cognition still trapped in the first two levels of Maslow’s hierarchy due to a long history of scarcity.
As suggested in the opening scenes of 2001: A Space Odyssey, our ape-like proclivity for violence has long stayed the same while the technology we fight with and live amongst has progressed. So while well-off Americans may have comfortable lives, they still know they live in a system where there is no safety net, where a single tragic failure could still mean hunger and homelessness. And because of this, that evolutionarily hard-wired neurotic part of our brain that fears for our survival has never been able to fully relax, and so the anxiety and depression that come with too much freedom but not enough security stay ever present.
Not only might this shift in consciousness help liberate humanity, but it may be vital if we’re to survive our future creations as well. Whatever values we hold dear as a species are the ones we will imbue into the sentient robots we create. If machine learning is going to take its guidance from humanity, we need to level up humanity’s emotional maturity.
While the physical struggles of the future may indeed fall by the wayside amongst abundance, it’s unlikely to become a mundane world; instead, it will become a vibrant culture where each individual is striving against the most important struggle that affects all of us: the challenge to find inner peace, to find fulfillment, to build meaningful relationships, and ultimately, the challenge to find ourselves.
Image Credit: goffkein.pro / Shutterstock.com
#434559 Can AI Tell the Difference Between a ...
Scarcely a day goes by without another headline about neural networks: some new task that deep learning algorithms can excel at, approaching or even surpassing human competence. As the application of this approach to computer vision has continued to improve, with algorithms capable of specialized recognition tasks like those found in medicine, the software is getting closer to widespread commercial use—for example, in self-driving cars. Our ability to recognize patterns is a huge part of human intelligence: if this can be done faster by machines, the consequences will be profound.
Yet, as ever with algorithms, there are deep concerns about their reliability, especially when we don’t know precisely how they work. State-of-the-art neural networks will confidently (and incorrectly) classify images that look like television static or abstract art as real-world objects like school buses or armadillos. Specific algorithms can also be targeted by “adversarial examples,” where adding an imperceptible amount of noise to an image causes an algorithm to completely mistake one object for another. Machine learning experts enjoy constructing these images to trick advanced software, but if a self-driving car can be fooled by a few stickers, it might not be so fun for the passengers.
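The article doesn’t name a method, but the classic way to build such perturbations is the fast gradient sign method (FGSM). Here is a minimal PyTorch sketch, assuming you already have a pretrained classifier, a preprocessed image batch scaled to [0, 1], and its true labels:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.007):
    """Nudge each pixel slightly in the direction that most increases the loss."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adversarial = images + epsilon * images.grad.sign()
    # Keep pixel values valid; assumes inputs were scaled to [0, 1].
    return adversarial.clamp(0, 1).detach()

# Usage sketch (model, images, and labels are assumed to exist already):
# adv = fgsm_perturb(model, images, labels)
# print(model(images).argmax(dim=1))  # original predictions
# print(model(adv).argmax(dim=1))     # often flips to a different class
```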
These difficulties are hard to smooth out in large part because we don’t have a great intuition for how these neural networks “see” and “recognize” objects. The main insight we can get from analyzing a trained network itself is a series of statistical weights that associate certain groups of points with certain objects, and these can be very difficult to interpret.
Now, new research from UCLA, published in the journal PLOS Computational Biology, is testing neural networks to understand the limits of their vision and the differences between computer vision and human vision. Nicholas Baker, Hongjing Lu, and Philip J. Kellman of UCLA, alongside Gennady Erlikhman of the University of Nevada, tested a deep convolutional neural network called VGG-19. This is state-of-the-art technology that is already outperforming humans on standardized tests like the ImageNet Large Scale Visual Recognition Challenge.
They found that, while humans tend to classify objects based on their overall (global) shape, deep neural networks are far more sensitive to the textures of objects, including local color gradients and the distribution of points on the object. This result helps explain why neural networks in image recognition make mistakes that no human ever would—and could allow for better designs in the future.
In the first experiment, a neural network was trained to sort images into one of 1,000 different categories. It was then presented with silhouettes of these images: all of the local information was lost, and only the outline of the object remained. Ordinarily, the trained neural net was capable of recognizing these objects, assigning more than 90% probability to the correct classification. Presented with silhouettes, this dropped to 10%. While human observers could nearly always produce correct shape labels, the neural networks appeared almost insensitive to the overall shape of the images. On average, the correct object was ranked as the 209th most likely solution by the neural network, even though the overall shapes were an exact match.
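For readers who want to poke at this themselves, here is a rough sketch of the flavor of the test (not the study’s exact stimuli or procedure) using the pretrained VGG-19 available in recent versions of torchvision; the image file names are placeholders:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load a pretrained VGG-19 classifier (ImageNet, 1,000 classes).
model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def top_class_probability(path):
    """Return the model's most likely class index and its probability."""
    image = Image.open(path).convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    probs = torch.softmax(logits, dim=1)
    prob, idx = probs.max(dim=1)
    return idx.item(), prob.item()

# Compare a normal photo with a binary silhouette of the same object.
print(top_class_probability("bear_photo.jpg"))
print(top_class_probability("bear_silhouette.png"))
```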
A particularly striking example arose when they tried to get the neural networks to classify glass figurines of objects they could already recognize. While you or I might find it easy to identify a glass model of an otter or a polar bear, the neural network classified them as “oxygen mask” and “can opener” respectively. By presenting glass figurines, where the texture information that neural networks relied on for classifying objects is lost, the neural network was unable to recognize the objects by shape alone. The neural network was similarly hopeless at classifying objects based on drawings of their outline.
If you got one of these right, you’re better than state-of-the-art image recognition software. Image Credit: Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip J. Kellman. “Deep convolutional networks do not classify based on global object shape.” PLOS Computational Biology. 12/7/18. / CC BY 4.0
When the neural network was explicitly trained to recognize object silhouettes—given no information in the training data aside from the object outlines—the researchers found that slight distortions or “ripples” to the contour of the image were again enough to fool the AI, while humans paid them no mind.
The fact that neural networks seem to be insensitive to the overall shape of an object—relying instead on statistical similarities between local distributions of points—suggests a further experiment. What if you scrambled the images so that the overall shape was lost but local features were preserved? It turns out that the neural networks are far better and faster at recognizing scrambled versions of objects than outlines, even when humans struggle. Students could classify only 37% of the scrambled objects, while the neural network succeeded 83% of the time.
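One simple way to build such stimuli, sketched below purely for illustration (the paper’s exact scrambling procedure may differ), is to cut an image into tiles and shuffle them, which destroys global shape while keeping local texture intact:

```python
import numpy as np
from PIL import Image

def scramble_patches(path, patch=32, seed=0):
    """Cut an image into patch x patch tiles and shuffle them.
    Global shape is destroyed; local texture within each tile survives."""
    img = np.array(Image.open(path).convert("RGB"))
    h = (img.shape[0] // patch) * patch
    w = (img.shape[1] // patch) * patch
    img = img[:h, :w]
    cols = w // patch
    tiles = np.array([img[r:r + patch, c:c + patch]
                      for r in range(0, h, patch)
                      for c in range(0, w, patch)])
    np.random.default_rng(seed).shuffle(tiles)  # shuffle along the first axis
    rows = [np.hstack(tiles[i:i + cols]) for i in range(0, len(tiles), cols)]
    return Image.fromarray(np.vstack(rows))

# scramble_patches("bear_photo.jpg").save("bear_scrambled.png")  # placeholder file names
```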
Humans vastly outperform machines at classifying object (a) as a bear, while the machine learning algorithm has few problems classifying the bear in figure (b). Image Credit: Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip J. Kellman. “Deep convolutional networks do not classify based on global object shape.” PLOS Computational Biology. 12/7/18. / CC BY 4.0
“This study shows these systems get the right answer in the images they were trained on without considering shape,” Kellman said. “For humans, overall shape is primary for object recognition, and identifying images by overall shape doesn’t seem to be in these deep learning systems at all.”
Naively, one might expect that—as the many layers of a neural network are modeled on connections between neurons in the brain and resemble the visual cortex specifically—the way computer vision operates must necessarily be similar to human vision. But this kind of research shows that, while the fundamental architecture might resemble that of the human brain, the resulting “mind” operates very differently.
Researchers can, increasingly, observe how the “neurons” in neural networks light up when exposed to stimuli and compare it to how biological systems respond to the same stimuli. Perhaps someday it might be possible to use these comparisons to understand how neural networks are “thinking” and how those responses differ from humans.
But, as yet, it takes something more like experimental psychology to probe how neural networks and artificial intelligence algorithms perceive the world. The tests employed here are closer to how scientists might try to understand the senses of an animal or the developing brain of a young child than to how they would normally probe a piece of software.
By combining this experimental psychology with new neural network designs or error-correction techniques, it may be possible to make them even more reliable. Yet this research illustrates just how much we still don’t understand about the algorithms we’re creating and using: how they tick, how they make decisions, and how they’re different from us. As they play an ever-greater role in society, understanding the psychology of neural networks will be crucial if we want to use them wisely and effectively—and not end up missing the woods for the trees.
Image Credit: Irvan Pratama / Shutterstock.com