Tag Archives: events
#435474 Watch China’s New Hybrid AI Chip Power ...
When I lived in Beijing back in the 90s, a man walking his bike was nothing to look at. But today, I did a serious double-take at a video of a bike walking his man.
No kidding.
The bike itself looks overloaded but otherwise completely normal. Underneath its simplicity, however, is a hybrid computer chip that combines brain-inspired circuits with machine learning processes into a computing behemoth. Thanks to its smart chip, the bike self-balances as it gingerly rolls down a paved track before smoothly gaining speed into a jogging pace while navigating dexterously around obstacles. It can even respond to simple voice commands such as “speed up,” “left,” or “straight.”
Far from a circus trick, the bike is a real-world demo of the AI community’s latest attempt at fashioning specialized hardware to keep up with the challenges of machine learning algorithms. The Tianjic (天机*) chip isn’t just your standard neuromorphic chip. Rather, it has the architecture of a brain-like chip, but can also run deep learning algorithms—a match made in heaven that basically mashes together neuro-inspired hardware and software.
The study shows that China is rapidly nipping at the heels of Google, Facebook, NVIDIA, and other tech behemoths investing in developing new AI chip designs—hell, with billions in government investment it may have already had a head start. A sweeping AI plan from 2017 looks to catch up with the US on AI technology and application by 2020. By 2030, China’s aiming to be the global leader—and a champion for building general AI that matches humans in intellectual competence.
The country’s ambition is reflected in the team’s parting words.
“Our study is expected to stimulate AGI [artificial general intelligence] development by paving the way to more generalized hardware platforms,” said the authors, led by Dr. Luping Shi at Tsinghua University.
A Hardware Conundrum
Shi’s autonomous bike isn’t the first robotic two-wheeler. Back in 2015, the famed research nonprofit SRI International in Menlo Park, California teamed up with Yamaha to engineer MOTOBOT, a humanoid robot capable of driving a motorcycle. Powered by state-of-the-art robotic hardware and machine learning, MOTOBOT eventually raced MotoGP world champion Valentino Rossi in a nail-biting match-off.
However, the technological core of MOTOBOT and Shi’s bike vastly differ, and that difference reflects two pathways towards more powerful AI. One, exemplified by MOTOBOT, is software—developing brain-like algorithms with increasingly efficient architecture, efficacy, and speed. That sounds great, but deep neural nets demand so many computational resources that general-purpose chips can’t keep up.
As Shi told China Science Daily: “CPUs and other chips are driven by miniaturization technologies based on physics. Transistors might shrink to nanoscale-level in 10, 20 years. But what then?” As more transistors are squeezed onto these chips, efficient cooling becomes a limiting factor in computational speed. Tax them too much, and they melt.
For AI processes to continue, we need better hardware. An increasingly popular idea is to build neuromorphic chips, which resemble the brain from the ground up. IBM’s TrueNorth, for example, contains a massively parallel architecture nothing like the traditional Von Neumann structure of classic CPUs and GPUs. Similar to biological brains, TrueNorth’s memory is stored within “synapses” between physical “neurons” etched onto the chip, which dramatically cuts down on energy consumption.
But even these chips are limited. Because computation is tethered to hardware architecture, most chips resemble just one specific type of brain-inspired network called spiking neural networks (SNNs). Without doubt, neuromorphic chips are highly efficient setups with dynamics similar to biological networks. But they don’t play nicely with deep learning and other software-based AI.
Brain-AI Hybrid Core
Shi’s new Tianjic chip brings these two incompatible approaches together on a single piece of brainy hardware.
The first task was to bridge the divide between deep learning and SNNs. The two have very different computation philosophies and memory organizations, the team said. The biggest difference, however, is that artificial neural networks transform multidimensional data—image pixels, for example—into a single, continuous stream of multi-bit 0s and 1s. In contrast, neurons in SNNs activate using something called “binary spikes” that code for specific activation events in time.
Confused? Yeah, it’s hard to wrap my head around it too. That’s because SNNs act very similarly to the brain’s neural networks and nothing like computers. A particular neuron needs to generate an electrical signal (a “spike”) large enough to transfer down to the next one; little blips in signal don’t count. The way they transmit data also heavily depends on how they’re connected, or the network topology. The takeaway: SNNs work very differently from deep learning.
Shi’s team first recreated this firing quirk in the language of computers—0s and 1s—so that the coding mechanism would become compatible with deep learning algorithms. They then carefully aligned the step-by-step building blocks of the two models, which allowed them to tease out similarities into a common ground to further build on. “On the basis of this unified abstraction, we built a cross-paradigm neuron scheme,” they said.
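The paper’s exact coding scheme isn’t reproduced here, but the core idea the team describes, expressing spiking activity in the 0s and 1s that deep learning already speaks, can be sketched with simple rate coding: a continuous activation becomes the firing probability of a binary spike train, and averaging the train recovers the value. Everything below (function names, the 0-to-1 activation range, the window length) is illustrative, not Tianjic’s actual mechanism.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def rate_encode(activations, n_steps=100):
    """Convert continuous activations in [0, 1] into binary spike trains.

    Each neuron emits a 0/1 spike at every time step with probability
    equal to its activation, so the average firing rate over the window
    approximates the original continuous value.
    """
    activations = np.clip(activations, 0.0, 1.0)
    # Shape: (n_steps, n_neurons); each entry is one binary spike event.
    return (rng.random((n_steps, len(activations))) < activations).astype(np.uint8)

def rate_decode(spikes):
    """Recover an approximate continuous value as the mean firing rate."""
    return spikes.mean(axis=0)

acts = np.array([0.1, 0.5, 0.9])      # e.g. normalized pixel intensities
spikes = rate_encode(acts, n_steps=1000)
print(rate_decode(spikes))            # ≈ [0.1, 0.5, 0.9]
```

The point of the exercise: once spikes are just 0s and 1s in discrete time steps, the same arithmetic hardware can, in principle, serve both paradigms.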
In general, the design allowed both computational approaches to share the synapses, where neurons connect and store data, and the dendrites, the branched extensions of the neurons. In contrast, the neuron body, where signals integrate, was left reconfigurable for each type of computation, as were the input branches. Each building block was combined into a single unified functional core (FCore), which acts like a deep learning/SNN converter depending on its specific setup. Translation: the chip can do both types of previously incompatible computation.
The Chip
Using nanoscale fabrication, the team arranged 156 FCores, containing roughly 40,000 neurons and 10 million synapses, onto a chip less than a fifth of an inch in length and width. Initial tests showcased the chip’s versatility, in that it can run both SNNs and deep learning algorithms such as the popular convolutional neural networks (CNNs) often used in machine vision.
Compared to IBM TrueNorth, the density of Tianjic’s cores increased by 20 percent, speeding up performance ten times and increasing bandwidth at least 100-fold, the team said. When pitted against GPUs, the current hardware darling of machine learning, the chip increased processing throughput up to 100 times, while using just a sliver (1/10,000) of energy.
Although these stats are great, real-life performance is even better as a demo. Here’s where the authors gave their Tianjic brain a body. The team combined one chip with multiple specialized networks to process vision, balance, voice commands, and decision-making in real time. Object detection and target tracking, for example, relied on a deep neural net CNN, whereas voice commands and balance data were recognized using an SNN. The inputs were then integrated inside a neural state machine, which churned out decisions to downstream output modules—for example, controlling the handlebar to turn left.
Thanks to the chip’s brain-like architecture and bilingual ability, Tianjic “allowed all of the neural network models to operate in parallel and realized seamless communication across the models,” the team said. The result is an autonomous bike that rolls after its human, balances across speed bumps, avoids crashing into roadblocks, and answers to voice commands.
General AI?
“It’s a wonderful demonstration and quite impressive,” said the editorial team at Nature, which published the study on its cover last week.
However, they cautioned, when Tianjic is pitted toe-to-toe against a state-of-the-art chip designed for a single problem, Tianjic falls behind on that particular problem. But building these jack-of-all-trades hybrid chips is definitely worth the effort. Compared to today’s limited AI, what people really want is artificial general intelligence, which will require new architectures that aren’t designed to solve one particular problem.
Until people start to explore, innovate, and play around with different designs, it’s not clear how we can further progress in the pursuit of general AI. A self-driving bike might not be much to look at, but its hybrid brain is a pretty neat place to start.
*The name, in Chinese, means “heavenly machine,” “unknowable mystery of nature,” or “confidentiality.” Go figure.
Image Credit: Alexander Ryabintsev / Shutterstock.com
#434857 It’s 2019 – where’s my ...
I loved the “Thundercats” cartoon as a child, watching cat-like humanoids fighting the forces of evil. Whenever their leader was in trouble, he’d unleash the Sword of Omens to gain “sight beyond sight,” the ability to see events happening at faraway places, or bellow “Thunder, Thunder, Thunder, Thundercats, Hooo!” to instantaneously summon his allies to his location to join the fight. What kid didn’t want those superpowers?
#433907 How the Spatial Web Will Fix What’s ...
Converging exponential technologies will transform media, advertising and the retail world. The world we see, through our digitally-enhanced eyes, will multiply and explode with intelligence, personalization, and brilliance.
This is the age of Web 3.0.
Last week, I discussed the what and how of Web 3.0 (also known as the Spatial Web), walking through its architecture and the converging technologies that enable it.
To recap, while Web 1.0 consisted of static documents and read-only data, Web 2.0 introduced multimedia content, interactive web applications, and participatory social media, all of these mediated by two-dimensional screens—a flat web of sensorily confined information.
During the next two to five years, the convergence of 5G, AI, a trillion sensors, and VR/AR will enable us to both map our physical world into virtual space and superimpose a digital layer onto our physical environments.
Web 3.0 is about to transform everything—from the way we learn and educate, to the way we trade (smart) assets, to our interactions with real and virtual versions of each other.
And while users grow rightly concerned about data privacy and misuse, the Spatial Web’s use of blockchain in its data and governance layer will secure and validate our online identities, protecting everything from your virtual assets to personal files.
In this second installment of the Web 3.0 series, I’ll be discussing the Spatial Web’s vast implications for a handful of industries:
News & Media Coverage
Smart Advertising
Personalized Retail
Let’s dive in.
Transforming Network News with Web 3.0
News media is big business. In 2016, global news media (including print) generated 168 billion USD in circulation and advertising revenue.
The news we listen to impacts our mindset. Listen to dystopian news on violence, disaster, and evil, and you’ll be more likely to search for a cave to hide in than for the technology to launch your next business.
Today, different news media present starkly different realities of everything from foreign conflict to domestic policy. And outcomes are consequential. What reporters and news corporations decide to show or omit of a given news story plays a tremendous role in shaping the beliefs and resulting values of entire populations and constituencies.
But what if we could have an objective benchmark for today’s news, whereby crowdsourced and sensor-collected evidence allows you to tour the site of journalistic coverage, determining for yourself the most salient aspects of a story?
Enter mesh networks, AI, public ledgers, and virtual reality.
While traditional networks rely on a limited set of wired access points (or wireless hotspots), a wireless mesh network can connect entire cities via hundreds of dispersed nodes that communicate with each other and share a network connection non-hierarchically.
In short, this means that individual mobile users can together establish a local mesh network using nothing but the computing power in their own devices.
Take this a step further, and a local population of strangers could collectively broadcast countless 360-degree feeds across a local mesh network.
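The relay behavior described above, with no base station and no hierarchy, is essentially message flooding: each device forwards anything new to its immediate neighbors until the whole mesh has seen it. Here’s a toy simulation of that reachability logic; the device names and topology are invented for illustration.

```python
from collections import deque

def flood(adjacency, origin):
    """Return the set of nodes a broadcast reaches via peer-to-peer relay.

    Each node forwards a message it hasn't seen before to all of its
    neighbors; no central router or access point is involved.
    """
    seen = {origin}
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency.get(node, []):
            if neighbor not in seen:  # relay each message only once
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# Hypothetical five-phone mesh: each device links only to peers in radio range.
mesh = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
print(flood(mesh, "A"))  # the feed reaches all five devices
```

Note that "E" is two hops from "A", yet the feed still arrives; that multi-hop property is what lets a crowd of strangers cover a whole protest site.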
Imagine a scenario in which protests break out across the country, each cluster of activists broadcasting an aggregate of 360-degree videos, all fed through photogrammetry AIs that build out a live hologram of the march in real time. Want to see and hear what the NYC-based crowds are advocating for? Throw on some VR goggles and explore the event with full access. Or cue into the southern Texan border to assess for yourself the handling of immigrant entry and border conflicts.
Take a front seat in the Capitol during tomorrow’s Senate hearing, assessing each Senator’s reactions, questions and arguments without a Fox News or CNN filter. Or if you’re short on time, switch on the holographic press conference and host 3D avatars of live-broadcasting politicians in your living room.
We often think of modern media as taking away consumer agency, feeding tailored and often partisan ideology to a complacent audience. But as wireless mesh networks and agnostic sensor data allow for immersive VR-accessible news sites, the average viewer will necessarily become an active participant in her own education of current events.
And with each of us interpreting the news according to our own values, I envision a much less polarized world. A world in which civic engagement, moderately reasoned dialogue, and shared assumptions will allow us to empathize and make compromises.
The future promises an era in which news is verified and balanced; wherein public ledgers, AI, and new web interfaces bring you into the action and respect your intelligence—not manipulate your ignorance.
Web 3.0 Reinventing Advertising
Bringing about the rise of ‘user-owned data’ and self-established permissions, Web 3.0 is poised to completely disrupt digital advertising—a global industry worth over 192 billion USD.
Currently, targeted advertising leverages troves of personal data and online consumer behavior to subtly engage you with products you might not want, or sell you on falsely advertised services promising inaccurate results.
With a new Web 3.0 data and governance layer, however, distributed ledger technologies will require advertisers to engage in more direct interaction with consumers, validating claims and upping transparency.
And with a data layer that allows users to own and authorize third-party use of their data, blockchain also holds extraordinary promise to slash not only data breaches and identity theft, but covert advertiser bombardment without your authorization.
Accessing crowdsourced reviews and AI-driven fact-checking, users will be able to validate advertising claims more efficiently and accurately than ever before, potentially rating and filtering out advertisers in the process. And in such a streamlined system of verified claims, sellers will face increased pressure to compete more on product and rely less on marketing.
But perhaps most exciting is the convergence of artificial intelligence and augmented reality.
As Spatial Web networks begin to associate digital information with physical objects and locations, products will begin to “sell themselves.” Each with built-in smart properties, products will become hyper-personalized, communicating information directly to users through Web 3.0 interfaces.
Imagine stepping into a department store in pursuit of a new web-connected fridge. As soon as you enter, your AR goggles register your location and immediately grant you access to a populated register of store products.
As you move closer to a kitchen set that catches your eye, a virtual salesperson—whether by holographic video or avatar—pops into your field of view next to the fridge you’ve been examining and begins introducing you to its various functions and features. You quickly decide you’d rather disable the avatar and get textual input instead, and preferences are reset to list appliance properties visually.
After a virtual tour of several other fridges, you decide on the one you want and seamlessly execute a smart contract, carried out by your smart wallet and the fridge. The transaction takes place in seconds, and the fridge’s blockchain-recorded ownership record has been updated.
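The fridge purchase above is speculative, but the bookkeeping behind it, an append-only, hash-linked record of ownership transfers, can be sketched in a few lines. This toy ledger is an illustration of the idea only; the class, field names, and item IDs are made up and do not correspond to any real blockchain’s API.

```python
import hashlib
import json

class OwnershipLedger:
    """Toy append-only ledger recording ownership transfers.

    Each entry stores the hash of the previous entry, so any tampering
    with history breaks the chain and is detectable.
    """

    def __init__(self):
        self.chain = []

    def record_transfer(self, item, seller, buyer, price):
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        entry = {"item": item, "from": seller, "to": buyer,
                 "price": price, "prev": prev_hash}
        # Hash the entry's contents (before the hash field itself exists).
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.chain.append(entry)

    def current_owner(self, item):
        """The most recent recipient of the item, or None if unrecorded."""
        for entry in reversed(self.chain):
            if entry["item"] == item:
                return entry["to"]
        return None

ledger = OwnershipLedger()
ledger.record_transfer("fridge-SN123", "store", "you", 1200)
print(ledger.current_owner("fridge-SN123"))  # "you"
```

A real smart contract would also enforce conditions (payment cleared, seller actually owns the item) before appending; the hash chaining shown here is just the "blockchain-recorded ownership" part of the story.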
Or say you’ve just moved into the neighborhood and head over to a friend’s home for dinner. While catching up in the kitchen, your eyes fixate on the cabinets, which quickly populate your AR glasses with a price-point and selection of colors.
But what if you’d rather not get auto-populated product info in the first place? No problem!
Now empowered with self-sovereign identities, users might be able to turn off advertising preferences entirely, turning on smart recommendations only when they want to buy a given product or need new supplies.
And with user-centric data, consumers might even sell such information to advertisers directly. Now, instead of Facebook or Google profiting off your data, you might earn a passive income by giving advertisers permission to personalize and market their services. Buy more, and your personal data marketplace grows in value. Buy less, and your lower-valued advertising profile draws less advertiser spending.
With user-controlled data, advertisers now work on your terms, putting increased pressure on product iteration and personalizing products for each user.
This brings us to the transformative future of retail.
Personalized Retail–Power of the Spatial Web
In a future of smart and hyper-personalized products, I might walk through a virtual game space or a digitally reconstructed Target, browsing specific categories of clothing I’ve predetermined prior to entry.
As I pick out my selection, my AI assistant hones its algorithm reflecting new fashion preferences, and personal shoppers—also visiting the store in VR—help me pair different pieces as I go.
Once my personal shopper has finished constructing various outfits, I then sit back and watch a fashion show of countless Peter avatars with style and color variations of my selection, each customizable.
After I’ve made my selection, I might choose to purchase physical versions of three outfits and virtual versions of two others for my digital avatar. Payments are made automatically as I leave the store, including a smart wallet transaction made with the personal shopper at a per-outfit rate (for only the pieces I buy).
Already, several big players have broken into the VR market. Just this year, Walmart announced its foray into the VR space, shipping 17,000 Oculus Go VR headsets to Walmart locations across the US.
And just this past January, Walmart filed two VR shopping-related patents. In a new bid to disrupt a rapidly changing retail market, Walmart now describes a system in which users couple their VR headset with haptic gloves for an immersive in-store experience, whether at 3am in your living room or during a lunch break at the office.
But Walmart is not alone. Big e-commerce players from Amazon to Alibaba are leaping onto the scene with new software buildout to ride the impending headset revolution.
Beyond virtual reality, players like IKEA have even begun using mobile-based augmented reality to map digitally replicated furniture in your physical living room, true to dimension. And this is just the beginning….
As AR headset hardware undergoes breakneck advancements in the next two to five years, we might soon be able to project watches onto our wrists, swapping out colors, styles, brand, and price points.
Or let’s say I need a new coffee table in my office. Pulling up multiple models in AR, I can position each option using advanced hand-tracking technology and customize height and width according to my needs. Once the smart payment is triggered, the manufacturer prints my newly-customized piece, droning it to my doorstep. As soon as I need to assemble the pieces, overlaid digital prompts walk me through each step, and any user confusion is communicated to a company database.
Perhaps one of the ripest industries for Spatial Web disruption, retail presents one of the greatest opportunities for profit across virtual apparel, digital malls, AI fashion startups and beyond.
In our next series iteration, I’ll be looking at the tremendous opportunities created by Web 3.0 for the Future of Work and Entertainment.
Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.
Image Credit: nmedia / Shutterstock.com
#433852 How Do We Teach Autonomous Cars To Drive ...
Autonomous vehicles can follow the general rules of American roads, recognizing traffic signals and lane markings, noticing crosswalks and other regular features of the streets. But they work only on well-marked roads that are carefully scanned and mapped in advance.
Many paved roads, though, have faded paint, signs obscured behind trees and unusual intersections. In addition, 1.4 million miles of U.S. roads—one-third of the country’s public roadways—are unpaved, with no on-road signals like lane markings or stop-here lines. That doesn’t include miles of private roads, unpaved driveways or off-road trails.
What’s a rule-following autonomous car to do when the rules are unclear or nonexistent? And what are its passengers to do when they discover their vehicle can’t get them where they’re going?
Accounting for the Obscure
Most challenges in developing advanced technologies involve handling infrequent or uncommon situations, or events that require performance beyond a system’s normal capabilities. That’s definitely true for autonomous vehicles. Some on-road examples might be navigating construction zones, encountering a horse and buggy, or seeing graffiti that looks like a stop sign. Off-road, the possibilities include the full variety of the natural world, such as trees down over the road, flooding and large puddles—or even animals blocking the way.
At Mississippi State University’s Center for Advanced Vehicular Systems, we have taken up the challenge of training algorithms to respond to circumstances that almost never happen, are difficult to predict and are complex to create. We seek to put autonomous cars in the hardest possible scenario: driving in an area the car has no prior knowledge of, with no reliable infrastructure like road paint and traffic signs, and in an unknown environment where it’s just as likely to see a cactus as a polar bear.
Our work combines virtual technology and the real world. We create advanced simulations of lifelike outdoor scenes, which we use to train artificial intelligence algorithms to take a camera feed and classify what it sees, labeling trees, sky, open paths and potential obstacles. Then we transfer those algorithms to a purpose-built all-wheel-drive test vehicle and send it out on our dedicated off-road test track, where we can see how our algorithms work and collect more data to feed into our simulations.
Starting Virtual
We have developed a simulator that can create a wide range of realistic outdoor scenes for vehicles to navigate through. The system generates a range of landscapes of different climates, like forests and deserts, and can show how plants, shrubs and trees grow over time. It can also simulate weather changes, sunlight and moonlight, and the accurate locations of 9,000 stars.
The system also simulates the readings of sensors commonly used in autonomous vehicles, such as lidar and cameras. Those virtual sensors collect data that feeds into neural networks as valuable training data.
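The MSU simulator itself isn’t shown in this article, but the principle of generating sensor readings for training can be illustrated with a toy 2D lidar model: cast a beam at each angle, find the nearest obstacle it hits, and add sensor noise. Every name, parameter, and scene detail below is invented for the sketch and is not the team’s actual code.

```python
import math
import random

random.seed(42)

def simulate_lidar_scan(obstacles, n_beams=360, max_range=50.0, noise_sd=0.02):
    """Simulate one 2D lidar sweep taken from the origin.

    obstacles: list of (x, y, radius) circles standing in for trees/rocks.
    Returns one range reading per beam angle; beams that hit nothing
    report max_range. Gaussian noise mimics real sensor jitter.
    """
    ranges = []
    for i in range(n_beams):
        theta = 2 * math.pi * i / n_beams
        dx, dy = math.cos(theta), math.sin(theta)
        best = max_range
        for (ox, oy, r) in obstacles:
            # Ray-circle intersection: solve |t*d - o|^2 = r^2 for t >= 0.
            b = dx * ox + dy * oy
            c = ox * ox + oy * oy - r * r
            disc = b * b - c
            if disc >= 0:
                t = b - math.sqrt(disc)  # nearest intersection along the ray
                if 0 <= t < best:
                    best = t
        ranges.append(min(max(best + random.gauss(0, noise_sd), 0.0), max_range))
    return ranges

# A tiny "scene" with two tree trunks; each scan could be paired with
# per-beam labels (obstacle vs. free) as supervised training data.
scan = simulate_lidar_scan([(10.0, 0.0, 1.0), (0.0, 5.0, 0.5)])
```

Because the simulator knows exactly where every obstacle is, the labels come for free, which is precisely why synthetic scenes make such cheap, abundant training data for the neural networks.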
Simulated desert, meadow and forest environments generated by the Mississippi State University Autonomous Vehicle Simulator. Chris Goodin, Mississippi State University, Author provided.
Building a Test Track
Simulations are only as good as their portrayals of the real world. Mississippi State University has purchased 50 acres of land on which we are developing a test track for off-road autonomous vehicles. The property is excellent for off-road testing, with unusually steep grades for our area of Mississippi—up to 60 percent inclines—and a very diverse population of plants.
We have selected certain natural features of this land that we expect will be particularly challenging for self-driving vehicles, and replicated them exactly in our simulator. That allows us to directly compare results from the simulation and real-life attempts to navigate the actual land. Eventually, we’ll create similar real and virtual pairings of other types of landscapes to improve our vehicle’s capabilities.
A road washout, as seen in real life, left, and in simulation. Chris Goodin, Mississippi State University, Author provided.
Collecting More Data
We have also built a test vehicle, called the Halo Project, which has an electric motor and sensors and computers that can navigate various off-road environments. The Halo Project car has additional sensors to collect detailed data about its actual surroundings, which can help us build virtual environments to run new tests in.
The Halo Project car can collect data about driving and navigating in rugged terrain. Beth Newman Wynn, Mississippi State University, Author provided.
Two of its lidar sensors, for example, are mounted at intersecting angles on the front of the car so their beams sweep across the approaching ground. Together, they can provide information on how rough or smooth the surface is, as well as capturing readings from grass and other plants and items on the ground.
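The article doesn’t say how the team computes roughness from those sweeping beams, but one plausible, minimal approach is to fit a straight line to the ground-height profile (removing the overall grade) and score the spread of what’s left. The function, sample data, and units below are illustrative assumptions, not the Halo Project’s actual algorithm.

```python
import statistics

def roughness(heights):
    """Score terrain roughness as height deviation from a fitted grade line.

    heights: ground elevations (meters) sampled along one lidar sweep line.
    A smooth but sloped surface scores near zero; bumps and ruts score high.
    """
    n = len(heights)
    xs = range(n)
    mean_x = (n - 1) / 2
    mean_h = sum(heights) / n
    # Least-squares slope of the profile (subtracts the overall incline).
    num = sum((x - mean_x) * (h - mean_h) for x, h in zip(xs, heights))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    residuals = [h - (mean_h + slope * (x - mean_x)) for x, h in zip(xs, heights)]
    return statistics.pstdev(residuals)

smooth_slope = [0.0, 0.1, 0.2, 0.3, 0.4]  # steady incline, smooth surface
rutted = [0.0, 0.3, -0.2, 0.4, -0.1]      # same span, uneven surface
print(roughness(smooth_slope))  # ≈ 0
print(roughness(rutted))        # noticeably larger
```

Detrending first matters on this terrain: a 60 percent grade is steep but drivable, while the same height variation packed into ruts and rocks is not, and a raw height variance would conflate the two.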
Lidar beams intersect, scanning the ground in front of the vehicle. Chris Goodin, Mississippi State University, Author provided
We’ve seen some exciting early results from our research. For example, we have preliminary evidence that machine learning algorithms trained in simulated environments can be useful in the real world. As with most autonomous vehicle research, there is still a long way to go, but our hope is that the technologies we’re developing for extreme cases will also help make autonomous vehicles more functional on today’s roads.
Matthew Doude, Associate Director, Center for Advanced Vehicular Systems; Ph.D. Student in Industrial and Systems Engineering, Mississippi State University; Christopher Goodin, Assistant Research Professor, Center for Advanced Vehicular Systems, Mississippi State University, and Daniel Carruth, Assistant Research Professor and Associate Director for Human Factors and Advanced Vehicle System, Center for Advanced Vehicular Systems, Mississippi State University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Photo provided for The Conversation by Matthew Goudin / CC BY ND