Tag Archives: popular

#435824 A Q&A with Cruise’s head of AI, ...

In 2016, Cruise, an autonomous vehicle startup acquired by General Motors, had about 50 employees. At the beginning of 2019, the headcount at its San Francisco headquarters—mostly software engineers, mostly working on projects connected to machine learning and artificial intelligence—hit around 1,000. Now that number is up to 1,500, and by the end of this year it’s expected to reach about 2,000, sprawling into a recently purchased building that had housed Dropbox. And that’s not counting the 200 or so tech workers that Cruise is aiming to install in a Seattle, Wash., satellite development center and a handful of others in Phoenix, Ariz., and Pasadena, Calif.

Cruise’s recent hires aren’t all engineers—it takes more than engineering talent to manage operations. And there are hundreds of so-called safety drivers who are required to sit in the 180 or so autonomous test vehicles whenever they roam the San Francisco streets. But that’s still a lot of AI experts to be hiring at a time of AI engineer shortages.

Hussein Mehanna, head of AI/ML at Cruise, says the company’s hiring efforts are on track, thanks to the pull the autonomous vehicle challenge exerts on AI experts in other fields. Mehanna himself joined Cruise in May from Google, where he was director of engineering at Google Cloud AI. He had been there about a year and a half, a relatively quick career stop after a short stint at Snap, which followed four years working in machine learning at Facebook.

Mehanna has been immersed in AI and machine learning research since his graduate studies in speech recognition and natural language processing at the University of Cambridge. I sat down with Mehanna to talk about his career, the challenges of recruiting AI experts and autonomous vehicle development in general—and some of the challenges specific to San Francisco. We were joined by Michael Thomas, Cruise’s manager of AI/ML recruiting, who had also spent time recruiting AI engineers at Google and then Facebook.

IEEE Spectrum: When you were at Cambridge, did you think AI was going to take off like a rocket?

Mehanna: Did I imagine that AI was going to be as dominant and prevailing and sometimes hyped as it is now? No. I do recall in 2003 that my supervisor and I were wondering if neural networks could help at all in speech recognition. I remember my supervisor saying if anyone could figure out how to use a neural net for speech he would give them a grant immediately. So he was on the right path. Now neural networks have dominated vision, speech, and language [processing]. But that boom started in 2012.

“In the early days, Facebook wasn’t that open to PhDs, it actually had a negative sentiment about researchers, and then Facebook shifted”

I didn’t [expect it], but I certainly aimed for it when [I was at] Microsoft, where I deliberately pushed my career towards machine learning instead of big data, which was more popular at the time. And [I aimed for it] when I joined Facebook.

In the early days, Facebook wasn’t that open to PhDs, or researchers. It actually had a negative sentiment about researchers. And then Facebook shifted to becoming one of the key places where PhD students wanted to do internships or join after they graduated. It was a mindset shift: they were [once] at a point where they thought research wasn’t what was needed for success, but now it’s different.

There was definitely an element of risk [in taking a machine learning career path], but I was very lucky, things developed very fast.

IEEE Spectrum: Is it getting harder or easier to find AI engineers to hire, given the reported shortages?

Mehanna: There is a mismatch [between job openings and qualified engineers], though it is hard to quantify it with numbers. There is good news as well: I see a lot more students diving deep into machine learning and data in their [undergraduate] computer science studies, so it’s not as bleak as it seems. But there is massive demand in the market.

Here at Cruise, demand for AI talent is just growing and growing. It might be saturating or slowing down at other kinds of companies, though, [which] are leveraging more traditional applications—ad prediction, recommendations—that have been out there in the market for a while. These are more mature, better understood problems.

I believe autonomous vehicle technology is the most difficult AI problem out there. The magnitude of the challenge is 1,000 times that of other problems. These problems aren’t as well understood yet, and they require far deeper technology. And the quality at which they are expected to operate is through the roof.

The autonomous vehicle problem is the engineering challenge of our generation. There’s a lot of code to write, and if we think we are going to hire armies of people to write it line by line, it’s not going to work. Machine learning can accelerate the process of generating the code, but that doesn’t mean we aren’t going to have engineers; we actually need a lot more engineers.

Sometimes people worry that AI is taking jobs. It is taking some developer jobs, but it is actually generating other developer jobs as well, protecting developers from the mundane and helping them build software faster and faster.

IEEE Spectrum: Are you concerned that the demand for AI in industry is drawing out the people in academia who are needed to educate future engineers, that is, the “eating the seed corn” problem?

Mehanna: There are some negative examples in the industry, but that’s not our style. We are looking for collaborations with professors, we want to cultivate a very deep and respectful relationship with universities.

And there’s another angle to this: Universities require a thriving industry for them to thrive. It is going to be extremely beneficial for academia to have this flourishing industry in AI, because it attracts more students to academia. I think we are doing them a fantastic favor by building these career opportunities. This is not the same as in my early days, [when] people told me “don’t go to AI; go to networking, work in the mobile industry; mobile is flourishing.”

IEEE Spectrum: Where are you looking as you try to find a thousand or so engineers to hire this year?

Thomas: We look for people who want to use machine learning to solve problems. They can be in many different industries—in the financial markets, in social media, in advertising. The autonomous vehicle industry is in its infancy. You can compare it to mobile in the early days: When the iPhone first came out, everyone was looking for developers with mobile experience, but you weren’t going to find them unless you went straight to Apple, [so you had to hire other kinds of engineers]. This is the same type of thing: it is so new that you aren’t going to find experts in this area, because we are all still learning.

“You don’t have to be an autonomous vehicle expert to flourish in this world. It’s not too late to move…now would be a great time for AI experts working on other problems to shift their attention to autonomous vehicles.”

Mehanna: Because autonomous vehicle technology is the new frontier for AI experts, [the number of] people with both AI and autonomous vehicle experience is quite limited. So we are acquiring AI experts wherever they are, and helping them grow into the autonomous vehicle area. You don’t have to be an autonomous vehicle expert to flourish in this world. It’s not too late to move; even though there is a lot of great tech developed, there’s even more innovation ahead, so now would be a great time for AI experts working on other problems or applications to shift their attention to autonomous vehicles.

It feels like the Internet in 1980. It’s about to happen, but there are endless applications [to be developed over] the next few decades. Even if we can get a car to drive safely, there is the question of how we can tune the ride comfort, and then applying it all to different cities, different vehicles, different driving situations, and who knows what other applications.

I can see how I can spend a lifetime career trying to solve this problem.

IEEE Spectrum: Why are you doing most of your development in San Francisco?

Mehanna: I think the best talent in the world is in Silicon Valley, and solving the autonomous vehicle problem is going to require the best of the best. It’s not just the engineering talent that is here, but [also] the entrepreneurial spirit. Solving the problem just as a technology is not going to be successful; you need to solve the product and the technology together. And the entrepreneurial spirit is one of the key reasons Cruise secured $7.5 billion in funding [besides GM, the company has a number of outside investors, including Honda, Softbank, and T. Rowe Price]. That [funding] is another reason Cruise is ahead of many others, because this problem requires deep resources.

“If you can do an autonomous vehicle in San Francisco you can do it almost anywhere.”

[And then there is the driving environment.] When I speak to my peers in the industry, they have a lot of respect for us, because the problems to solve in San Francisco technically are an order of magnitude harder. It is a tight environment, with a lot of pedestrians, and driving patterns that, let’s put it this way, are not necessarily the best in the nation. Which means we are seeing more problems ahead of our competitors, which gets us to better [software]. I think if you can do an autonomous vehicle in San Francisco you can do it almost anywhere.

A version of this post appears in the September 2019 print magazine as “AI Engineers: The Autonomous-Vehicle Industry Wants You.”

Posted in Human Robots

#435648 Surprisingly Speedy Soft Robot Survives ...

Soft robots are getting more and more popular for some very good reasons. Their relative simplicity is one. Their relatively low cost is another. And for their simplicity and low cost, they’re generally able to perform very impressively, leveraging the unique features inherent to their design and construction to move themselves and interact with their environment. The other significant reason why soft robots are so appealing is that they’re durable. Without the constraints of rigid parts, they can withstand the sort of abuse that would make any roboticist cringe.

In the current issue of Science Robotics, a group of researchers from Tsinghua University in China and University of California, Berkeley, present a new kind of soft robot that’s both higher-performing and much more robust than just about anything we’ve seen before. The deceptively simple robot looks like a bent strip of paper, but it’s able to move at 20 body lengths per second and survive being stomped on by a human wearing tennis shoes. Take that, cockroaches.

This prototype robot measures just 3 centimeters by 1.5 cm. It takes a scanning electron microscope to actually see what the robot is made of—a thermoplastic layer is sandwiched between palladium-gold electrodes, bonded with adhesive silicone to a structural plastic at the bottom. When an AC voltage (as low as 8 volts but typically about 60 volts) is run through the electrodes, the thermoplastic extends and contracts, causing the robot’s back to flex and the little “foot” to shuffle. A complete step cycle takes just 50 milliseconds, yielding a 20 hertz gait. And technically, the robot “runs,” since it does have a brief aerial phase.
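Those figures are easy to sanity-check. Here’s a quick back-of-the-envelope calculation in Python, using only the numbers quoted above:

```python
# Back-of-the-envelope check of the gait figures quoted above.
body_length_m = 0.03      # 3 cm body length
step_cycle_s = 0.050      # one complete step cycle takes 50 ms

gait_frequency_hz = 1 / step_cycle_s            # 20 steps per second
relative_speed_bl_s = 20                        # reported top speed, body lengths/s
absolute_speed_m_s = relative_speed_bl_s * body_length_m

print(f"gait frequency: {gait_frequency_hz:.0f} Hz")   # 20 Hz
print(f"top speed: {absolute_speed_m_s:.2f} m/s")      # 0.60 m/s
print(f"distance per step: {absolute_speed_m_s / gait_frequency_hz * 100:.0f} cm")  # ~3 cm
```

At top speed, that works out to roughly one body length covered per step—which squares with the robot having a brief aerial phase.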

Image: Science Robotics

Photos from a high-speed camera show the robot’s gait (A to D) as it contracts and expands its body.

To put the robot’s top speed of 20 body lengths per second in perspective, have a look at this nifty chart, which shows the relative running speeds of some animals and robots plotted against body mass:

Image: Science Robotics

This chart shows the relative running speeds of some mammals (purple area), arthropods (orange area), and soft robots (blue area) versus body mass. For both mammals and arthropods, relative speeds show a strong negative scaling law with respect to body mass: speeds increase as body masses decrease. For soft robots, however, the relationship has so far been the opposite: speeds decrease as body mass decreases. For the little soft robots created by the researchers from Tsinghua University and UC Berkeley (red stars), the scaling law is similar to that of living animals: higher speed was attained as the body mass decreased.

If you were wondering, like we were, just what that number 39 is on that chart (top left corner), it’s a species of tiny mite that was discovered underneath a rock in California in 1916. The mite is just under 1 mm in size, but it can run at 0.8 kilometer per hour, which is 322 body lengths per second, making it by far (like, by a factor of two at least) the fastest land animal on Earth relative to size. If a human were to run that fast relative to our size, we’d be traveling at a little bit over 2,000 kilometers per hour. It’s not a coincidence that pretty much everything in the upper left of the chart is an arthropod—speed scales favorably with decreasing mass, since actuators have a proportionally larger effect.
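The arithmetic checks out if the mite’s body length is taken as roughly 0.7 mm (consistent with “just under 1 mm”); here’s a quick sketch, with that body length as an assumption:

```python
# Rough check of the mite's relative speed (body length is an assumption).
mite_speed_m_s = 0.8 * 1000 / 3600   # 0.8 km/h ≈ 0.22 m/s
mite_body_length_m = 0.0007          # assumed ~0.7 mm, i.e. "just under 1 mm"
print(f"mite: {mite_speed_m_s / mite_body_length_m:.0f} body lengths/s")  # ≈ 317

# Scale 322 body lengths per second up to a 1.8 m human.
human_equiv_km_h = 322 * 1.8 * 3.6
print(f"human equivalent: {human_equiv_km_h:.0f} km/h")  # ≈ 2087
```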

Other notable robots on the chart with impressive speed-to-mass ratios are number 27, a magnetically driven quadruped robot from UMD, and number 86, UC Berkeley’s X2-VelociRoACH.

Anyway, back to this robot. Some other cool things about it:

You can step on it, squishing it flat with a load about 1 million times its own body weight, and it’ll keep on crawling, albeit only half as fast.
Even climbing a slope of 15 degrees, it can still manage to move at 1 body length per second.
It carries peanuts! With a payload of six times its own weight, it moves a sixth as fast, but still, it’s not like you need your peanuts delivered all that quickly anyway, do you?

Image: Science Robotics

The researchers also put together a prototype with two legs instead of one, which was able to demonstrate a potentially faster galloping gait by spending more time in the air. They suggest that robots like these could be used for “environmental exploration, structural inspection, information reconnaissance, and disaster relief,” which are the sorts of things that you suggest that your robot could be used for when you really have no idea what it could be used for. But this work is certainly impressive, with speed and robustness that are largely unmatched by other soft robots. An untethered version seems possible due to the relatively low voltages required to drive the robot, and if they can put some peanut-sized sensors on there as well, practical applications might actually be forthcoming sometime soon.

“Insect-scale Fast Moving and Ultrarobust Soft Robot,” by Yichuan Wu, Justin K. Yim, Jiaming Liang, Zhichun Shao, Mingjing Qi, Junwen Zhong, Zihao Luo, Xiaojun Yan, Min Zhang, Xiaohao Wang, Ronald S. Fearing, Robert J. Full, and Liwei Lin from Tsinghua University and UC Berkeley, is published in Science Robotics.

Posted in Human Robots

#435614 3 Easy Ways to Evaluate AI Claims

When every other tech startup claims to use artificial intelligence, it can be tough to figure out if an AI service or product works as advertised. In the midst of the AI “gold rush,” how can you separate the nuggets from the fool’s gold?

There’s no shortage of cautionary tales involving overhyped AI claims. And applying AI technologies to health care, education, and law enforcement means that getting it wrong can have real consequences for society—not just for investors who bet on the wrong unicorn.

So IEEE Spectrum asked experts to share their tips for how to identify AI hype in press releases, news articles, research papers, and IPO filings.

“It can be tricky, because I think the people who are out there selling the AI hype—selling this AI snake oil—are getting more sophisticated over time,” says Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative.

The term “AI” is perhaps most frequently used to describe machine learning algorithms (and deep learning algorithms, which require even less human guidance) that analyze huge amounts of data and make predictions based on patterns that humans might miss. These popular forms of AI are mostly suited to specialized tasks, such as automatically recognizing certain objects within photos. For that reason, they are sometimes described as “weak” or “narrow” AI.

Some researchers and thought leaders like to talk about the idea of “artificial general intelligence” or “strong AI” that has human-level capacity and flexibility to handle many diverse intellectual tasks. But for now, this type of AI remains firmly in the realm of science fiction and is far from being realized in the real world.

“AI has no well-defined meaning and many so-called AI companies are simply trying to take advantage of the buzz around that term,” says Arvind Narayanan, a computer scientist at Princeton University. “Companies have even been caught claiming to use AI when, in fact, the task is done by human workers.”

Here are three ways to recognize AI hype.

Look for Buzzwords
One red flag is what Hwang calls the “hype salad.” This means stringing together the term “AI” with many other tech buzzwords such as “blockchain” or “Internet of Things.” That doesn’t automatically disqualify the technology, but spotting a high volume of buzzwords in a post, pitch, or presentation should raise questions about what exactly the company or individual has developed.

Other experts agree that strings of buzzwords can be a red flag. That’s especially true if the buzzwords are never really explained in technical detail, and are simply tossed around as vague, poorly-defined terms, says Marzyeh Ghassemi, a computer scientist and biomedical engineer at the University of Toronto in Canada.

“I think that if it looks like a Google search—picture ‘interpretable blockchain AI deep learning medicine’—it's probably not high-quality work,” Ghassemi says.

Hwang also suggests mentally replacing all mentions of “AI” in an article with the term “magical fairy dust.” It’s a way of seeing whether an individual or organization is treating the technology like magic. If so—that’s another good reason to ask more questions about what exactly the AI technology involves.

And even the visual imagery used to illustrate AI claims can indicate that an individual or organization is overselling the technology.

“I think that a lot of the people who work on machine learning on a day-to-day basis are pretty humble about the technology, because they’re largely confronted with how frequently it just breaks and doesn't work,” Hwang says. “And so I think that if you see a company or someone representing AI as a Terminator head, or a big glowing HAL eye or something like that, I think it’s also worth asking some questions.”

Interrogate the Data

It can be hard to evaluate AI claims without any relevant expertise, says Ghassemi at the University of Toronto. Even experts need to know the technical details of the AI algorithm in question and have some access to the training data that shaped the AI model’s predictions. Still, savvy readers with some basic knowledge of applied statistics can search for red flags.

To start, readers can look for possible bias in training data based on small sample sizes or a skewed population that fails to reflect the broader population, Ghassemi says. After all, an AI model trained only on health data from white men would not necessarily achieve similar results for other populations of patients.
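A reader with a little applied statistics can run this kind of check themselves. Here’s a minimal sketch of what Ghassemi describes, using a made-up training set and a made-up target population:

```python
import pandas as pd

# Made-up training set: 900 records from men, 100 from women.
train = pd.DataFrame({"sex": ["M"] * 900 + ["F"] * 100})

shares = train["sex"].value_counts(normalize=True)
target = {"M": 0.5, "F": 0.5}  # assumed population the model will serve

for group, share in shares.items():
    if abs(share - target[group]) > 0.2:
        print(f"red flag: '{group}' is {share:.0%} of training data "
              f"but {target[group]:.0%} of the target population")
# red flag: 'M' is 90% of training data but 50% of the target population
# red flag: 'F' is 10% of training data but 50% of the target population
```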

“For me, a red flag is not demonstrating deep knowledge of how your labels are defined.”
—Marzyeh Ghassemi, University of Toronto

How well machine learning and deep learning models perform also depends on how well humans labeled the sample datasets used to train these programs. This task can be straightforward when labeling photos of cats versus dogs, but gets more complicated when assigning disease diagnoses to certain patient cases.

Medical experts frequently disagree with each other on diagnoses—which is why many patients seek a second opinion. Not surprisingly, this ambiguity can also affect the diagnostic labels that experts assign in training datasets. “For me, a red flag is not demonstrating deep knowledge of how your labels are defined,” Ghassemi says.

Such training data can also reflect the cultural stereotypes and biases of the humans who labeled the data, says Narayanan at Princeton University. Like Ghassemi, he recommends taking a hard look at exactly what the AI has learned: “A good way to start critically evaluating AI claims is by asking questions about the training data.”

Another red flag is presenting an AI system’s performance through a single accuracy figure without much explanation, Narayanan says. Claiming that an AI model achieves “99 percent” accuracy doesn’t mean much without knowing the baseline for comparison—such as whether other systems have already achieved 99 percent accuracy—or how well that accuracy holds up in situations beyond the training dataset.
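One way to see why the baseline matters: if the condition being detected is rare, a model that does nothing at all can post the same headline number. A sketch with a hypothetical prevalence:

```python
# With 1% prevalence, always predicting "negative" is 99% accurate.
n_cases = 100_000
n_positive = 1_000   # hypothetical 1% prevalence

baseline_accuracy = (n_cases - n_positive) / n_cases
print(f"always-negative baseline: {baseline_accuracy:.0%} accuracy")  # 99%
```

So “99 percent accurate” can describe a model that never detects a single case.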

Narayanan also emphasized the need to ask questions about an AI model’s false positive rate—the rate of making wrong predictions about the presence of a given condition. Even if the false positive rate of a hypothetical AI service is just one percent, that could have major consequences if that service ends up screening millions of people for cancer.
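The arithmetic behind that warning is simple but sobering (the screening numbers here are hypothetical):

```python
# A "mere" 1% false positive rate, applied at population scale.
people_screened = 10_000_000   # hypothetical cancer-screening program
false_positive_rate = 0.01

print(f"{people_screened * false_positive_rate:,.0f} healthy people "
      "wrongly flagged")   # 100,000
```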

Readers can also consider whether using AI in a given situation offers any meaningful improvement compared to traditional statistical methods, says Clayton Aldern, a data scientist and journalist who serves as managing director for Caldern LLC. He gave the hypothetical example of a “super-duper-fancy deep learning model” that achieves a prediction accuracy of 89 percent, compared to a “little polynomial regression model” that achieves 86 percent on the same dataset.

“We're talking about a three-percentage-point increase on something that you learned about in Algebra 1,” Aldern says. “So is it worth the hype?”
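Aldern’s comparison is easy to reproduce in spirit. Here’s a sketch on synthetic data—the dataset, features, and model choices are all illustrative—that scores the “Algebra 1” baseline any fancy model would have to beat:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Synthetic classification data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# The humble baseline: quadratic features plus logistic regression.
baseline = make_pipeline(PolynomialFeatures(degree=2),
                         LogisticRegression(max_iter=1000))
score = cross_val_score(baseline, X, y, cv=5).mean()
print(f"polynomial baseline accuracy: {score:.1%}")

# If the "super-duper-fancy" model beats this by only a few points,
# ask -- as Aldern does -- whether it's worth the hype.
```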

Don’t Ignore the Drawbacks

The hype surrounding AI isn’t just about the technical merits of services and products driven by machine learning. Overblown claims about the beneficial impacts of AI technology—or vague promises to address ethical issues related to deploying it—should also raise red flags.

“If a company promises to use its tech ethically, it is important to question if its business model aligns with that promise,” Narayanan says. “Even if employees have noble intentions, it is unrealistic to expect the company as a whole to resist financial imperatives.”

One example might be a company with a business model that depends on leveraging customers’ personal data. Such companies “tend to make empty promises when it comes to privacy,” Narayanan says. And, if companies hire workers to produce training data, it’s also worth asking whether the companies treat those workers ethically.

The transparency—or lack thereof—about any AI claim can also be telling. A company or research group can minimize concerns by publishing technical claims in peer-reviewed journals or allowing credible third parties to evaluate their AI without giving away big intellectual property secrets, Narayanan says. Excessive secrecy is a big red flag.

With these strategies, you don’t need to be a computer engineer or data scientist to start thinking critically about AI claims. And, Narayanan says, the world needs many people from different backgrounds for societies to fully consider the real-world implications of AI.

Editor’s Note: The original version of this story misspelled Clayton Aldern’s last name as Alderton.

Posted in Human Robots

#435605 All of the Winners in the DARPA ...

The first competitive event in the DARPA Subterranean Challenge concluded last week—hopefully you were able to follow along on the livestream, on Twitter, or with some of the articles that we’ve posted about the event. We’ll have plenty more to say about how things went for the SubT teams, but while they take a bit of a (well earned) rest, we can take a look at the winning teams as well as who won DARPA’s special superlative awards for the competition.

First Place: Team Explorer (25/40 artifacts found)
With their rugged, reliable robots featuring giant wheels and the ability to drop communications nodes, Team Explorer was in the lead from day 1, scoring in double digits on every single run.

Second Place: Team CoSTAR (11/40 artifacts found)
Team CoSTAR had one of the more diverse lineups of robots, and they switched up which robots they decided to send into the mine as they learned more about the course.

Third Place: Team CTU-CRAS (10/40 artifacts found)
While many teams came to SubT with DARPA funding, Team CTU-CRAS was self-funded, making them eligible for a special $200,000 Tunnel Circuit prize.

DARPA also awarded a bunch of “superlative awards” after SubT:

Most Accurate Artifact: Team Explorer

To score a point, teams had to submit the location of an artifact that was correct to within 5 meters of the artifact itself. However, DARPA was tracking the artifact locations with much higher precision—for example, the “zero” point on the backpack artifact was the center of the label on the front, which DARPA tracked to the millimeter. Team Explorer managed to return the location of a backpack with an error of just 0.18 meter, which is kind of amazing.
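In code, the scoring rule is nearly a one-liner (the coordinates below are invented for illustration):

```python
import math

# A report scores a point if the submitted location is within
# 5 meters of DARPA's ground-truth position for the artifact.
def scores_point(submitted, ground_truth, tolerance_m=5.0):
    error_m = math.dist(submitted, ground_truth)  # Euclidean distance
    return error_m <= tolerance_m, error_m

ok, err = scores_point((10.1, 4.0, 1.2), (10.0, 4.1, 1.1))
print(ok, f"error = {err:.2f} m")  # True, error = 0.17 m
```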

Down to the Wire: Team CSIRO Data61

With just an hour to find as many artifacts as possible, teams had to find the right balance between sending robots off to explore and bringing them back into communication range to download artifact locations. Team CSIRO Data61 cut it especially close, sliding their final point in with a mere 22 seconds to spare.

Most Distinctive Robots: Team Robotika

Team Robotika had some of the quirkiest and most recognizable robots, which DARPA recognized with the “Most Distinctive” award. Robotika told us that part of the reason for that distinctiveness was practical—having a robot that was effectively in two parts meant that they could disassemble it so that it would fit in the baggage compartment of an airplane, very important for a team based in the Czech Republic.

Most Robots Per Person: Team Coordinated Robotics

Kevin Knoedler, who won NASA’s Space Robotics Challenge entirely by himself, brought his own personal swarm of drones to SubT. With a ratio of seven robots to one human, Kevin was almost certainly the hardest working single human at the challenge.

Fan Favorite: Team NCTU

Photo: Evan Ackerman/IEEE Spectrum

The Fan Favorite award went to the team that was most popular on Twitter (with the #SubTChallenge hashtag), and it may or may not be the case that I personally tweeted enough about Team NCTU’s blimp to win them this award. It’s also true that whenever we asked anyone on other teams what their favorite robot was (besides their own, of course), the blimp was overwhelmingly popular. So either way, the award is well deserved.

DARPA shared this little behind-the-scenes clip of the blimp in action (sort of), showing what happened to the poor thing when the mine ventilation system was turned on between runs and DARPA staff had to chase it down and rescue it:

The thing to keep in mind about the results of the Tunnel Circuit is that unlike past DARPA robotics challenges (like the DRC), they don’t necessarily indicate how things are going to go for the Urban or Cave circuits because of how different things are going to be. Explorer did a great job with a team of rugged wheeled vehicles, which turned out to be ideal for navigating through mines, but they’re likely going to need to change things up substantially for the rest of the challenges, where the terrain will be much more complex.

DARPA hasn’t provided any details on the location of the Urban Circuit yet; all we know is that it’ll be sometime in February 2020. This gives teams just six months to take all the lessons that they learned from the Tunnel Circuit and update their hardware, software, and strategies. What were those lessons, and what do teams plan to do differently next year? Check back next week, and we’ll tell you.

[ DARPA SubT ]

Posted in Human Robots

#435474 Watch China’s New Hybrid AI Chip Power ...

When I lived in Beijing back in the 90s, a man walking his bike was nothing to look at. But today, I did a serious double-take at a video of a bike walking his man.

No kidding.

The bike itself looks overloaded but otherwise completely normal. Underneath its simplicity, however, is a hybrid computer chip that combines brain-inspired circuits with machine learning processes into a computing behemoth. Thanks to its smart chip, the bike self-balances as it gingerly rolls down a paved track before smoothly gaining speed into a jogging pace while navigating dexterously around obstacles. It can even respond to simple voice commands such as “speed up,” “left,” or “straight.”

Far from a circus trick, the bike is a real-world demo of the AI community’s latest attempt at fashioning specialized hardware to keep up with the challenges of machine learning algorithms. The Tianjic (天机*) chip isn’t just your standard neuromorphic chip. Rather, it has the architecture of a brain-like chip, but can also run deep learning algorithms—a match made in heaven that basically mashes together neuro-inspired hardware and software.

The study shows that China is readily nipping at the heels of Google, Facebook, NVIDIA, and other tech behemoths investing in developing new AI chip designs—hell, with billions in government investment it may have already had a head start. A sweeping AI plan from 2017 looks to catch up with the US on AI technology and application by 2020. By 2030, China’s aiming to be the global leader—and a champion for building general AI that matches humans in intellectual competence.

The country’s ambition is reflected in the team’s parting words.

“Our study is expected to stimulate AGI [artificial general intelligence] development by paving the way to more generalized hardware platforms,” said the authors, led by Dr. Luping Shi at Tsinghua University.

A Hardware Conundrum
Shi’s autonomous bike isn’t the first robotic two-wheeler. Back in 2015, the famed research nonprofit SRI International in Menlo Park, California, teamed up with Yamaha to engineer MOTOBOT, a humanoid robot capable of driving a motorcycle. Powered by state-of-the-art robotic hardware and machine learning, MOTOBOT eventually raced MotoGP world champion Valentino Rossi in a nail-biting match-off.

However, the technological core of MOTOBOT and Shi’s bike vastly differ, and that difference reflects two pathways towards more powerful AI. One, exemplified by MOTOBOT, is software—developing brain-like algorithms with increasingly efficient architecture, efficacy, and speed. That sounds great, but deep neural nets demand so many computational resources that general-purpose chips can’t keep up.

As Shi told China Science Daily: “CPUs and other chips are driven by miniaturization technologies based on physics. Transistors might shrink to nanoscale-level in 10, 20 years. But what then?” As more transistors are squeezed onto these chips, efficient cooling becomes a limiting factor in computational speed. Tax them too much, and they melt.

For AI processes to continue, we need better hardware. An increasingly popular idea is to build neuromorphic chips, which resemble the brain from the ground up. IBM’s TrueNorth, for example, contains a massively parallel architecture nothing like the traditional Von Neumann structure of classic CPUs and GPUs. Similar to biological brains, TrueNorth’s memory is stored within “synapses” between physical “neurons” etched onto the chip, which dramatically cuts down on energy consumption.

But even these chips are limited. Because computation is tethered to the hardware architecture, most such chips implement just one specific type of brain-inspired network, the spiking neural network (SNN). Neuromorphic chips are without doubt highly efficient setups, with dynamics similar to those of biological networks. But they don’t play nicely with deep learning and other software-based AI.

Brain-AI Hybrid Core
Shi’s new Tianjic chip brought the two incompatibilities together onto a single piece of brainy hardware.

The first step was to bridge the deep learning and SNN divide. The two have very different computation philosophies and memory organizations, the team said. The biggest difference, however, is that artificial neural networks transform multidimensional data—image pixels, for example—into a continuous, multi-bit stream of 0s and 1s. In contrast, neurons in SNNs activate using “binary spikes” that code for specific activation events in time.

Confused? Yeah, it’s hard to wrap my head around it too. That’s because SNNs act very similarly to our neural networks and nothing like computers. A particular neuron needs to generate an electrical signal (a “spike”) large enough to transfer down to the next one; little blips in signals don’t count. The way they transmit data also heavily depends on how they’re connected, or the network topology. The takeaway: SNNs work pretty differently than deep learning.
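To make the contrast concrete, here is a toy leaky integrate-and-fire neuron—an illustrative sketch, not the model from the paper—next to an ordinary continuous ANN activation:

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) neuron: it accumulates input over
# time and emits an all-or-nothing spike only when its membrane potential
# crosses a threshold. Parameters here are illustrative.
def lif_spikes(inputs, threshold=1.0, leak=0.9):
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x          # leaky integration of input
        if v >= threshold:
            spikes.append(1)      # spike: a discrete event in time
            v = 0.0               # reset after firing
        else:
            spikes.append(0)      # sub-threshold blips don't count
    return spikes

x = np.array([0.3, 0.4, 0.5, 0.1, 0.9, 0.2])
print("SNN output:", lif_spikes(x))      # binary spike train: [0, 0, 1, 0, 0, 1]
print("ANN output:", np.maximum(0, x))   # continuous values, one per input
```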

Shi’s team first recreated this firing quirk in the language of computers—0s and 1s—so that the coding mechanism would become compatible with deep learning algorithms. They then carefully aligned the step-by-step building blocks of the two models, which allowed them to tease out similarities into a common ground to further build on. “On the basis of this unified abstraction, we built a cross-paradigm neuron scheme,” they said.

In general, the design allowed both computational approaches to share the synapses, where neurons connect and store data, and the dendrites, the neurons’ branching extensions. In contrast, the neuron body, where signals integrate, was left reconfigurable for each type of computation, as were the input branches. Each building block was combined into a single unified functional core (FCore), which acts like a deep learning/SNN converter depending on its specific setup. Translation: the chip can do both types of previously incompatible computation.
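In software terms, the idea might look something like the following—a purely illustrative analogy, since the real FCore is a hardware circuit, not Python:

```python
import numpy as np

# Illustrative analogy of a cross-paradigm core: the synaptic weights
# and integration step are shared, while the "neuron body" is
# reconfigured between ANN mode (continuous) and SNN mode (spiking).
class CrossParadigmNeuron:
    def __init__(self, weights, mode="ann", threshold=1.0):
        self.w = np.asarray(weights, dtype=float)  # shared synapses
        self.mode = mode                           # reconfigurable body
        self.threshold = threshold
        self.v = 0.0                               # membrane potential (SNN)

    def step(self, x):
        drive = self.w @ np.asarray(x, dtype=float)  # shared integration
        if self.mode == "ann":
            return max(0.0, drive)                   # continuous ReLU output
        self.v += drive                              # SNN: integrate over time
        if self.v >= self.threshold:
            self.v = 0.0
            return 1                                 # binary spike
        return 0

core = CrossParadigmNeuron([0.5, -0.2, 0.8], mode="snn")
print([core.step([1, 0, 1]) for _ in range(3)])  # spike train: [1, 1, 1]
core.mode = "ann"
print(core.step([1, 0, 1]))                      # continuous output: 1.3
```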

The Chip
Using nanoscale fabrication, the team arranged 156 FCores, containing roughly 40,000 neurons and 10 million synapses, onto a chip less than a fifth of an inch in length and width. Initial tests showcased the chip’s versatility, in that it can run both SNNs and deep learning algorithms such as the popular convolutional neural network (CNNs) often used in machine vision.

Compared to IBM TrueNorth, the density of Tianjic’s cores increased by 20 percent, speeding up performance ten times and increasing bandwidth at least 100-fold, the team said. When pitted against GPUs, the current hardware darling of machine learning, the chip increased processing throughput up to 100 times, while using just a sliver (1/10,000) of energy.

Although these stats are great, real-life performance makes for an even better demo. Here’s where the authors gave their Tianjic brain a body. The team combined one chip with multiple specialized networks to process vision, balance, voice commands, and decision-making in real time. Object detection and target tracking, for example, relied on a deep neural net CNN, whereas voice commands and balance data were recognized using an SNN. The inputs were then integrated inside a neural state machine, which churned out decisions to downstream output modules—for example, controlling the handlebar to turn left.
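As a rough illustration of that dispatching pattern—the module names, inputs, and rules here are invented, not taken from the paper—the state machine’s job reduces to something like:

```python
# Invented sketch: parallel networks feed a simple state machine
# that picks one downstream motor command per control cycle.
def neural_state_machine(vision, voice, tilt_deg):
    if tilt_deg > 10:                # balance SNN reports instability
        return "correct_balance"
    if vision["obstacle_ahead"]:     # CNN object detector
        return "steer_around"
    if voice in ("left", "straight", "speed up"):  # voice-command SNN
        return {"left": "turn_left",
                "straight": "go_straight",
                "speed up": "accelerate"}[voice]
    return "follow_target"

print(neural_state_machine({"obstacle_ahead": False}, "left", 2.0))  # turn_left
```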

Thanks to the chip’s brain-like architecture and bilingual ability, Tianjic “allowed all of the neural network models to operate in parallel and realized seamless communication across the models,” the team said. The result is an autonomous bike that rolls after its human, balances across speed bumps, avoids crashing into roadblocks, and answers to voice commands.

General AI?
“It’s a wonderful demonstration and quite impressive,” said the editorial team at Nature, which published the study on its cover last week.

However, they cautioned, when Tianjic goes toe-to-toe with a state-of-the-art chip designed for a single problem, it falls behind on that particular problem. But building these jack-of-all-trades hybrid chips is definitely worth the effort. Compared to today’s limited AI, what people really want is artificial general intelligence, which will require new architectures that aren’t designed to solve one particular problem.

Until people start to explore, innovate, and play around with different designs, it’s not clear how we can further progress in the pursuit of general AI. A self-driving bike might not be much to look at, but its hybrid brain is a pretty neat place to start.

*The name, in Chinese, means “heavenly machine,” “unknowable mystery of nature,” or “confidentiality.” Go figure.

Image Credit: Alexander Ryabintsev / Shutterstock.com

Posted in Human Robots