Tag Archives: show

#432181 Putting AI in Your Pocket: MIT Chip Cuts ...

Neural networks are powerful things, but they need a lot of juice. Engineers at MIT have now developed a new chip that cuts neural nets’ power consumption by up to 95 percent, potentially allowing them to run on battery-powered mobile devices.

Smartphones these days are getting truly smart, with ever more AI-powered services like digital assistants and real-time translation. But typically the neural nets crunching the data for these services are in the cloud, with data from smartphones ferried back and forth.

That’s not ideal, as it requires a lot of communication bandwidth and means potentially sensitive data is being transmitted and stored on servers outside the user’s control. But the huge amounts of energy needed to power the GPUs neural networks run on make it impractical to implement them in devices that run on limited battery power.

Engineers at MIT have now designed a chip that cuts that power consumption by up to 95 percent by dramatically reducing the need to shuttle data back and forth between a chip’s memory and processors.

Neural nets consist of thousands of interconnected artificial neurons arranged in layers. Each neuron receives input from multiple neurons in the layer below it, and if the combined input passes a certain threshold it then transmits an output to multiple neurons above it. The strength of the connection between neurons is governed by a weight, which is set during training.

This means that for every neuron, the chip has to retrieve the input data for a particular connection and the connection weight from memory, multiply them, store the result, and then repeat the process for every input. That requires a lot of data to be moved around, expending a lot of energy.
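
To make that concrete, here is a toy Python sketch (illustrative only, not the MIT design) of the sequential fetch-multiply-accumulate loop a conventional digital chip performs for a single neuron:

```python
# Illustrative sketch, not the MIT chip's code: the conventional
# fetch-multiply-accumulate pattern a digital chip performs per neuron.
def neuron_output(inputs, weights, threshold):
    """Compute one neuron's output the conventional, sequential way."""
    total = 0.0
    for x, w in zip(inputs, weights):   # each pair is a memory fetch
        total += x * w                  # multiply, then store the running sum
    return 1.0 if total >= threshold else 0.0  # fire only past the threshold

print(neuron_output([0.2, 0.9, 0.4], [0.5, -0.3, 0.8], threshold=0.1))
```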

The new MIT chip does away with that, instead computing all the inputs in parallel within the memory using analog circuits. That significantly reduces the amount of data that needs to be shoved around and results in major energy savings.
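
Numerically, in-memory analog designs of this kind (often described as crossbars) amount to computing every neuron's weighted sum at once: input voltages multiply stored conductances (Ohm's law) and the resulting currents sum on shared wires (Kirchhoff's law). A hedged sketch of the equivalent math:

```python
import numpy as np

# Hedged sketch of the math an analog in-memory array performs: all of a
# layer's weighted sums in one shot, i.e. one matrix-vector product.
weights = np.array([[0.5, -0.3, 0.8],
                    [0.1,  0.7, -0.2]])   # 2 neurons x 3 inputs ("conductances")
inputs = np.array([0.2, 0.9, 0.4])        # "input voltages"
currents = weights @ inputs               # all weighted sums, computed in parallel
print(currents)                           # summed "currents", one per neuron
```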

The approach requires the connection weights to be binary rather than a range of values, but previous theoretical work had suggested this wouldn't dramatically impact accuracy. Indeed, the researchers found the chip's results were generally within two to three percent of those from a conventional, non-binary neural net running on a standard computer.
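
A rough illustration of what binarization means: keep only each weight's sign (a common trick is to rescale by the mean absolute weight) and compare against the full-precision result. The numbers below are made up; the reported two-to-three-percent gap applies to whole networks, not this toy dot product.

```python
import numpy as np

# Toy comparison of a full-precision vs. a binarized weighted sum.
rng = np.random.default_rng(0)
weights = rng.normal(size=100)
inputs = rng.normal(size=100)

full = inputs @ weights
# Binarize: keep signs only, rescaled by the mean absolute weight
# (a common trick in the binarized-network literature).
binary = inputs @ np.sign(weights) * np.mean(np.abs(weights))
print(full, binary)   # typically close, though not identical
```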

This isn’t the first time researchers have created chips that carry out processing in memory to reduce the power consumption of neural nets, but it’s the first time the approach has been used to run powerful convolutional neural networks popular for image-based AI applications.

“The results show impressive specifications for the energy-efficient implementation of convolution operations with memory arrays,” Dario Gil, vice president of artificial intelligence at IBM, said in a statement.

“It certainly will open the possibility to employ more complex convolutional neural networks for image and video classifications in IoT [the internet of things] in the future.”

It’s not just research groups working on this, though. The desire to get AI smarts into devices like smartphones, household appliances, and all kinds of IoT devices is driving the who’s who of Silicon Valley to pile into low-power AI chips.

Apple has already integrated its Neural Engine into the iPhone X to power things like its facial recognition technology, and Amazon is rumored to be developing its own custom AI chips for the next generation of its Echo digital assistant.

The big chip companies are also increasingly pivoting towards supporting advanced capabilities like machine learning, which has forced them to make their devices ever more energy-efficient. Earlier this year Arm unveiled two new processor designs: the Arm Machine Learning processor, aimed at general AI tasks from translation to facial recognition, and the Arm Object Detection processor for detecting things like faces in images.

Qualcomm’s latest mobile chip, the Snapdragon 845, is likewise heavily focused on AI. The company has also released the Snapdragon 820E, which is aimed at drones, robots, and industrial devices.

Going a step further, IBM and Intel are developing neuromorphic chips whose architectures are inspired by the human brain and its incredible energy efficiency. That could theoretically allow IBM’s TrueNorth and Intel’s Loihi to run powerful machine learning on a fraction of the power of conventional chips, though they are both still highly experimental at this stage.

Getting these chips to run neural nets as powerful as those found in cloud services without burning through batteries too quickly will be a big challenge. But at the current pace of innovation, it doesn’t look like it will be too long before you’ll be packing some serious AI power in your pocket.

Image Credit: Blue Planet Studio / Shutterstock.com

Posted in Human Robots

#432031 Why the Rise of Self-Driving Vehicles ...

It’s been a long time coming. For years Waymo (which began as Google’s self-driving car project, internally codenamed Chauffeur) has been diligently developing, driving, testing, and refining its fleets of various models of self-driving cars. Now Waymo is going big. The company recently placed an order for several thousand new Chrysler Pacifica minivans and next year plans to launch driverless taxis in a number of US cities.

This deal raises one of the biggest unanswered questions about autonomous vehicles: if fleets of driverless taxis make it cheap and easy for regular people to get around, what’s going to happen to car ownership?

One popular line of thought goes as follows: as autonomous ride-hailing services become ubiquitous, people will no longer need to buy their own cars. This notion has a certain logical appeal. It makes sense to assume that as driverless taxis become widely available, most of us will eagerly sell the family car and use on-demand taxis to get to work, run errands, or pick up the kids. After all, vehicle ownership is pricey and most cars spend the vast majority of their lives parked.

Even experts believe commercial availability of autonomous vehicles will cause car sales to drop.

Consulting firm KPMG estimates that by 2030, midsize car sales in the US will decline from today’s 5.4 million units sold each year to less than half that number, a measly 2.1 million units. The think tank RethinkX offers an even more pessimistic estimate (or optimistic, depending on your opinion of cars), predicting that autonomous vehicles will reduce consumer demand for new vehicles by a whopping 70 percent.

The reality is that the impending death of private vehicle sales is greatly exaggerated. Autonomous taxis will indeed be a beneficial and widely embraced form of urban transportation, but most people will still prefer to own their own autonomous vehicle. In fact, the total number of autonomous vehicles sold each year is likely to increase rather than decrease.

When people predict the demise of car ownership, they are overlooking the reality that the new autonomous automotive industry is not going to be just a re-hash of today’s car industry with driverless vehicles. Instead, the automotive industry of the future will be selling what could be considered an entirely new product: a wide variety of intelligent, self-guiding transportation robots. When cars become a widely used type of transportation robot, they will be cheap, ubiquitous, and versatile.

Several unique characteristics of autonomous vehicles will ensure that people will continue to buy their own cars.

1. Cost: Thanks to simpler electric motors and lighter auto bodies, autonomous vehicles will be cheaper to buy and maintain than today’s human-driven vehicles. Some estimates put the price at around $10K per vehicle, a stark contrast with today’s average of $30K per vehicle.

2. Personal belongings: Consumers will be able to do much more in their driverless vehicles, including work, play, and rest. This means they will want to keep more personal items in their cars.

3. Frequent upgrades: The average (human-driven) car today is owned for 10 years. As driverless cars become software-driven devices, their price/performance ratio will track Moore’s law. Their rapid improvement will increase the appeal and frequency of new vehicle purchases.

4. Instant accessibility: In a dense urban setting, a driverless taxi is able to show up within minutes of being summoned. But not so in rural areas, where people live miles apart. For many, delay and “loss of control” over their own mobility will increase the appeal of owning their own vehicle.

5. Diversity of form and function: Autonomous vehicles will be available in a wide variety of sizes and shapes. Consumers will drive demand for custom-made, purpose-built autonomous vehicles whose form is adapted for a particular function.

Let’s explore each of these characteristics in more detail.

Autonomous vehicles will cost less for several reasons. For one, they will be powered by electric motors, which are cheaper to build and maintain than gasoline engines. Removing human drivers will also save consumers money. Autonomous vehicles will be much less likely to have accidents, so they can be built out of lightweight, lower-cost materials and will be cheaper to insure. And with the human interface no longer needed, autonomous vehicles won’t be burdened by the manufacturing costs of a complex dashboard, steering wheel, and foot pedals.

While hop-on, hop-off autonomous taxi-based mobility services may be ideal for some of the urban population, several sizeable customer segments will still want to own their own cars.

These include people who live in sparsely populated rural areas who can’t afford to wait extended periods of time for a taxi to appear. Families with children will prefer to own their own driverless cars to house their children’s car seats, favorite toys, and sippy cups. Another loyal car-buying segment will be die-hard gadget-hounds who will eagerly buy a sexy upgraded model every year or so, unable to resist the siren song of AI that is three times as safe or a ride that is twice as smooth.

Finally, consider the allure of robotic diversity.

Commuters will invest in a home office on wheels: a sleek, traveling workspace resembling the first-class suite on an airplane. On the high end of the market, city-dwellers and country-dwellers alike will special-order custom-made autonomous vehicles whose shape and on-board gadgetry are adapted for a particular function or hobby. Privately-owned small businesses will buy their own autonomous delivery robots, ranging in size from knee-high, last-mile delivery pods to giant, long-haul shipping vehicles.

As autonomous vehicles near commercial viability, Waymo’s procurement deal with Fiat Chrysler is just the beginning.

The exact value of this future automotive industry has yet to be defined, but research from Intel’s internal autonomous vehicle division estimates this new so-called “passenger economy” could be worth nearly $7 trillion a year. To position themselves to capture a chunk of this potential revenue, companies whose businesses used to lie in previously disparate fields such as robotics, software, ships, and entertainment (to name but a few) have begun to form a bewildering web of what they hope will be symbiotic partnerships. Car hailing and chip companies are collaborating with car rental companies, who in turn are befriending giant software firms, who are launching joint projects with all sizes of hardware companies, and so on.

Last year, car companies sold an estimated 80 million new cars worldwide. Over the course of nearly a century, car companies and their partners, global chains of suppliers and service providers, have become masters at mass-producing and maintaining sturdy and cost-effective human-driven vehicles. As autonomous vehicle technology becomes ready for mainstream use, traditional automakers are being forced to grapple with the painful realization that they must compete on a new playing field.

The challenge for traditional car-makers won’t be that people no longer want to own cars. Instead, the challenge will be learning to compete in a new and larger transportation industry where consumers will choose their product according to the appeal of its customized body and the quality of its intelligent software.

Melba Kurman and Hod Lipson are the authors of Driverless: Intelligent Cars and the Road Ahead and Fabricated: The New World of 3D Printing.

Image Credit: hfzimages / Shutterstock.com


Posted in Human Robots

#432009 How Swarm Intelligence Is Making Simple ...

As a group, simple creatures following simple rules can display a surprising amount of complexity, efficiency, and even creativity. Known as swarm intelligence, this trait is found throughout nature, but researchers have recently begun using it to transform various fields such as robotics, data mining, medicine, and blockchains.

Ants, for example, can only perform a limited range of functions, but an ant colony can build bridges, create superhighways of food and information, wage war, and enslave other ant species—all of which are beyond the comprehension of any single ant. Likewise, schools of fish, flocks of birds, beehives, and other species exhibit behavior indicative of planning by a higher intelligence that doesn’t actually exist.

It happens by a process called stigmergy. Simply put, a small change by a group member causes other members to behave differently, leading to a new pattern of behavior.

When an ant finds a food source, it marks the path with pheromones. This attracts other ants to that path, leads them to the food source, and prompts them to mark the same path with more pheromones. Over time, the most efficient route will become the superhighway, as the faster and easier a path is, the more ants will reach the food and the more pheromones will be on the path. Thus, it looks as if a more intelligent being chose the best path, but it emerged from the tiny, simple changes made by individuals.
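
A minimal simulation makes the idea tangible. The sketch below (toy assumptions: two fixed paths, path choice proportional to pheromone, constant evaporation) shows pheromone concentrating on the shorter path with no central planner:

```python
import random

# Toy stigmergy sketch: each ant picks a path in proportion to its pheromone
# level; shorter paths are completed faster, so they earn more reinforcement.
pheromone = {"short": 1.0, "long": 1.0}
length = {"short": 1.0, "long": 2.0}

for _ in range(1000):
    total = sum(pheromone.values())
    path = "short" if random.random() < pheromone["short"] / total else "long"
    pheromone[path] += 1.0 / length[path]   # shorter path -> bigger deposit
    for p in pheromone:
        pheromone[p] *= 0.99                # evaporation forgets bad paths

print(pheromone)  # the "superhighway" emerges: short >> long
```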

So what does this mean for humans? Well, a lot. In the past few decades, researchers have developed numerous algorithms and metaheuristics, such as ant colony optimization and particle swarm optimization, and they are rapidly being adopted.
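
As one example of such a metaheuristic, here is a bare-bones particle swarm optimization sketch that minimizes f(x) = x^2; the inertia and attraction constants are typical textbook values, not tuned settings:

```python
import random

# Minimal particle swarm optimization: each particle is pulled toward its own
# best-known position and the swarm's best-known position.
def f(x):
    return x * x

particles = [{"x": random.uniform(-10, 10), "v": 0.0} for _ in range(20)]
for p in particles:
    p["best"] = p["x"]
gbest = min(particles, key=lambda p: f(p["best"]))["best"]

for _ in range(100):
    for p in particles:
        r1, r2 = random.random(), random.random()
        p["v"] = (0.7 * p["v"]                       # inertia
                  + 1.5 * r1 * (p["best"] - p["x"])  # pull toward personal best
                  + 1.5 * r2 * (gbest - p["x"]))     # pull toward swarm best
        p["x"] += p["v"]
        if f(p["x"]) < f(p["best"]):
            p["best"] = p["x"]
    gbest = min(particles, key=lambda p: f(p["best"]))["best"]

print(gbest)  # converges toward 0, the swarm's collective find
```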

Swarm Robotics
A swarm of robots would work on the same principles as an ant colony: each member has a simple set of rules to follow, leading to self-organization and self-sufficiency.

For example, researchers at the Georgia Robotics and InTelligent Systems (GRITS) Lab created a small swarm of simple robots that can spell and play piano. The robots cannot communicate; based solely on the positions of surrounding robots, they use a specially created algorithm to determine the optimal path to complete their task.

This is also immensely useful for drone swarms.

Last February, EHang, an aviation company out of China, flew a swarm of a thousand drones that not only lit the sky with colorful, intricate displays, but also demonstrated the ability to improvise and troubleshoot errors entirely autonomously.

Further, just recently, the University of Cambridge and Koç University unveiled their idea for what they call the Energy Neutral Internet of Drones. Remarkably, drones in this swarm would take the initiative to share information or energy with other drones that missed a communication or are running low on energy.

Militaries around the world are utilizing this as well.

Last year, the US Department of Defense announced it had successfully tested a swarm of miniature drones that could carry out complex missions cheaper and more efficiently. They claimed, “The micro-drones demonstrated advanced swarm behaviors such as collective decision-making, adaptive formation flying, and self-healing.”

Some experts estimate at least 30 nations are actively developing drone swarms—and even submersible drones—for military missions, including intelligence gathering, missile defense, precision missile strikes, and enhanced communication.

NASA also plans on deploying swarms of tiny spacecraft for space exploration, and the medical community is looking into using swarms of nanobots for precision delivery of drugs, microsurgery, targeting toxins, and biological sensors.

What If Humans Are the Ants?
The strength of any blockchain comes from the size and diversity of the community supporting it. Cryptocurrencies like Bitcoin, Ethereum, and Litecoin are driven by the people using, investing in, and, most importantly, mining them so their blockchains can function. Without an active community, or swarm, their blockchains wither away.

When viewed from a great height, a blockchain performs eerily like an ant colony in that it will naturally find the most efficient way to move vast amounts of information.

Miners compete with each other to perform the complex calculations necessary to add another block, and the winner is rewarded with the blockchain’s native currency and agreed-upon fees. Naturally, the miner with more powerful computers is more likely to win the reward, which in turn strengthens that miner’s ability to mine and collect even more rewards. Over time, fewer and fewer miners will exist, as the winners become able to shoulder more of the workload more efficiently, in much the same way that ants build superhighways.
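
For readers who want the mechanics, the mining race boils down to guessing a nonce whose hash meets a difficulty target; more hashing power means more guesses per second. A toy Python version (real Bitcoin mining uses double SHA-256 and vastly higher difficulty):

```python
import hashlib

# Toy proof-of-work: find a nonce whose SHA-256 hash of (block data + nonce)
# starts with a run of zeros. Difficulty 4 keeps this fast enough to run.
def mine(block_data: str, difficulty: int = 4) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce            # the "winning" guess
        nonce += 1

print(mine("block #42: alice->bob 1 BTC"))
```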

Further, a company called Unanimous AI has developed algorithms that allow humans to collectively make predictions. So far, the AI algorithms and their human participants have made some astoundingly accurate predictions, such as the first four winning horses of the Kentucky Derby, the Oscar winners, the Stanley Cup winners, and others. The more people involved in the swarm, the greater their predictive power will be.

To be clear, this is not a prediction based on group consensus. Rather, the swarm of humans uses software to input their opinions in real time, thus making micro-changes to the rest of the swarm and the inputs of other members.

Studies show that swarm intelligence consistently outperforms individuals and crowds working without the algorithms. While this is only the tip of the iceberg, some have suggested swarm intelligence can revolutionize how doctors diagnose a patient or how products are marketed to consumers. It might even be an essential step in truly creating AI.

While swarm intelligence is already an essential part of many species’ success, it’s only a matter of time before humans fully harness its power as well.

Image Credit: Nature Bird Photography / Shutterstock.com

Posted in Human Robots

#431999 Brain-Like Chips Now Beat the Human ...

Move over, deep learning. Neuromorphic computing—the next big thing in artificial intelligence—is on fire.

Just last week, two studies separately unveiled computer chips modeled after information processing in the human brain.

The first, published in Nature Materials, tackled a longstanding problem: unpredictability at artificial synapses—the gaps between neurons where information is transmitted and stored. The second, published in Science Advances, further amped up the system’s computational power, filling synapses with nanoclusters of magnetic material to bolster information encoding.

The result? Brain-like hardware systems that compute faster—and more efficiently—than the human brain.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” said Dr. Jeehwan Kim, who led the first study at MIT in Cambridge, Massachusetts.

Experts are hopeful.

“The field’s full of hype, and it’s nice to see quality work presented in an objective way,” said Dr. Carver Mead, an engineer at the California Institute of Technology in Pasadena not involved in the work.

Software to Hardware
The human brain is the ultimate computational wizard. With roughly 100 billion neurons densely packed into a space the size of a small football, the brain can deftly handle complex computation at lightning speed using very little energy.

AI experts have taken note. The past few years have seen brain-inspired algorithms that can identify faces, mimic voices, and play a variety of games at—and often above—human capability.

But software is only part of the equation. Our current computers, with their transistors and binary digital systems, aren’t equipped to run these powerful algorithms.

That’s where neuromorphic computing comes in. The idea is simple: fabricate a computer chip that mimics the brain at the hardware level. Here, data is both processed and stored within the chip in an analog manner. Each artificial synapse can accumulate and integrate small bits of information from multiple sources and fire only when it reaches a threshold—much like its biological counterpart.
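
The accumulate-and-fire behavior described above is easy to sketch in code. Here is a minimal integrate-and-fire model with illustrative constants (threshold, leak), not parameters from either study:

```python
# Minimal integrate-and-fire sketch: accumulate small inputs and emit a
# spike only when the running sum crosses a threshold.
def integrate_and_fire(inputs, threshold=1.0, leak=0.95):
    potential, spikes = 0.0, []
    for x in inputs:
        potential = potential * leak + x   # accumulate with a slow leak
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                # reset after firing
        else:
            spikes.append(0)
    return spikes

print(integrate_and_fire([0.3, 0.4, 0.5, 0.1, 0.9, 0.2]))  # [0, 0, 1, 0, 0, 1]
```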

Experts believe the speed and efficiency gains will be enormous.

For one, the chips will no longer have to transfer data between the central processing unit (CPU) and storage blocks, which wastes both time and energy. For another, like biological neural networks, neuromorphic devices can support neurons that run millions of streams of parallel computation.

A “Brain-on-a-chip”
Optimism aside, reproducing the biological synapse in hardware form hasn’t been as easy as anticipated.

Neuromorphic chips exist in many forms, but often look like a nanoscale metal sandwich. The “bread” pieces are generally made of conductive plates surrounding a switching medium—a conductive material of sorts that acts like the gap in a biological synapse.

When a voltage is applied, as in the case of data input, ions move within the switching medium, creating conductive filaments that reach the downstream plate. This change in conductivity mimics the way biological synapses change their “weight,” or the strength of connectivity between two adjacent neurons.

But so far, neuromorphic synapses have been rather unpredictable. According to Kim, that’s because the switching medium is often made of amorphous material that can’t channel ions to exact locations on the downstream plate.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” explains Kim. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects.”
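
A toy model captures the problem: each write pulse should move the synapse's conductance by a fixed step, but defects scatter the ions, so the realized step varies from write to write. The step size and noise levels below are invented for illustration, not measured device data:

```python
import random

# Toy model of write variability in an artificial synapse: each pulse is
# supposed to add a fixed conductance step, but defect-induced noise makes
# the actual step random.
def write_pulses(n_pulses, step=0.1, defect_noise=0.05):
    conductance = 0.0
    for _ in range(n_pulses):
        conductance += step + random.gauss(0.0, defect_noise)
    return conductance

print(write_pulses(10, defect_noise=0.05))   # amorphous: unpredictable endpoint
print(write_pulses(10, defect_noise=0.001))  # near-defect-free channel: repeatable
```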

In his new study, Kim and colleagues swapped the jelly-like amorphous switching medium for crystalline silicon, engineered so that a single line of defects acts like a channel to guide ions.

The chip starts with a thin wafer of silicon etched with a honeycomb-like pattern. On top is a layer of silicon germanium—something often present in transistors—in the same pattern. This creates a funnel-like dislocation, a kind of Grand Canal that perfectly shuttles ions across the artificial synapse.

The researchers then made a neuromorphic chip containing these synapses and shot an electrical zap through them. Incredibly, the synapses’ responses varied by only four percent—far more uniform than any neuromorphic device made with an amorphous switching medium.

In a computer simulation, the team built a multi-layer artificial neural network using parameters measured from their device. After tens of thousands of training examples, their neural network correctly recognized handwriting samples 95 percent of the time, just 2 percent lower than state-of-the-art software algorithms.

The upside? The neuromorphic chip requires much less space than the hardware that runs deep learning algorithms. Forget supercomputers—these chips could one day run complex computations right on our handheld devices.

A Magnetic Boost
Meanwhile, in Boulder, Colorado, Dr. Michael Schneider at the National Institute of Standards and Technology also realized that the standard switching medium had to go.

“There must be a better way to do this, because nature has figured out a better way to do this,” he says.

His solution? Nanoclusters of magnetic manganese.

Schneider’s chip contained two slices of superconducting electrodes made out of niobium, which channel electricity with no resistance. When researchers applied different magnetic fields to the synapse, they could control the alignment of the manganese “filling.”

The switch gave the chip a double boost. For one, by aligning the switching medium, the team made the synapse’s behavior predictable and uniform. For another, the magnetic manganese itself adds computational power: the chip can now encode data in both the level of electrical input and the direction of the magnetic alignment without bulking up the synapse.

It seriously worked. At one billion times per second, the chips fired several orders of magnitude faster than human neurons. Plus, the chips required just one ten-thousandth of the energy used by their biological counterparts, all the while synthesizing input from nine different sources in an analog manner.

The Road Ahead
These studies show that we may be nearing a benchmark where artificial synapses match—or even outperform—their human inspiration.

But to Dr. Steven Furber, an expert in neuromorphic computing, we still have a ways to go before the chips go mainstream.

Many of the special materials used in these chips require specific operating temperatures, he says. Magnetic manganese chips, for example, only work at temperatures near absolute zero, meaning they require giant cooling tanks filled with liquid helium—obviously not practical for everyday use.

Another challenge is scalability. Millions of synapses will be necessary before a neuromorphic device can be used to tackle everyday problems such as facial recognition. So far, no deal.

But these problems may in fact be a driving force for the entire field. Intense competition could push teams into exploring different ideas and solutions to similar problems, much like these two studies.

If so, future chips may come in diverse flavors. Similar to our vast array of deep learning algorithms and operating systems, the computer chips of the future may also vary depending on specific requirements and needs.

It is worth developing as many different technological approaches as possible, says Furber, especially as neuroscientists increasingly understand what makes our biological synapses—the ultimate inspiration—so amazingly efficient.

Image Credit: arakio / Shutterstock.com

Posted in Human Robots

#431987 OptoForce Industrial Robot Sensors

OptoForce Sensors Provide Industrial Robots with a “Sense of Touch” to Advance Manufacturing Automation

Global efforts to expand the capabilities of industrial robots are on the rise, as the demand from manufacturing companies to strengthen their operations and improve performance grows.

Hungary-based OptoForce, with a North American office in Charlotte, North Carolina, is one company that continues to support organizations with new robotic capabilities, as evidenced by its several new applications released in 2017.

The company, a leading provider of multi-axis force and torque sensors for robotics, delivers six-degree-of-freedom force and torque measurement for industrial automation and offers sensors for most industrial robots in use today.

It recently developed and brought to market three new applications for KUKA industrial robots.

The new applications are hand guiding, presence detection, and center pointing, and they will be used by both end users and systems integrators. Each application is summarized below, along with what it provides for KUKA robots and a video demonstration of how it operates.


Hand Guiding: With OptoForce’s Hand Guiding application, KUKA robots can be guided easily and smoothly by hand along an assigned direction and selected route (a minimal control sketch follows this list). This video shows specifically how to program the robot for hand guiding.

Presence Detection: This application allows KUKA robots to detect the presence of a specific object and to find the object even if it has moved. Visit here to learn more about presence detection.

Center Pointing: With this application, the OptoForce sensor helps the KUKA robot find the center point of an object by providing the robot with a sense of touch. This solution also works with glossy metal objects, where a vision system would not be able to determine the object’s position. This video shows in detail how the center pointing application works.
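
For a flavor of how hand guiding can be built on a force/torque sensor, here is a hypothetical admittance-control sketch: the measured wrench at the wrist is mapped to a velocity command so the arm yields to a human push. The gains, deadband, and function names are assumptions for illustration, not OptoForce's actual API:

```python
import numpy as np

# Hypothetical admittance-control sketch for hand guiding with a 6-axis
# force/torque sensor. All constants and names are illustrative.
ADMITTANCE = np.diag([0.002] * 3 + [0.01] * 3)  # gains: m/s per N, rad/s per Nm
DEADBAND = 2.0                                   # ignore forces below 2 N (noise)

def hand_guide_step(wrench: np.ndarray) -> np.ndarray:
    """wrench = [Fx, Fy, Fz, Tx, Ty, Tz] as measured at the robot's wrist."""
    if np.linalg.norm(wrench[:3]) < DEADBAND:
        return np.zeros(6)             # nobody is pushing: hold still
    return ADMITTANCE @ wrench         # push harder -> move faster

print(hand_guide_step(np.array([10.0, 0.0, 5.0, 0.0, 0.5, 0.0])))
```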

The company’s CEO explained how these applications help KUKA robots and industrial automation.

“OptoForce’s new applications for KUKA robots pave the way for substantial improvements in industrial automation for both end users and systems integrators,” said Ákos Dömötör, CEO of OptoForce. “Our 6-axis force/torque sensors are combined with highly functional hardware and a comprehensive software package, which include the pre-programmed industrial applications. Essentially, we’re adding a ‘sense of touch’ to KUKA robot arms, enabling these robots to have abilities similar to a human hand, and opening up numerous new capabilities in industrial automation.”

Along with these new applications recently released for KUKA robots, OptoForce sensors are also being used by various companies on numerous industrial robots and manufacturing automation projects around the world. Examples of other uses include: path recording, polishing plastic and metal, box insertion, placing pins in holes, stacking/destacking, palletizing, and metal part sanding.

Specifically, some of the projects currently underway include: plastic parting line removal; obstacle detection for a major car manufacturer; and a center point insertion application for a car part supplier, in which the robot’s task is to insert a mirror, completely centered, onto a side mirror housing.

For more information, visit www.optoforce.com.

This post was provided by: OptoForce

The post OptoForce Industrial Robot Sensors appeared first on Roboticmagazine.

Posted in Human Robots