Tag Archives: central

#433776 Why We Should Stop Conflating Human and ...

It’s common to hear phrases like ‘machine learning’ and ‘artificial intelligence’ and believe that somehow, someone has managed to replicate a human mind inside a computer. This, of course, is untrue—but part of the reason this idea is so pervasive is that the metaphor of human learning and intelligence has been quite useful in explaining machine learning and artificial intelligence.

Indeed, some AI researchers maintain a close link with the neuroscience community, and inspiration runs in both directions. But the metaphor can be a hindrance to people trying to explain machine learning to those less familiar with it. One of the biggest risks of conflating human and machine intelligence is that we start to hand over too much agency to machines. For those of us working with software, it’s essential that we remember the agency is human—it’s humans who build these systems, after all.

It’s worth unpacking the key differences between machine and human intelligence. While there are certainly similarities, it’s by looking at what makes them different that we can better grasp how artificial intelligence works, and how we can build and use it effectively.

Neural Networks
Central to the metaphor that links human and machine learning is the concept of a neural network. The biggest difference between a human brain and an artificial neural net is the sheer scale of the brain’s neural network. What’s crucial is that it’s not simply the number of neurons in the brain (which reach into the billions), but more precisely, the mind-boggling number of connections between them.

But the issue runs deeper than questions of scale. The human brain is qualitatively different from an artificial neural network for two other important reasons: the connections that power it are analogue, not digital, and the neurons themselves aren’t uniform (as they are in an artificial neural network).

This is why the brain is such a complex thing. Even the most complex artificial neural network, while often difficult to interpret and unpack, has an underlying architecture and principles guiding it (this is what we’re trying to do, so let’s construct the network like this…).

Intricate as they may be, neural networks in AIs are engineered with a specific outcome in mind. The human mind, however, doesn’t have the same degree of intentionality in its engineering. Yes, it should help us do all the things we need to do to stay alive, but it also allows us to think critically and creatively in a way that doesn’t need to be programmed.
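The intentionality described above can be made concrete. Below is a minimal, purely illustrative sketch of how an artificial network’s architecture is fixed by a human before any learning happens—the depth, the layer widths, the single uniform activation; none of the numbers come from any real system:

```python
import math
import random

def build_network(layer_sizes, seed=0):
    """Build a tiny feed-forward net. Every architectural choice
    (depth, width, initialization) is fixed up front by a human
    engineer -- nothing about the structure is learned."""
    rng = random.Random(seed)
    return [[[rng.gauss(0, 0.1) for _ in range(n_out)]
             for _ in range(n_in)]
            for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

def forward(weights, x):
    """Propagate an input through the engineered architecture.
    Every unit applies the same activation -- the uniformity the
    article contrasts with biological neurons."""
    for layer in weights:
        x = [math.tanh(sum(xi * w for xi, w in zip(x, col)))
             for col in zip(*layer)]
    return x

# The engineer decides: 4 inputs, two hidden layers of 8 units, 2 outputs.
net = build_network([4, 8, 8, 2])
out = forward(net, [1.0, 1.0, 1.0, 1.0])
```

The point is not the math but the intentionality: the outcome the network serves is decided before it sees a single example.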

The Beautiful Simplicity of AI
The fact that artificial intelligence systems are so much simpler than the human brain is, ironically, what enables AIs to deal with far greater computational complexity than we can.

Artificial neural networks can hold much more information and data than the human brain, largely due to the type of data that is stored and processed in a neural network. It is discrete and specific, like an entry in an Excel spreadsheet.

In the human brain, data doesn’t have this same discrete quality. So while an artificial neural network can process very specific data at an incredible scale, it isn’t able to process information in the rich and multidimensional manner a human brain can. This is the key difference between an engineered system and the human mind.

Despite years of research, the human mind still remains somewhat opaque. This is because the analog synaptic connections between biological neurons are far harder to probe and map than the digital connections within an artificial neural network.

Speed and Scale
Consider what this means in practice. The relative simplicity of an AI allows it to do a very complex task very well, and very quickly. A human brain simply can’t process data at scale and speed in the way AIs need to if they’re, say, translating speech to text, or processing a huge set of oncology reports.

Essential to the way AI works in both these contexts is that it breaks data and information down into tiny constituent parts. For example, it could break sounds down into phonetic text, which could then be translated into full sentences, or break images into pieces to understand the rules of how a huge set of them is composed.

Humans often do a similar thing, and this is the point at which machine learning is most like human learning; like algorithms, humans break data or information into smaller chunks in order to process it.

But there’s a reason for this similarity. This breakdown process is engineered into every neural network by a human engineer. What’s more, the way this process is designed will be down to the problem at hand. How an artificial intelligence system breaks down a data set is its own way of ‘understanding’ it.

Even while running a highly complex algorithm unsupervised, the parameters of how an AI learns—how it breaks data down in order to process it—are always set from the start.
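As a toy illustration of parameters being set from the start: even the simplest breakdown of data happens inside a frame a human fixed beforehand. The chunk size below is an arbitrary illustrative choice, not drawn from any real system:

```python
def chunk(data, size):
    """Break data down into constituent parts of a pre-set size --
    the breakdown itself is engineered, not discovered."""
    return [data[i:i + size] for i in range(0, len(data), size)]

# The human sets the parameter before any data is processed;
# the machine only ever applies it.
CHUNK_SIZE = 4  # illustrative choice, fixed from the start
pieces = chunk("the quick brown fox", CHUNK_SIZE)
# pieces -> ['the ', 'quic', 'k br', 'own ', 'fox']
```

However sophisticated the downstream processing, the way the data gets carved up was a human decision.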

Human Intelligence: Defining Problems
Human intelligence doesn’t have this set of limitations, which is what makes us so much more effective at problem-solving. It’s the human ability to ‘create’ problems that makes us so good at solving them. There’s an element of contextual understanding and decision-making in the way humans approach problems.

AIs might be able to unpack problems or find new ways into them, but they can’t define the problem they’re trying to solve.

Algorithmic bias has come into focus in recent years, with an increasing number of scandals around biased AI systems. Of course, this is caused by the biases of those making the algorithms, but it underlines the point that algorithmic biases can only be identified by human intelligence.

Human and Artificial Intelligence Should Complement Each Other
We must remember that artificial intelligence and machine learning aren’t simply things that ‘exist’ that we can no longer control. They are built, engineered, and designed by us. This mindset puts us in control of the future, and makes algorithms even more elegant and remarkable.

Image Credit: Liu zishan/Shutterstock

Posted in Human Robots

#433739 No Safety Driver Here—Volvo’s New ...

Each time there’s a headline about driverless trucking technology, another piece is taken out of the old equation. First, an Uber/Otto truck’s safety driver went hands-off once the truck reached the highway (and said truck successfully delivered its valuable cargo of 50,000 beers). Then, Starsky Robotics announced its trucks would start making autonomous deliveries without a human in the vehicle at all.

Now, Volvo has taken the tech one step further. Its new trucks not only won’t have safety drivers, they won’t even have the option of putting safety drivers behind the wheel, because there is no wheel—and no cab, either.

Vera, as the technology’s been dubbed, was unveiled in September, and consists of a sort of flat-Tesla-like electric car with a standard trailer hookup. The vehicles are connected to a cloud service, which also connects them to each other and to a control center. The control center monitors the trucks’ positioning (they’re designed to locate their position to within centimeters), battery charge, load content, service requirements, and other variables. The driveline and battery pack used in the cars are the same as those Volvo uses in its existing electric trucks.

You won’t see these cruising down an interstate highway, though, or even down a local highway. Vera trucks are designed to be used on short, repetitive routes contained within limited areas—think shipping ports, industrial parks, or logistics hubs. They’re limited to slower speeds than normal cars or trucks, and will be able to operate 24/7. “We will see much higher delivery precision, as well as improved flexibility and productivity,” said Mikael Karlsson, VP of Autonomous Solutions at Volvo Trucks. “Today’s operations are often designed according to standard daytime work hours, but a solution like Vera opens up the possibility of continuous round-the-clock operation and a more optimal flow. This in turn can minimize stockpiles and increase overall productivity.”

The trucks are sort of like bigger versions of Amazon’s Kiva robots, which scoot around the aisles of warehouses and fulfillment centers moving pallets between shelves and fetching goods to be shipped.

Pairing trucks like Vera with robots like Kiva makes for a fascinating future landscape of logistics and transport; cargo will be moved from docks to warehouses by a large, flat robot-on-wheels, then distributed throughout that warehouse by smaller, flat robots-on-wheels. To really see the automated process through to the end point, even smaller flat robots-on-wheels will be used to deliver people’s goods right to their front doors.

Sounds like a lot of robots and not a lot of humans, right? Anticipating its technology’s implication in the ongoing uproar over technological unemployment, Volvo has already made statements about its intentions to continue to employ humans alongside the driverless trucks. “I foresee that there will be an increased level of automation where it makes sense, such as for repetitive tasks. This in turn will drive prosperity and increase the need for truck drivers in other applications,” said Karlsson.

The end-to-end automation concept has already been put into practice in Caofeidian, a northern Chinese city that houses the world’s first fully autonomous harbor, aiming to be operational by the end of this year. Besides replacing human-driven trucks with autonomous ones (made by Chinese startup TuSimple), the port is using automated cranes and a coordinating central control system.

Besides Uber/Otto, Tesla, and Daimler, which are all working on driverless trucks with a more conventional design (meaning they still have a cab and look like you’d expect a truck to look), Volvo also has competition from a company called Einride. The Swedish startup’s electric, cabless T/Pod looks a lot like Vera, but has some fundamental differences. Rather than being tailored to short distances and high capacity, Einride’s trucks are meant for medium distance and capacity, like moving goods from a distribution center to a series of local stores.

Vera trucks are currently still in the development phase. But since their intended use is quite specific and limited (Karlsson noted “Vera is not intended to be a solution for everyone, everywhere”), the technology could likely be rolled out faster than its more general-use counterparts. Having cabless electric trucks take over short routes in closed environments would be one more baby step along the road to a driverless future—and a testament to the fact that self-driving technology will move into our lives and our jobs incrementally, ostensibly giving us the time we’ll need to adapt and adjust.

Image Credit: Volvo Trucks

Posted in Human Robots

#432878 Chinese Port Goes Full Robot With ...

By the end of 2018, something will be very different about the harbor area in the northern Chinese city of Caofeidian. If you were to visit, the whirring cranes and tractors driving containers to and fro would be the only things in sight.

Caofeidian is set to become the world’s first fully autonomous harbor by the end of the year. The US-Chinese startup TuSimple, a specialist in developing self-driving trucks, will replace human-driven terminal tractor-trucks with 20 self-driving models. A separate company handles crane automation, and a central control system will coordinate the movements of both.

According to Robert Brown, Director of Public Affairs at TuSimple, the project could quickly transform into a much wider trend. “The potential for automating systems in harbors and ports is staggering when considering the number of deep-water and inland ports around the world. At the same time, the closed, controlled nature of a port environment makes it a perfect proving ground for autonomous truck technology,” he said.

Going Global
The autonomous cranes and trucks have a big task ahead of them. Caofeidian currently processes around 300,000 TEU containers a year. Even if you were dealing with Lego bricks, that number of units would get you a decent-sized cathedral or a 22-foot-long aircraft carrier. For any maritime fans—or people who enjoy the moving of heavy objects—TEU stands for twenty-foot equivalent unit. It is the industry standard for containers. A TEU equals an 8-foot (2.43 meter) wide, 8.5-foot (2.59 meter) high, and 20-foot (6.06 meter) long container.

While impressive, the Caofeidian number pales in comparison with the biggest global ports like Shanghai, Singapore, Busan, or Rotterdam. For example, 2017 saw more than 40 million TEU moved through Shanghai port facilities.

Self-driving container vehicles have been trialled elsewhere, including in Yangshan, close to Shanghai, and Rotterdam. Qingdao New Qianwan Container Terminal in China recently laid claim to being the first fully automated terminal in Asia.

The potential for efficiencies has many ports interested in automation. Qingdao said its systems allow the terminal to operate in complete darkness and have reduced labor costs by 70 percent while increasing efficiency by 30 percent. In some cases, the number of workers needed to unload a cargo ship has gone from 60 to 9.

TuSimple says it is in negotiations with several other ports and also sees potential in related logistics-heavy fields.

Stable Testing Ground
For autonomous vehicles, ports seem like a perfect testing ground. They are restricted, confined areas with few to no pedestrians where operating speeds are limited. The predictability makes it unlike, say, city driving.

Robert Brown describes it as an ideal setting for the first adaptation of TuSimple’s technology. The company, which is backed by chipmaker Nvidia, among others, has been retrofitting existing vehicles from Shaanxi Automobile Group with sensors and technology.

At the same time, it is running open road tests in Arizona and China of its Class 8 Level 4 autonomous trucks.

The Camera Approach
Dozens of autonomous truck startups are reported to have launched in China over the past two years. In other countries the situation is much the same, as the race for the future of goods transportation heats up. Startup companies like Embark, Einride, Starsky Robotics, and Drive.ai are just a few of the names in the space. They are facing competition from the likes of Tesla, Daimler, VW, Uber’s Otto subsidiary, and in March, Waymo announced it too was getting into the truck race.

Compared to many of its competitors, TuSimple’s autonomous driving system is based on a different approach. Instead of the laser-based LIDAR sensors most rivals rely on, TuSimple primarily uses cameras to gather data about its surroundings. Currently, the company uses ten cameras, including forward-facing, backward-facing, and wide-lens. Together, they produce the 360-degree “God View” of the vehicle’s surroundings, which is interpreted by the onboard autonomous driving systems.

Each camera gathers information at 30 frames a second. Millimeter wave radar is used as a secondary sensor. In total, the vehicles generate what Robert Brown describes with a laugh as “almost too much” data about their surroundings, and the system can locate and identify objects at ranges beyond 300 meters. This includes objects that have given LIDAR problems, such as black vehicles.
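Those figures allow a back-of-envelope estimate of the data volume involved. The camera count and frame rate come from the text; the frame resolution and bytes per pixel are assumptions for illustration only:

```python
# From the text: ten cameras, each at 30 frames per second.
CAMERAS = 10
FPS = 30

# Assumptions for illustration only -- the article gives no resolution:
# 1080p frames, 3 bytes per pixel, no compression.
WIDTH, HEIGHT, BYTES_PER_PIXEL = 1920, 1080, 3

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL  # ~6.2 MB per raw frame
raw_rate = CAMERAS * FPS * frame_bytes          # bytes per second
print(f"~{raw_rate / 1e9:.1f} GB/s of raw camera data")  # ~1.9 GB/s
```

Under these assumptions the onboard systems would be digesting on the order of gigabytes of raw pixels every second—"almost too much" indeed.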

Another advantage is price. Companies are often loath to reveal exact figures, but Tesla has gone as far as to say that the ‘expected’ price of its autonomous truck will be from $150,000 and upwards. While unconfirmed, TuSimple’s retrofitted, camera-based solution is thought to cost around $20,000.

Image Credit: chinahbzyg / Shutterstock.com

Posted in Human Robots

#431999 Brain-Like Chips Now Beat the Human ...

Move over, deep learning. Neuromorphic computing—the next big thing in artificial intelligence—is on fire.

Just last week, two studies individually unveiled computer chips modeled after information processing in the human brain.

The first, published in Nature Materials, found a solution to deal with unpredictability at synapses—the junctions between neurons where information is transmitted and stored. The second, published in Science Advances, further amped up the system’s computational power, filling synapses with nanoclusters of supermagnetic material to bolster information encoding.

The result? Brain-like hardware systems that compute faster—and more efficiently—than the human brain.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” said Dr. Jeehwan Kim, who led the first study at MIT in Cambridge, Massachusetts.

Experts are hopeful.

“The field’s full of hype, and it’s nice to see quality work presented in an objective way,” said Dr. Carver Mead, an engineer at the California Institute of Technology in Pasadena not involved in the work.

Software to Hardware
The human brain is the ultimate computational wizard. With roughly 100 billion neurons densely packed into the size of a small football, the brain can deftly handle complex computation at lightning speed using very little energy.

AI experts have taken note. The past few years saw brain-inspired algorithms that can identify faces, falsify voices, and play a variety of games at—and often above—human capability.

But software is only part of the equation. Our current computers, with their transistors and binary digital systems, aren’t equipped to run these powerful algorithms.

That’s where neuromorphic computing comes in. The idea is simple: fabricate a computer chip that mimics the brain at the hardware level. Here, data is both processed and stored within the chip in an analog manner. Each artificial synapse can accumulate and integrate small bits of information from multiple sources and fire only when it reaches a threshold—much like its biological counterpart.
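The accumulate-and-fire behavior described here is essentially the classic integrate-and-fire model. A minimal sketch, with an arbitrary threshold chosen purely for illustration:

```python
def integrate_and_fire(inputs, threshold=1.0):
    """Accumulate small bits of information from multiple sources and
    fire only on reaching a threshold -- the behavior an artificial
    synapse borrows from its biological counterpart."""
    potential = 0.0
    spikes = []
    for signal in inputs:
        potential += signal            # integrate the incoming signal
        if potential >= threshold:     # threshold crossed: fire
            spikes.append(1)
            potential = 0.0            # reset after firing
        else:
            spikes.append(0)
    return spikes

# Small inputs accumulate silently; the neuron fires on steps 3 and 5.
print(integrate_and_fire([0.4, 0.4, 0.4, 0.5, 0.6]))  # [0, 0, 1, 0, 1]
```

The neuromorphic promise is to implement exactly this accumulate-then-fire behavior in physics rather than in software.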

Experts believe the speed and efficiency gains will be enormous.

For one, the chips will no longer have to transfer data between the central processing unit (CPU) and storage blocks, which wastes both time and energy. For another, like biological neural networks, neuromorphic devices can support neurons that run millions of streams of parallel computation.

A “Brain-on-a-chip”
Optimism aside, reproducing the biological synapse in hardware form hasn’t been as easy as anticipated.

Neuromorphic chips exist in many forms, but often look like a nanoscale metal sandwich. The “bread” pieces are generally made of conductive plates surrounding a switching medium—a conductive material of sorts that acts like the gap in a biological synapse.

When a voltage is applied, as in the case of data input, ions move within the switching medium, which then creates conductive streams to stimulate the downstream plate. This change in conductivity mimics the way biological neurons change their “weight,” or the strength of connectivity between two adjacent neurons.
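In software terms, the mechanism amounts to storing a “weight” as a conductance and nudging it with voltage pulses. A toy sketch—the step size and bounds are illustrative assumptions, not parameters of any real device:

```python
class ArtificialSynapse:
    """Toy model of a hardware synapse: its 'weight' is simply its
    conductance, changed in place by voltage pulses."""

    def __init__(self, conductance=0.5, step=0.1):
        self.conductance = conductance
        self.step = step  # how far one pulse moves the conductance

    def pulse(self, polarity):
        """A positive pulse strengthens the connection, a negative one
        weakens it; conductance stays within physical bounds."""
        self.conductance = min(1.0, max(0.0, self.conductance + polarity * self.step))

    def transmit(self, voltage):
        """Ohm's law: output current = conductance * input voltage."""
        return self.conductance * voltage

s = ArtificialSynapse()
s.pulse(+1)             # potentiate: conductance 0.5 -> 0.6
print(s.transmit(2.0))  # 0.6 * 2.0 -> 1.2
```

Unlike this tidy sketch, a real amorphous switching medium does not move by a clean, repeatable step—which is exactly the unpredictability the researchers set out to fix.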

But so far, neuromorphic synapses have been rather unpredictable. According to Kim, that’s because the switching medium is often composed of material that can’t channel ions to exact locations on the downstream plate.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” explains Kim. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects.”

In his new study, Kim and colleagues swapped the jelly-like switching medium for silicon, a material with only a single line of defects that acts like a channel to guide ions.

The chip starts with a thin wafer of silicon etched with a honeycomb-like pattern. On top is a layer of silicon germanium—something often present in transistors—in the same pattern. This creates a funnel-like dislocation, a kind of Grand Canal that perfectly shuttles ions across the artificial synapse.

The researchers then made a neuromorphic chip containing these synapses and shot an electrical zap through them. Incredibly, the synapses’ responses varied by only four percent—far more uniform than any neuromorphic device made with an amorphous switching medium.

In a computer simulation, the team built a multi-layer artificial neural network using parameters measured from their device. After tens of thousands of training examples, their neural network correctly recognized samples 95 percent of the time, just 2 percent lower than state-of-the-art software algorithms.

The upside? The neuromorphic chip requires much less space than the hardware that runs deep learning algorithms. Forget supercomputers—these chips could one day run complex computations right on our handheld devices.

A Magnetic Boost
Meanwhile, in Boulder, Colorado, Dr. Michael Schneider at the National Institute of Standards and Technology also realized that the standard switching medium had to go.

“There must be a better way to do this, because nature has figured out a better way to do this,” he says.

His solution? Nanoclusters of magnetic manganese.

Schneider’s chip contained two slices of superconducting electrodes made out of niobium, which channel electricity with no resistance. When researchers applied different magnetic fields to the synapse, they could control the alignment of the manganese “filling.”

The switch gave the chip a double boost. For one, by aligning the switching medium, the team could predict the ion flow and boost uniformity. For another, the magnetic manganese itself adds computational power. The chip can now encode data in both the level of electrical input and the direction of the magnetism without bulking up the synapse.

It seriously worked. Firing a billion times per second, the chips ran several orders of magnitude faster than human neurons. Plus, the chips required just one ten-thousandth of the energy used by their biological counterparts, all the while synthesizing input from nine different sources in an analog manner.

The Road Ahead
These studies show that we may be nearing a benchmark where artificial synapses match—or even outperform—their human inspiration.

But to Dr. Steven Furber, an expert in neuromorphic computing, we still have a ways to go before the chips go mainstream.

Many of the special materials used in these chips require specific temperatures, he says. Magnetic manganese chips, for example, require temperatures around absolute zero to operate, meaning they come with the need for giant cooling tanks filled with liquid helium—obviously not practical for everyday use.

Another is scalability. Millions of synapses are necessary before a neuromorphic device can be used to tackle everyday problems such as facial recognition. So far, no deal.

But these problems may in fact be a driving force for the entire field. Intense competition could push teams into exploring different ideas and solutions to similar problems, much like these two studies.

If so, future chips may come in diverse flavors. Similar to our vast array of deep learning algorithms and operating systems, the computer chips of the future may also vary depending on specific requirements and needs.

It is worth developing as many different technological approaches as possible, says Furber, especially as neuroscientists increasingly understand what makes our biological synapses—the ultimate inspiration—so amazingly efficient.

Image Credit: arakio / Shutterstock.com

Posted in Human Robots

#431689 Robotic Materials Will Distribute ...

The classical view of a robot as a mechanical body with a central “brain” that controls its behavior could soon be on its way out. The authors of a recent article in Science Robotics argue that future robots will have intelligence distributed throughout their bodies.

The concept, and the emerging discipline behind it, are variously referred to as “material robotics” or “robotic materials” and are essentially a synthesis of ideas from robotics and materials science. Proponents say advances in both fields are making it possible to create composite materials capable of combining sensing, actuation, computation, and communication and operating independently of a central processing unit.

Much of the inspiration for the field comes from nature, with practitioners pointing to the adaptive camouflage of the cuttlefish’s skin, the ability of bird wings to morph in response to different maneuvers, or the banyan tree’s ability to grow roots above ground to support new branches.

Adaptive camouflage and morphing wings have clear applications in the defense and aerospace sector, but the authors say similar principles could be used to create everything from smart tires able to calculate the traction needed for specific surfaces to grippers that can tailor their force to the kind of object they are grasping.

“Material robotics represents an acknowledgment that materials can absorb some of the challenges of acting and reacting to an uncertain world,” the authors write. “Embedding distributed sensors and actuators directly into the material of the robot’s body engages computational capabilities and offloads the rigid information and computational requirements from the central processing system.”

The idea of making materials more adaptive is not new, and there are already a host of “smart materials” that can respond to stimuli like heat, mechanical stress, or magnetic fields by doing things like producing a voltage or changing shape. These properties can be carefully tuned to create materials capable of a wide variety of functions such as movement, self-repair, or sensing.

The authors say synthesizing these kinds of smart materials, alongside other advanced materials like biocompatible conductors or biodegradable elastomers, is foundational to material robotics. But the approach also involves integration of many different capabilities in the same material, careful mechanical design to make the most of mechanical capabilities, and closing the loop between sensing and control within the materials themselves.

While there are stand-alone applications for such materials in the near term, like smart fabrics or robotic grippers, the long-term promise of the field is to distribute decision-making in future advanced robots. As they are imbued with ever more senses and capabilities, these machines will be required to shuttle huge amounts of control and feedback data to and fro, placing a strain on both their communication and computation abilities.

Materials that can process sensor data at the source and either autonomously react to it or filter the most relevant information to be passed on to the central processing unit could significantly ease this bottleneck. In a press release related to an earlier study, Nikolaus Correll, an assistant professor of computer science at the University of Colorado Boulder who is also an author of the current paper, pointed out this is a tactic used by the human body.

“The human sensory system automatically filters out things like the feeling of clothing rubbing on the skin,” he said. “An artificial skin with possibly thousands of sensors could do the same thing, and only report to a central ‘brain’ if it touches something new.”

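That filtering strategy is easy to sketch: process readings at the source and report upstream only when something changes. The novelty threshold below is an illustrative assumption:

```python
def report_novel(readings, threshold=0.2):
    """Process sensor readings at the source and pass on only novel
    events, the way skin tunes out the constant rub of clothing.
    Only changes bigger than the threshold reach the central 'brain'."""
    baseline = readings[0]
    reports = []
    for t, value in enumerate(readings[1:], start=1):
        if abs(value - baseline) > threshold:
            reports.append((t, value))  # something new: report upstream
            baseline = value            # adapt to the new normal
    return reports

# Steady pressure is filtered out locally; only the sudden touch is reported.
print(report_novel([0.50, 0.51, 0.49, 0.90, 0.91]))  # [(3, 0.9)]
```

An artificial skin with thousands of sensors running this kind of local filter would send the central processor a trickle of events rather than a flood of raw readings.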
There are still considerable challenges to realizing this vision, though, the authors say, noting that so far the young field has only produced proof of concepts. The biggest challenge remains manufacturing robotic materials in a way that combines all these capabilities in a small enough package at an affordable cost.

Luckily, the authors note, the field can draw on convergent advances in both materials science, such as the development of new bulk materials with inherent multifunctionality, and robotics, such as the ever tighter integration of components.

And they predict that doing away with the prevailing dichotomy of “brain versus body” could lay the foundations for the emergence of “robots with brains in their bodies—the foundation of inexpensive and ubiquitous robots that will step into the real world.”

Image Credit: Anatomy Insider / Shutterstock.com

Posted in Human Robots