Tag Archives: for

#439164 Advancing AI With a Supercomputer: A ...

Building a computer that can support artificial intelligence at the scale and complexity of the human brain will be a colossal engineering effort. Now researchers at the US National Institute of Standards and Technology (NIST) have outlined how they think we’ll get there.

How, when, and whether we’ll ever create machines that can match our cognitive capabilities is a topic of heated debate among both computer scientists and philosophers. One of the most contentious questions is the extent to which the solution needs to mirror our best example of intelligence so far: the human brain.

Rapid advances in AI powered by deep neural networks—which despite their name operate very differently than the brain—have convinced many that we may be able to achieve “artificial general intelligence” without mimicking the brain’s hardware or software.

Others think we’re still missing fundamental aspects of how intelligence works, and that the best way to fill the gaps is to borrow from nature. For many that means building “neuromorphic” hardware that more closely mimics the architecture and operation of biological brains.

The problem is that the existing computer technology we have at our disposal looks very different from biological information processing systems, and operates on completely different principles. For a start, modern computers are digital and neurons are analog. And although both rely on electrical signals, they come in very different flavors, and the brain also uses a host of chemical signals to carry out processing.

Now though, researchers at NIST think they’ve found a way to combine existing technologies in a way that could mimic the core attributes of the brain. Using their approach, they outline a blueprint for a “neuromorphic supercomputer” that could not only match, but surpass the physical limits of biological systems.

The key to their approach, outlined in Applied Physics Letters, is a combination of electronics and optical technologies. The logic is that electronics are great at computing, while optical systems can transmit information at the speed of light, so combining them is probably the best way to mimic the brain’s excellent computing and communication capabilities.

It’s not a new idea, but so far getting our best electronic and optical hardware to gel has proven incredibly tough. The team thinks they’ve found a potential workaround, dropping the temperature of the system to negative 450 degrees Fahrenheit (just a few degrees above absolute zero).

While that might seem to only complicate matters, it actually opens up a host of new hardware possibilities. There are a bunch of high-performance electronic and optical components that only work at these frigid temperatures, like superconducting electronics, single-photon detectors, and silicon LEDs.

The researchers propose using these components to build artificial neurons that operate more like their biological cousins than conventional computer components, firing off electrical impulses, or spikes, rather than shuttling numbers around.

Each neuron has thousands of artificial synapses made from single-photon detectors, which pick up optical messages from other neurons. These incoming signals are combined and processed by superconducting circuits, and once they cross a certain threshold a silicon LED is activated, sending an optical impulse to all downstream neurons.
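To make that firing behavior concrete, here is a minimal sketch in Python of a threshold neuron in the spirit of the description above; the weights, threshold, and class name are purely illustrative and are not taken from the NIST design.

```python
# Illustrative sketch only (not the NIST circuit): a threshold "fire" neuron in
# the spirit described above. Optical spikes arriving at single-photon-detector
# synapses are weighted and integrated; once the total crosses a threshold, the
# neuron emits a pulse to its downstream neighbors and resets.

from dataclasses import dataclass

@dataclass
class OptoelectronicNeuron:
    weights: list[float]          # one weight per incoming synapse (assumed values)
    threshold: float = 1.0        # firing threshold of the integrating circuit
    potential: float = 0.0        # signal accumulated so far

    def receive(self, photon_counts: list[int]) -> bool:
        """Integrate incoming optical spikes; return True if the neuron fires."""
        self.potential += sum(w * c for w, c in zip(self.weights, photon_counts))
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after the LED sends its pulse
            return True
        return False

# Example: three synapses, two of which receive a photon this timestep.
neuron = OptoelectronicNeuron(weights=[0.6, 0.5, 0.2])
print(neuron.receive([1, 1, 0]))  # True: 0.6 + 0.5 crosses the threshold
```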

The researchers envisage combining millions of these neurons on 300-millimeter silicon wafers and then stacking the wafers to create a highly interconnected network that mimics the architecture of the brain, with short-range connections dealt with by optical waveguides on each chip and long-range ones dealt with by fiber optic cables.

They acknowledge that the need to cryogenically cool the entire device is a challenge. But they say the improved power efficiency of their design should offset the cost of this cooling, and that a system on the scale of the human brain should require no more power or space than a modern supercomputer. They also point out that significant R&D is going into cryogenically cooled quantum computers, which their approach could likely piggyback on.

Some of the basic components of the system have already been experimentally demonstrated by the researchers, though they admit there’s still a long way to go to put all the pieces together. While many of these components are compatible with standard electronics fabrication, finding ways to manufacture them cheaply and integrate them will be a mammoth task.

Perhaps more important is the question of what kind of software the machine would run. It’s designed to implement “spiking neural networks” similar to those found in the brain, but our understanding of biological neural networks is still rudimentary, and our ability to mimic them is even worse. While both scientists and tech companies have been experimenting with the approach, it is still far less capable than deep learning.

Given the enormous engineering challenge involved in building a device of this scale, it may be a while before this blueprint makes it off the drawing board. But the proposal is an intriguing new chapter in the hunt for artificial general intelligence.

Image Credit: InspiredImages from Pixabay

Posted in Human Robots

#437386 Scary A.I. more intelligent than you

GPT-3 (Generative Pre-trained Transformer 3) is an artificial intelligence language generator that uses deep learning to produce human-like text. Its output is of such high quality that it can be very difficult to distinguish from a human’s. Many scientists, researchers and engineers (including Stephen …

Posted in Human Robots

#439141 Protected: 3 Ways to Utilize Artificial ...

There is no excerpt because this is a protected post.

The post Protected: 3 Ways to Utilize Artificial Intelligence for Vehicles appeared first on TFOT.

Posted in Human Robots

#439110 Robotic Exoskeletons Could One Day Walk ...

Engineers, using artificial intelligence and wearable cameras, now aim to help robotic exoskeletons walk by themselves.

Increasingly, researchers around the world are developing lower-body exoskeletons to help people walk. These are essentially walking robots users can strap to their legs to help them move.

One problem with such exoskeletons: They often depend on manual controls to switch from one mode of locomotion to another, such as from sitting to standing, or standing to walking, or walking on the ground to walking up or down stairs. Relying on joysticks or smartphone apps every time you want to change how you move can prove awkward and mentally taxing, says Brokoslaw Laschowski, a robotics researcher at the University of Waterloo in Canada.

Scientists are working on automated ways to help exoskeletons recognize when to switch locomotion modes — for instance, using sensors attached to the legs that can detect the bioelectric signals sent from your brain to your muscles telling them to move. However, this approach comes with a number of challenges, such as how skin conductivity can change as a person’s skin gets sweatier or dries off.
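As a rough illustration of how such a bioelectric trigger tends to work, and why drifting skin conductivity is a problem, here is a generic sketch (not from the article): rectify and smooth the muscle signal, then compare the envelope to a calibrated threshold. If the signal amplitude drifts, that threshold has to be recalibrated.

```python
import numpy as np

def detect_intent(emg: np.ndarray, threshold: float, window: int = 50) -> np.ndarray:
    """Generic EMG-style trigger (illustrative): rectify the signal, smooth it
    with a moving average, and flag samples where the envelope crosses a
    calibrated threshold. In practice the threshold must be recalibrated as
    skin conductivity changes, which is the challenge noted above."""
    envelope = np.convolve(np.abs(emg), np.ones(window) / window, mode="same")
    return envelope > threshold

# Toy usage: a burst of simulated muscle activity in an otherwise quiet signal.
quiet = np.random.randn(200) * 0.05
burst = np.random.randn(100) * 0.8
signal = np.concatenate([quiet, burst, quiet])
print(detect_intent(signal, threshold=0.3).any())  # True during the burst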

Now several research groups are experimenting with a new approach: fitting exoskeleton users with wearable cameras to provide the machines with vision data that will let them operate autonomously. Artificial intelligence (AI) software can analyze this data to recognize stairs, doors, and other features of the surrounding environment and calculate how best to respond.

Laschowski leads the ExoNet project, the first open-source database of high-resolution wearable camera images of human locomotion scenarios. It holds more than 5.6 million images of indoor and outdoor real-world walking environments. The team used this data to train deep-learning algorithms; their convolutional neural networks can already automatically recognize different walking environments with 73 percent accuracy “despite the large variance in different surfaces and objects sensed by the wearable camera,” Laschowski notes.
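The ExoNet model itself isn’t reproduced here, but the general shape of such a classifier looks something like the following sketch, which pairs a lightweight off-the-shelf backbone with an assumed set of walking-environment labels (the real ExoNet classes and architecture may differ).

```python
import torch
import torchvision.models as models

# Illustrative only: a small environment classifier in the spirit of ExoNet's
# CNN approach. The class labels below are assumed for the example.
CLASSES = ["level_ground", "stairs_up", "stairs_down", "door", "ramp"]

model = models.mobilenet_v2(weights=None)              # lightweight backbone for onboard use
model.classifier[1] = torch.nn.Linear(model.last_channel, len(CLASSES))

def classify_frame(frame: torch.Tensor) -> str:
    """frame: a (3, 224, 224) RGB image tensor from the wearable camera."""
    model.eval()
    with torch.no_grad():
        logits = model(frame.unsqueeze(0))             # add batch dimension
        return CLASSES[int(logits.argmax(dim=1))]

print(classify_frame(torch.rand(3, 224, 224)))         # untrained, so output is arbitrary
```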

According to Laschowski, a potential limitation of their work is its reliance on conventional 2-D images, whereas depth cameras could also capture potentially useful distance data. He and his collaborators ultimately chose not to rely on depth cameras for a number of reasons, including the fact that the accuracy of depth measurements typically degrades in outdoor lighting and with increasing distance, he says.

In similar work, researchers in North Carolina had volunteers with cameras either mounted on their eyeglasses or strapped onto their knees walk through a variety of indoor and outdoor settings to capture the kind of image data exoskeletons might use to see the world around them. The aim? “To automate motion,” says Edgar Lobaton, an electrical engineering researcher at North Carolina State University. He says they are focusing on how AI software might reduce uncertainty due to factors such as motion blur or overexposed images “to ensure safe operation. We want to ensure that we can really rely on the vision and AI portion before integrating it into the hardware.”

In the future, Laschowski and his colleagues will focus on improving the accuracy of their environmental analysis software while keeping its computational and memory requirements low, which is important for onboard, real-time operation on robotic exoskeletons. Lobaton and his team also seek to account for uncertainty introduced into their visual systems by movement.

Ultimately, the ExoNet researchers want to explore how AI software can transmit commands to exoskeletons so they can perform tasks such as climbing stairs or avoiding obstacles based on a system’s analysis of a user's current movements and the upcoming terrain. With autonomous cars as inspiration, they are seeking to develop autonomous exoskeletons that can handle the walking task without human input, Laschowski says.
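What that hand-off from perception to control might look like is sketched below. The mode names, the terrain-to-mode mapping, and the override hook are assumptions for illustration rather than details of the ExoNet controller, but the override reflects the safety point Laschowski makes next.

```python
# Hypothetical sketch of how a terrain classification could drive locomotion-mode
# switching, with a manual override retained for the user. Mode names and the
# mapping are assumptions for illustration, not the ExoNet controller.

TERRAIN_TO_MODE = {
    "level_ground": "walk",
    "stairs_up": "stair_ascent",
    "stairs_down": "stair_descent",
    "ramp": "incline_walk",
}

def select_mode(predicted_terrain: str, current_mode: str, user_override: str | None = None) -> str:
    """Prefer an explicit user command; otherwise follow the vision system's
    prediction, falling back to the current mode if the terrain class is unknown."""
    if user_override is not None:
        return user_override
    return TERRAIN_TO_MODE.get(predicted_terrain, current_mode)

print(select_mode("stairs_up", current_mode="walk"))                          # -> "stair_ascent"
print(select_mode("stairs_up", current_mode="walk", user_override="stand"))   # -> "stand"
```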

However, Laschowski adds, “User safety is of the utmost importance, especially considering that we're working with individuals with mobility impairments,” resulting perhaps from advanced age or physical disabilities.
“The exoskeleton user will always have the ability to override the system should the classification algorithm or controller make a wrong decision.”

Posted in Human Robots

#438285 Untethered robots that are better than ...

“Atlas” and “Handle” are just two of the amazing AI robots in the arsenal of Boston Dynamics. Atlas is an untethered whole-body humanoid with human-level dexterity. Handle is the guy for moving boxes in the warehouse. It can also unload …

Posted in Human Robots