Tag Archives: could

#439934 New Spiking Neuromorphic Chip Could ...

When it comes to brain computing, timing is everything. It’s how neurons wire up into circuits. It’s how these circuits process highly complex data, leading to actions that can mean life or death. It’s how our brains can make split-second decisions, even when faced with entirely new circumstances. And we do all of this without frying the brain through excessive energy consumption.

In other words, the brain is an excellent example of an extremely powerful computer worth mimicking—and computer scientists and engineers have taken the first steps toward doing so. The field of neuromorphic computing looks to recreate the brain’s architecture and data processing abilities with novel hardware chips and software algorithms. It may be a pathway towards true artificial intelligence.

But one crucial element is lacking. Most algorithms that power neuromorphic chips only care about how strongly each artificial neuron connects to the others—a property dubbed “synaptic weight.” What’s missing—yet fundamental to our brain’s inner workings—is timing.

This month, a team affiliated with the Human Brain Project, the European Union’s flagship big data neuroscience endeavor, added the element of time to a neuromorphic algorithm. The results were then implemented on physical hardware—the BrainScaleS-2 neuromorphic platform—and pitted against state-of-the-art GPUs and conventional neuromorphic solutions.

“Compared to the abstract neural networks used in deep learning, the more biological archetypes…still lag behind in terms of performance and scalability” due to their inherent complexity, the authors said.

In several tests on a standard benchmark, the algorithm compared “favorably, in terms of accuracy, latency, and energy efficiency,” said Dr. Charlotte Frenkel at the University of Zurich and ETH Zurich in Switzerland, who was not involved in the study. By adding a temporal component to neuromorphic computing, we could usher in a new era of highly efficient AI that moves from static data tasks—say, image recognition—to ones that better capture time. Think videos, biosignals, or brain-to-computer speech.

To lead author Dr. Mihai Petrovici, the potential goes both ways. “Our work is not only interesting for neuromorphic computing and biologically inspired hardware. It also acknowledges the demand … to transfer so-called deep learning approaches to neuroscience and thereby further unveil the secrets of the human brain,” he said.

Let’s Talk Spikes
At the root of the new algorithm is a fundamental principle in brain computing: spikes.

Let’s take a look at a highly abstracted neuron. It’s like a tootsie roll, with a bulbous middle section flanked by two outward-reaching wrappers. One side is the input—an intricate tree that receives signals from a previous neuron. The other is the output, blasting signals to other neurons using bubble-like vesicles filled with chemicals, which in turn trigger an electrical response on the receiving end.

Here’s the crux: for this entire sequence to occur, the neuron has to “spike.” If, and only if, the neuron receives a high enough level of input—a nicely built-in noise reduction mechanism—the bulbous part will generate a spike that travels down the output channels to alert the next neuron.

But neurons don’t just use one spike to convey information. Rather, they spike in a time sequence. Think of it like Morse code: the timing of when an electrical burst occurs carries a wealth of data. It’s the basis for neurons wiring up into circuits and hierarchies, allowing highly energy-efficient processing.
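
To make the threshold idea concrete, here is a minimal leaky integrate-and-fire neuron in Python. It is a textbook abstraction with illustrative parameters, not the neuron model used on BrainScaleS-2, but it shows how a stronger input drives an earlier spike.

```python
import numpy as np

def lif_first_spike(input_current, threshold=1.0, leak=0.1, dt=1e-3):
    """Integrate an input current over time; return the time of the first
    spike, or None if the membrane potential never crosses threshold."""
    v = 0.0                               # membrane potential (arbitrary units)
    for step, i_in in enumerate(input_current):
        v += dt * (i_in - leak * v)       # leaky integration of the input
        if v >= threshold:                # threshold crossing: the neuron spikes
            return step * dt              # first-spike time
    return None

# A stronger input reaches threshold sooner, so the spike time itself
# carries information about how strongly the neuron was driven.
print(lif_first_spike(np.full(1000, 1.2)))   # weak drive: late spike
print(lif_first_spike(np.full(1000, 3.0)))   # strong drive: early spike
```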

So why not adopt the same strategy for neuromorphic computers?

A Spartan Brain-Like Chip
Instead of mapping out a single artificial neuron’s spikes—a Herculean task—the team homed in on a single metric: how long it takes for a neuron to fire.

The idea behind “time-to-first-spike” code is simple: the longer it takes a neuron to spike, the lower its activity levels. Compared to counting spikes, it’s an extremely sparse way to encode a neuron’s activity, but comes with perks. Because only the latency to the first time a neuron perks up is used to encode activation, it captures the neuron’s responsiveness without overwhelming a computer with too many data points. In other words, it’s fast, energy-efficient, and easy.
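
As a rough illustration (a hypothetical encoding, not the exact scheme from the paper), input intensities can be mapped to first-spike latencies so that stronger inputs fire earlier:

```python
import numpy as np

def ttfs_encode(intensities, t_max=1.0):
    """Encode normalized intensities (0..1) as first-spike latencies:
    stronger input means an earlier spike. An intensity of 0 maps to
    t_max, standing in for 'no spike within the coding window'."""
    x = np.clip(np.asarray(intensities, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - x)

print(ttfs_encode([0.9, 0.2, 0.0, 1.0]))   # the strongest input spikes first
```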

The team next encoded the algorithm onto a neuromorphic chip—the BrainScaleS-2, which roughly emulates simple “neurons” inside its structure, but runs over 1,000 times faster than our biological brains. The platform has over 500 physical artificial neurons, each capable of receiving 256 inputs through configurable synapses, where biological neurons swap, process, and store information.

The setup is a hybrid. “Learning” is achieved on a chip that implements the time-dependent algorithm. However, any updates to the neural circuit—that is, how strongly one neuron connects to another—are achieved through an external workstation, an approach dubbed “in-the-loop training.”
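
Below is a heavily simplified sketch of what such in-the-loop training can look like. The “chip” is a pure software stand-in and none of the names correspond to the real BrainScaleS-2 software stack; the point is only the division of labor, with the spiking forward pass on hardware and the weight updates computed on the host.

```python
import numpy as np

# Toy hardware-in-the-loop training loop (purely illustrative).
rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(4, 2))        # 4 inputs -> 2 output neurons
t_max = 2.0                                          # coding window (arbitrary units)

def chip_forward(x, w):
    """Stand-in for the neuromorphic chip: stronger drive -> earlier spike."""
    return np.clip(t_max - x @ w, 0.0, t_max)

def host_update(x, latencies, target, w, lr=0.1):
    """Stand-in for the workstation: nudge weights toward target spike times."""
    return w + lr * np.outer(x, latencies - target)

x, target = np.array([1.0, 0.0, 0.0, 1.0]), np.array([0.3, 1.5])
for _ in range(100):
    latencies = chip_forward(x, weights)                   # forward pass "on hardware"
    weights = host_update(x, latencies, target, weights)   # synapse update on host

print(chip_forward(x, weights))                            # approaches the target latencies
```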

In a first test, the algorithm was challenged with the “Yin-Yang” task, which requires the algorithm to parse different areas in the traditional Eastern symbol. The algorithm excelled, with an average of 95 percent accuracy.

The team next challenged the setup with a classic deep learning task—MNIST, a dataset of handwritten numbers that revolutionized computer vision. The algorithm excelled again, with nearly 97 percent accuracy. Even more impressive, the BrainScaleS-2 system took less than one second to classify 10,000 test samples, with extremely low relative energy consumption.

Putting these results into context, the team next compared BrainScaleS-2’s performance—armed with the new algorithm—to commercial and other neuromorphic platforms. Take SpiNNaker, a massively parallel, distributed architecture that also mimics neural computing and spikes. The new algorithm was over 100 times faster at image recognition while consuming just a fraction of SpiNNaker’s power. Similar results were seen with TrueNorth, IBM’s pioneering neuromorphic chip.

What Next?
The brain’s two most valuable computing features—energy efficiency and parallel processing—are now heavily inspiring the next generation of computer chips. The goal? Build machines that are as flexible and adaptive as our own brains while using just a fraction of the energy required for our current silicon-based chips.

Yet compared to deep learning, which relies on artificial neural networks, biologically-plausible ones have languished. Part of this, explained Frenkel, is the difficulty of “updating” these circuits through learning. However, with BrainScaleS-2 and a touch of timing data, it’s now possible.

At the same time, having an “external” arbitrator for updating synaptic connections gives the whole system some time to breathe. Neuromorphic hardware, similar to the messiness of our brain computation, is littered with mismatches and errors. With the chip and an external arbitrator, the whole system can learn to adapt to this variability, and eventually compensate for—or even exploit—its quirks for faster and more flexible learning.

For Frenkel, the algorithm’s power lies in its sparseness. The brain, she explained, is powered by sparse codes that “could explain the fast reaction times…such as for visual processing.” Rather than activating entire brain regions, only a few neural networks are needed—like whizzing down empty highways instead of getting stuck in rush hour traffic.

Despite its power, the algorithm still has hiccups. It struggles with interpreting static data, although it excels with time sequences—for example, speech or biosignals. But to Frenkel, it’s the start of a new framework: important information can be encoded with a flexible but simple metric, and generalized to enrich brain- and AI-based data processing with a fraction of the traditional energy costs.

“[It]…may be an important stepping-stone for spiking neuromorphic hardware to finally demonstrate a competitive advantage over conventional neural network approaches,” she said.

Image Credit: Classifying data points in the Yin-Yang dataset, by Göltz and Kriener et al. (Heidelberg / Bern)

#439652 Robot Could Operate a Docking Station ...

Picture, if you will, a cargo rocket launching into space and docking on the International Space Station. The rocket maneuvers up to the station and latches on with an airtight seal so that supplies can be transferred. Now imagine a miniaturized version of that process happening inside your body.
Researchers today announced that they have built a robotic system capable of this kind of supply drop, and which functions entirely inside the gut. The system involves an insulin delivery robot that is surgically implanted in the abdomen, and swallowable magnetic capsules that resupply the robot with insulin.
The robot's developers, based in Italy, tested their system in three diabetic pigs. The system successfully controlled the pigs' blood glucose levels for several hours, according to results published today in the journal Science Robotics.
“Maybe it's scary to think about a docking station inside the body, but it worked,” says Arianna Menciassi, an author of the paper and a professor of biomedical robotics and bioengineering at Sant'Anna School of Advanced Studies in Pisa, Italy.
In her team's system, a device the size of a flip phone is surgically implanted along the abdominal wall, where it interfaces with the small intestine. The device delivers insulin into fluid in that space. When the implant's reservoir runs low on medication, a magnetic, insulin-filled capsule shuttles in to refill it.
Here's how the refill procedure would theoretically work in humans: The patient swallows the capsule just like a pill, and it moves through the digestive system naturally until it reaches a section of the small intestine where the implant has been placed. Using magnetic fields, the implant draws the capsule toward it, rotates it, and docks it in the correct position. The implant then punches the capsule with a retractable needle and pumps the insulin into its reservoir. The needle must also punch through a thin layer of intestinal tissue to reach the capsule.
In all, the implant contains four actuators that control the docking, needle punching, reservoir volume and aspiration, and pump. The motor responsible for docking rotates a magnet to maneuver the capsule into place. The design was inspired by industrial clamping systems and pipe-inspecting robots, the authors say.
After the insulin is delivered, the implant releases the capsule, allowing it to continue naturally through the digestive tract to be excreted from the body. The magnetic fields that control docking and release of the capsule are controlled wirelessly by an external programming device, and can be turned on or off. The implant's battery is wirelessly charged by an external device.
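
For illustration only, the refill cycle described above can be read as a short sequence of controller phases. The sketch below is a hypothetical outline, not the authors' firmware or any real device interface.

```python
from enum import Enum, auto

class Phase(Enum):
    WAIT_FOR_CAPSULE = auto()      # capsule travels through the gut toward the implant
    ATTRACT_AND_ROTATE = auto()    # magnetic field pulls the capsule in and orients it
    DOCKED = auto()
    PUNCTURE = auto()              # retractable needle pierces the intestinal wall and capsule
    PUMP_INSULIN = auto()          # insulin is transferred into the implant's reservoir
    RELEASE = auto()               # capsule is released to continue through the digestive tract

def refill_cycle():
    """Yield the phases of one refill cycle in the order described above."""
    yield from (Phase.WAIT_FOR_CAPSULE, Phase.ATTRACT_AND_ROTATE, Phase.DOCKED,
                Phase.PUNCTURE, Phase.PUMP_INSULIN, Phase.RELEASE)

for phase in refill_cycle():
    print(phase.name)
```
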
This kind of delivery system could prove useful to people with type 1 diabetes, especially those who must inject insulin into their bodies multiple times a day. Insulin pumps are available commercially, but these require external hardware that delivers the drug through a tube or needle that penetrates the body. Implantable insulin pumps are also available, but those devices have to be refilled by a tube that protrudes from the body, inviting bacterial infections; those systems have not proven popular.
A fully implantable system refilled by a pill would eliminate the need for protruding tubes and hardware, says Menciassi. Such a system could prove useful in delivering drugs for other diseases too, such as chemotherapy to people with ovarian, pancreatic, gastric, and colorectal cancers, the authors report.
As a next step, the authors are working on sealing the implanted device more robustly. “We observed in some pigs that [bodily] fluids are entering inside the robot,” says Menciassi. Some of the leaks are likely occurring during docking when the needle comes out of the implant, she says. The leaks did not occur when the team previously tested the device in water, but the human body, she notes, is much more complex.

#439628 How a Simple Crystal Could Help Pave the ...

Vaccine and drug development, artificial intelligence, transport and logistics, climate science—these are all areas that stand to be transformed by the development of a full-scale quantum computer. And there has been explosive growth in quantum computing investment over the past decade.

Yet current quantum processors are relatively small in scale, with fewer than 100 qubits—the basic building blocks of a quantum computer. Bits are the smallest unit of information in computing, and the term qubits stems from “quantum bits.”

While early quantum processors have been crucial for demonstrating the potential of quantum computing, realizing globally significant applications will likely require processors with upwards of a million qubits.

Our new research tackles a core problem at the heart of scaling up quantum computers: how do we go from controlling just a few qubits, to controlling millions? In research published today in Science Advances, we reveal a new technology that may offer a solution.

What Exactly Is a Quantum Computer?
Quantum computers use qubits to hold and process quantum information. Unlike the bits of information in classical computers, qubits make use of the quantum properties of nature, known as “superposition” and “entanglement,” to perform some calculations much faster than their classical counterparts.

Unlike a classical bit, which is represented by either 0 or 1, a qubit can exist in two states (that is, 0 and 1) at the same time. This is what we refer to as a superposition state.
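
In standard quantum-computing notation (general background, not a result of this study), such a state is written as:

```latex
% A single-qubit superposition: alpha and beta are complex amplitudes, and a
% measurement returns 0 with probability |alpha|^2 and 1 with probability |beta|^2.
\[
  \lvert \psi \rangle \;=\; \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle ,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 .
\]
```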

Demonstrations by Google and others have shown even current, early-stage quantum computers can outperform the most powerful supercomputers on the planet for a highly specialized (albeit not particularly useful) task—reaching a milestone we call quantum supremacy.

Google’s quantum computer, built from superconducting electrical circuits, had just 53 qubits and was cooled to a temperature close to -273℃ in a high-tech refrigerator. This extreme temperature is needed to remove heat, which can introduce errors to the fragile qubits. While such demonstrations are important, the challenge now is to build quantum processors with many more qubits.

Major efforts are underway at UNSW Sydney to make quantum computers from the same material used in everyday computer chips: silicon. A conventional silicon chip is thumbnail-sized and packs in several billion bits, so the prospect of using this technology to build a quantum computer is compelling.

The Control Problem
In silicon quantum processors, information is stored in individual electrons, which are trapped beneath small electrodes at the chip’s surface. Specifically, the qubit is coded into the electron’s spin. It can be pictured as a small compass inside the electron. The needle of the compass can point north or south, which represents the 0 and 1 states.

To set a qubit in a superposition state (both 0 and 1), an operation that occurs in all quantum computations, a control signal must be directed to the desired qubit. For qubits in silicon, this control signal is in the form of a microwave field, much like the ones used to carry phone calls over a 5G network. The microwaves interact with the electron and cause its spin (compass needle) to rotate.
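
The textbook description of this rotation (standard magnetic-resonance physics, not a result of the new paper) is a Rabi oscillation: a microwave drive on resonance with the spin rotates it at the Rabi frequency, so the probability of a flip depends on how long the pulse is applied.

```latex
% On-resonance Rabi rotation of a spin qubit driven at Rabi frequency Omega_R:
% the probability of having flipped from |0> to |1> after a pulse of duration t is
\[
  P_{0 \to 1}(t) \;=\; \sin^{2}\!\left( \frac{\Omega_{R}\, t}{2} \right) ,
\]
% so a pulse of duration t = pi / Omega_R flips the spin completely, while
% stopping at t = pi / (2 * Omega_R) leaves it in an equal superposition of 0 and 1.
```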

Currently, each qubit requires its own microwave control field. It is delivered to the quantum chip through a cable running from room temperature down to the bottom of the refrigerator at close to -273 degrees Celsius. Each cable brings heat with it, which must be removed before it reaches the quantum processor.

At around 50 qubits, which is state-of-the-art today, this is difficult but manageable. Current refrigerator technology can cope with the cable heat load. However, it represents a huge hurdle if we’re to use systems with a million qubits or more.

The Solution Is ‘Global’ Control
An elegant solution to the challenge of how to deliver control signals to millions of spin qubits was proposed in the late 1990s. The idea of “global control” was simple: broadcast a single microwave control field across the entire quantum processor.

Voltage pulses can be applied locally to qubit electrodes to make the individual qubits interact with the global field (and produce superposition states).
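
A toy numerical sketch of this selective addressing follows; the frequencies and the simple Lorentzian response are illustrative assumptions, not device parameters. Every qubit sees the same broadcast field, and a local voltage pulse shifts one chosen qubit’s resonance onto that field so that only it responds.

```python
import numpy as np

# Toy sketch of global control (illustrative numbers, not device parameters):
# one microwave field is broadcast to every qubit, and a local voltage pulse
# tunes a selected qubit's resonance onto that field so only it rotates.
global_field_ghz = 10.000                                  # broadcast microwave frequency
idle_resonance_ghz = np.array([9.95, 9.95, 9.95, 9.95])    # qubits parked off-resonance
voltage_shift_ghz = 0.05                                   # shift produced by a local gate pulse

def response(resonance_ghz, linewidth_ghz=0.002):
    """Crude Lorentzian response: appreciable rotation only near resonance."""
    detuning = resonance_ghz - global_field_ghz
    return 1.0 / (1.0 + (detuning / linewidth_ghz) ** 2)

selected = 2                                   # apply a voltage pulse to qubit 2 only
resonances = idle_resonance_ghz.copy()
resonances[selected] += voltage_shift_ghz      # tune qubit 2 onto the global field

print(response(resonances))                    # only the selected qubit responds strongly
```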

It’s much easier to generate such voltage pulses on-chip than it is to generate multiple microwave fields. The solution requires only a single control cable and removes the need for obtrusive on-chip microwave control circuitry.

For more than two decades global control in quantum computers remained an idea. Researchers could not devise a suitable technology that could be integrated with a quantum chip and generate microwave fields at suitably low powers.

In our work we show that a component known as a dielectric resonator could finally allow this. The dielectric resonator is a small, transparent crystal which traps microwaves for a short period of time.

The trapping of microwaves, a phenomenon known as resonance, allows them to interact with the spin qubits longer and greatly reduces the power of microwaves needed to generate the control field. This was vital to operating the technology inside the refrigerator.

In our experiment, we used the dielectric resonator to generate a control field over an area that could contain up to four million qubits. The quantum chip used in this demonstration was a device with two qubits. We were able to show the microwaves produced by the crystal could flip the spin state of each one.

The Path to a Full-Scale Quantum Computer
There is still work to be done before this technology is up to the task of controlling a million qubits. For our study, we managed to flip the state of the qubits, but not yet produce arbitrary superposition states.

Experiments are ongoing to demonstrate this critical capability. We’ll also need to further study the impact of the dielectric resonator on other aspects of the quantum processor.

That said, we believe these engineering challenges will ultimately be surmountable—clearing one of the greatest hurdles to realizing a large-scale spin-based quantum computer.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Serwan Asaad/UNSW, Author provided

#439589 Tiny ‘maniac’ robots could ...

Would you let a tiny MANiAC travel around your nervous system to treat you with drugs? You may be inclined to say no, but in the future, “magnetically aligned nanorods in alginate capsules” (MANiACs) may be part of an advanced arsenal of drug delivery technologies at doctors' disposal. A recent study in Frontiers in Robotics and AI is the first to investigate how such tiny robots might perform as drug delivery vehicles in neural tissue. The study finds that when controlled using a magnetic field, the tiny tumbling soft robots can move against fluid flow, climb slopes and move about neural tissues, such as the spinal cord, and deposit substances at precise locations.
