Tag Archives: chip

#439934 New Spiking Neuromorphic Chip Could ...

When it comes to brain computing, timing is everything. It’s how neurons wire up into circuits. It’s how these circuits process highly complex data, leading to actions that can mean life or death. It’s how our brains can make split-second decisions, even when faced with entirely new circumstances. And our brains do all of this without frying themselves through excessive energy consumption.

In other words, the brain is an excellent example of an extremely powerful computer to mimic—and computer scientists and engineers have taken the first steps towards doing so. The field of neuromorphic computing looks to recreate the brain’s architecture and data processing abilities with novel hardware chips and software algorithms. It may be a pathway towards true artificial intelligence.

But one crucial element is lacking. Most algorithms that power neuromorphic chips only care about the contribution of each artificial neuron—that is, how strongly neurons connect to one another, dubbed “synaptic weight.” What’s missing—yet fundamental to our brain’s inner workings—is timing.

This month, a team affiliated with the Human Brain Project, the European Union’s flagship big data neuroscience endeavor, added the element of time to a neuromorphic algorithm. The results were then implemented on physical hardware—the BrainScaleS-2 neuromorphic platform—and pitted against state-of-the-art GPUs and conventional neuromorphic solutions.

“Compared to the abstract neural networks used in deep learning, the more biological archetypes…still lag behind in terms of performance and scalability” due to their inherent complexity, the authors said.

In several tests, the algorithm compared “favorably, in terms of accuracy, latency, and energy efficiency” on a standard benchmark test, said Dr. Charlotte Frenkel at the University of Zurich and ETH Zurich in Switzerland, who was not involved in the study. By adding a temporal component into neuromorphic computing, we could usher in a new era of highly efficient AI that moves from static data tasks—say, image recognition—to one that better encapsulates time. Think videos, biosignals, or brain-to-computer speech.

To lead author Dr. Mihai Petrovici, the potential goes both ways. “Our work is not only interesting for neuromorphic computing and biologically inspired hardware. It also acknowledges the demand … to transfer so-called deep learning approaches to neuroscience and thereby further unveil the secrets of the human brain,” he said.

Let’s Talk Spikes
At the root of the new algorithm is a fundamental principle in brain computing: spikes.

Let’s take a look at a highly abstracted neuron. It’s like a tootsie roll, with a bulbous middle section flanked by two outward-reaching wrappers. One side is the input—an intricate tree that receives signals from a previous neuron. The other is the output, blasting signals to other neurons using bubble-like vesicles filled with chemicals, which in turn trigger an electrical response on the receiving end.

Here’s the crux: for this entire sequence to occur, the neuron has to “spike.” If, and only if, the neuron receives a high enough level of input—a nicely built-in noise reduction mechanism—the bulbous part will generate a spike that travels down the output channels to alert the next neuron.

But neurons don’t just use one spike to convey information. Rather, they spike in a time sequence. Think of it like Morse Code: the timing of when an electrical burst occurs carries a wealth of data. It’s the basis for neurons wiring up into circuits and hierarchies, allowing highly energy-efficient processing.

So why not adopt the same strategy for neuromorphic computers?

A Spartan Brain-Like Chip
Instead of mapping out a single artificial neuron’s spikes—a Herculean task—the team homed in on a single metric: how long it takes for a neuron to fire.

The idea behind “time-to-first-spike” code is simple: the longer it takes a neuron to spike, the lower its activity levels. Compared to counting spikes, it’s an extremely sparse way to encode a neuron’s activity, but comes with perks. Because only the latency to the first time a neuron perks up is used to encode activation, it captures the neuron’s responsiveness without overwhelming a computer with too many data points. In other words, it’s fast, energy-efficient, and easy.
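To make the idea concrete, here is a minimal sketch of how a time-to-first-spike code might turn input strength into spike latency. It assumes inputs normalized between 0 and 1 and a fixed coding window; it illustrates the general scheme, not the study’s actual implementation on BrainScaleS-2.

```python
import numpy as np

# Minimal sketch of time-to-first-spike (TTFS) encoding, assuming inputs are
# normalized to [0, 1]. Stronger inputs spike earlier; very weak inputs never
# spike within the coding window. Illustrative only, not the BrainScaleS-2 code.

def ttfs_encode(x, t_max=20.0, threshold=0.05):
    """Map each input intensity to a single spike time in [0, t_max]."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    spike_times = t_max * (1.0 - x)          # strong input -> early spike
    spike_times[x < threshold] = np.inf      # too weak -> never spikes
    return spike_times

print(ttfs_encode([1.0, 0.5, 0.02]))         # [ 0. 10. inf]
```

A single latency per neuron is all that gets passed downstream, which is where the sparseness and the energy savings come from.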

The team next encoded the algorithm onto a neuromorphic chip—the BrainScaleS-2, which roughly emulates simple “neurons” inside its structure, but runs over 1,000 times faster than our biological brains. The platform has over 500 physical artificial neurons, each capable of receiving 256 inputs through configurable synapses, where biological neurons swap, process, and store information.

The setup is a hybrid. “Learning” is achieved on a chip that implements the time-dependent algorithm. However, any updates to the neural circuit—that is, how strongly one neuron connects to another—are achieved through an external workstation, something dubbed “in-the-loop training.”
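That division of labor might look something like the toy loop below, where a stand-in chip_first_spike function (hypothetical, not the real BrainScaleS-2 interface) plays the part of the analog hardware and the workstation nudges the weights based on the measured spike time. The actual learning rule and toolchain differ; only the loop structure is the point.

```python
import numpy as np

# Toy sketch of "in-the-loop" training. A hypothetical chip_first_spike() stands
# in for the analog hardware: it turns inputs and the current weights into a
# first-spike time. The surrounding loop is the "workstation": it measures the
# timing error and updates the weights off-chip with a crude error-driven rule.

w = np.array([0.8, 0.4, 0.6])               # 3 input synapses, 1 output neuron
x = np.array([1.0, 0.2, 0.7])               # input pattern
t_target = 4.0                              # desired first-spike time

def chip_first_spike(x, w):
    """Stand-in for the hardware: stronger drive means an earlier spike."""
    drive = max(float(w @ x), 1e-3)
    return 1.0 / drive

for step in range(200):
    t = chip_first_spike(x, w)              # "measured" on the chip
    w += 0.02 * (t - t_target) * x          # weight update happens off-chip

print(round(chip_first_spike(x, w), 2))     # converges to roughly 4.0
```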

In a first test, the algorithm was challenged with the “Yin-Yang” task, which requires the algorithm to parse different areas in the traditional Eastern symbol. The algorithm excelled, with an average of 95 percent accuracy.

The team next challenged the setup with a classic deep learning task—MNIST, a dataset of handwritten numbers that revolutionized computer vision. The algorithm excelled again, with nearly 97 percent accuracy. Even more impressive, the BrainScaleS-2 system took less than one second to classify 10,000 test samples, with extremely low relative energy consumption.

Putting these results into context, the team next compared BrainScaleS-2’s performance—armed with the new algorithm—to commercial and other neuromorphic platforms. Take SpiNNaker, a massive, parallel distributed architecture that also mimics neural computing and spikes. The new algorithm was over 100 times faster at image recognition while consuming just a fraction of the power SpiNNaker consumes. Similar results were seen with TrueNorth, IBM’s pioneering neuromorphic chip.

What Next?
The brain’s two most valuable computing features—energy efficiency and parallel processing—are now heavily inspiring the next generation of computer chips. The goal? Build machines that are as flexible and adaptive as our own brains while using just a fraction of the energy required for our current silicon-based chips.

Yet compared to deep learning, which relies on artificial neural networks, biologically plausible ones have languished. Part of this, explained Frenkel, is the difficulty of “updating” these circuits through learning. However, with BrainScaleS-2 and a touch of timing data, it’s now possible.

At the same time, having an “external” arbitrator for updating synaptic connections gives the whole system some time to breathe. Neuromorphic hardware, similar to the messiness of our brain computation, is littered with mismatches and errors. With the chip and an external arbitrator, the whole system can learn to adapt to this variability, and eventually compensate for—or even exploit—its quirks for faster and more flexible learning.

For Frenkel, the algorithm’s power lies in its sparseness. The brain, she explained, is powered by sparse codes that “could explain the fast reaction times…such as for visual processing.” Rather than activating entire brain regions, only a few neural networks are needed—like whizzing down empty highways instead of getting stuck in rush hour traffic.

Despite its power, the algorithm still has hiccups. It struggles with interpreting static data, although it excels with time sequences—for example, speech or biosignals. But to Frenkel, it’s the start of a new framework: important information can be encoded with a flexible but simple metric, and generalized to enrich brain- and AI-based data processing with a fraction of the traditional energy costs.

“[It]…may be an important stepping-stone for spiking neuromorphic hardware to finally demonstrate a competitive advantage over conventional neural network approaches,” she said.

Image Credit: Classifying data points in the Yin-Yang dataset, by Göltz and Kriener et al. (Heidelberg / Bern)

Posted in Human Robots

#439674 Cerebras Upgrades Trillion-Transistor ...

Much of the recent progress in AI has come from building ever-larger neural networks. A new chip powerful enough to handle “brain-scale” models could turbo-charge this approach.

Chip startup Cerebras leaped into the limelight in 2019 when it came out of stealth to reveal a 1.2-trillion-transistor chip. Called the Wafer Scale Engine, the dinner-plate-sized chip was the world’s largest computer chip. Earlier this year Cerebras unveiled the Wafer Scale Engine 2 (WSE-2), which more than doubled the number of transistors to 2.6 trillion.

Now the company has outlined a series of innovations that mean its latest chip can train a neural network with up to 120 trillion parameters. For reference, OpenAI’s revolutionary GPT-3 language model contains 175 billion parameters. The largest neural network to date, which was trained by Google, had 1.6 trillion parameters.

“Larger networks, such as GPT-3, have already transformed the natural language processing landscape, making possible what was previously unimaginable,” said Cerebras CEO and co-founder Andrew Feldman in a press release.

“The industry is moving past 1 trillion parameter models, and we are extending that boundary by two orders of magnitude, enabling brain-scale neural networks with 120 trillion parameters.”

The genius of Cerebras’ approach is that rather than taking a silicon wafer and splitting it up to make hundreds of smaller chips, it makes a single massive one. While your average GPU will have a few hundred cores, the WSE-2 has 850,000. Because they’re all on the same hunk of silicon, they can work together far more seamlessly.

This makes the chip ideal for tasks that require huge numbers of operations to happen in parallel, which include both deep learning and various supercomputing applications. And earlier this week at the Hot Chips conference, the company unveiled new technology that is pushing the WSE-2’s capabilities even further.

A major challenge for large neural networks is shuttling around all the data involved in their calculations. Most chips have a limited amount of memory on-chip, and every time data has to be shuffled in and out it creates a bottleneck, which limits the practical size of networks.

The WSE-2 already has an enormous 40 gigabytes of on-chip memory, which means it can hold even the largest of today’s networks. But the company has also built an external unit called MemoryX that provides up to 2.4 petabytes of high-performance memory, which is so tightly integrated it behaves as if it were on-chip.

Cerebras has also revamped its approach to the data it shuffles around. Previously the guts of the neural network would be stored on the chip, and only the training data would be fed in. Now, though, the weights of the connections between the network’s neurons are kept in the MemoryX unit and streamed in during training.
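Conceptually, weight streaming might look like the toy loop below, in which the activations stay put while each layer’s weights are fetched from an external store just before they are needed. This is a generic sketch of the idea, assuming a made-up external_store dictionary in place of MemoryX, not Cerebras’ actual software.

```python
import numpy as np

# Conceptual sketch of weight streaming (not Cerebras' implementation): the
# activations stay on the "chip" while each layer's weights are pulled from an
# external store right before that layer runs, so the model's total parameter
# count isn't limited by on-chip memory.

rng = np.random.default_rng(0)

# Stand-in for an external memory unit: weights live off-chip, keyed by layer.
external_store = {
    f"layer_{i}": rng.normal(scale=0.1, size=(512, 512)) for i in range(4)
}

def forward(x):
    for i in range(4):
        w = external_store[f"layer_{i}"]   # stream this layer's weights in
        x = np.maximum(x @ w, 0.0)         # compute on-"chip", then discard w
    return x

out = forward(rng.normal(size=(8, 512)))    # batch of 8 activation vectors
print(out.shape)                             # (8, 512)
```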

By combining these two innovations, the company says, they can train networks two orders of magnitude larger than anything that exists today. Other advances announced at the same time include the ability to run extremely sparse (and therefore efficient) neural networks, and a new communication system dubbed SwarmX that makes it possible to link up to 192 chips to create a combined total of 163 million cores.

How much all this cutting-edge technology will cost and who is in a position to take advantage of it is unclear. “This is highly specialized stuff,” Mike Demler, a senior analyst with the Linley Group, told Wired. “It only makes sense for training the very largest models.”

While the size of AI models has been increasing rapidly, it’s likely to be years before anyone can push the WSE-2 to its limits. And despite the insinuations in Cerebras’ press material, just because the parameter count roughly matches the number of synapses in the brain, that doesn’t mean the new chip will be able to run models anywhere close to the brain’s complexity or performance.

There’s a major debate in AI circles today over whether we can achieve general artificial intelligence by simply building larger neural networks, or whether this will require new theoretical breakthroughs. So far, increasing parameter counts has led to pretty consistent jumps in performance. A two-order-of-magnitude improvement over today’s largest models would undoubtedly be significant.

It’s still far from clear whether that trend will hold out, but Cerebras’ new chip could get us considerably closer to an answer.

Image Credit: Cerebras

Posted in Human Robots

#439168 The World’s Biggest AI Chip Now Comes ...

The world’s biggest AI chip just doubled its specs—without adding an inch.

The Cerebras Systems Wafer Scale Engine is about the size of a big dinner plate. All that surface area enables a lot more of everything, from processors to memory. The first WSE chip, released in 2019, had an incredible 1.2 trillion transistors and 400,000 processing cores. Its successor doubles everything, except its physical size.

The WSE-2 crams in 2.6 trillion transistors and 850,000 cores on the same dinner plate. Its on-chip memory has increased from 18 gigabytes to 40 gigabytes, and the rate it shuttles information to and from said memory has gone from 9 petabytes per second to 20 petabytes per second.

It’s a beast any way you slice it.

The WSE-2 is manufactured by Taiwan Semiconductor Manufacturing Company (TSMC), and it was a jump from TSMC’s 16-nanometer chipmaking process to its 7-nanometer process—skipping the 10-nanometer node—that enabled most of the WSE-2’s gains.

This required changes to the physical design of the chip, but Cerebras says they also made improvements to each core above and beyond what was needed to make the new process work. The updated mega-chip should be a lot faster and more efficient.

Why Make Giant Computer Chips?
While graphics processing units (GPUs) still reign supreme in artificial intelligence, they weren’t made for AI in particular. Rather, GPUs were first developed and used for graphics-heavy applications like gaming.

They’ve done amazing things for AI and supercomputing, but in the last several years, specialized chips made for AI have been on the up and up.

Cerebras is one of the contenders, alongside other up-and-comers like Graphcore and SambaNova and more familiar names like Intel and NVIDIA.

The company likes to compare the WSE-2 to a top AI processor (NVIDIA’s A100) to underscore just how different it is from the competition. The A100 has two percent of the WSE-2’s transistor count (54.2 billion), occupying a little under two percent of the surface area. It’s much smaller, but the A100’s might is more fully realized when hundreds or thousands of chips are linked together in a larger system.

In contrast, the WSE-2 reduces the cost and complexity of linking all those chips together by jamming as much processing and memory as possible onto a single wafer of silicon. At the same time, removing the need to move data between lots of chips spread out on multiple server racks dramatically increases speed and efficiency.

The chip’s design gives its small, speedy cores their own dedicated memory and facilitates quick communication between cores. And Cerebras’s compiling software works with machine learning models using standard frameworks—like PyTorch and TensorFlow—to make distributing tasks among the chip’s cores fairly painless.
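In practice, that means researchers write the model the way they always do. Here is a minimal, ordinary PyTorch definition of the kind such a compiler would ingest; the Cerebras-specific compilation and launch steps are left out, since their API isn’t described here.

```python
import torch
from torch import nn

# A plain PyTorch model of the sort Cerebras' software stack is described as
# ingesting. Nothing here is Cerebras-specific: the point is that the model is
# written against a standard framework, and the vendor compiler (not shown)
# handles distributing the work across the chip's cores.

class TinyClassifier(nn.Module):
    def __init__(self, in_dim=784, hidden=256, classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, classes),
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier()
logits = model(torch.randn(32, 784))   # one batch of dummy inputs
print(logits.shape)                     # torch.Size([32, 10])
```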

The approach is called wafer-scale computing because the chip is the size of a standard silicon wafer from which many chips are normally cut. Wafer-scale computing has been on the radar for years, but Cerebras is the first to make a commercially viable chip.

The chip comes packaged in a computer system called the CS-2. The system includes cooling and power supply and fits in about a third of a standard server rack.

After the startup announced the chip in 2019, it began working with a growing list of customers. Cerebras counts GlaxoSmithKline, Lawrence Livermore National Lab, and Argonne National Lab (among others) as customers, alongside unnamed partners in pharmaceuticals, biotech, manufacturing, and the military. Many applications have been in AI, but not all. Last year, the company said the National Energy Technology Laboratory (NETL) used the chip to outpace a supercomputer in a simulation of fluid dynamics.

Will Wafer-Scale Go Big?
Whether wafer-scale computing catches on remains to be seen.

Cerebras says their chip significantly speeds up machine learning tasks, and testimony from early customers—some of which claim pretty big gains—supports this. But there aren’t yet independent head-to-head comparisons. Neither Cerebras nor most other AI hardware startups, for example, took part in a recent MLPerf benchmark test of AI systems. (The top systems nearly all used NVIDIA GPUs to accelerate their algorithms.)

According to IEEE Spectrum, Cerebras says they’d rather let interested buyers test the system on their own specific neural networks as opposed to selling them on a more general and potentially less applicable benchmark. This isn’t an uncommon approach. AI analyst Karl Freund said, “Everybody runs their own models that they developed for their own business. That’s the only thing that matters to buyers.”

It’s also worth noting the WSE can only handle tasks small enough to fit on its chip. The company says most suitable problems it’s encountered can fit, and the WSE-2 delivers even more space. Still, the size of machine learning algorithms is growing rapidly. Which is perhaps why Cerebras is keen to note that two or even three CS-2s can fit into a server cabinet.

Ultimately, the WSE-2 doesn’t make sense for smaller tasks in which one or a few GPUs will do the trick. At the moment the chip is being used in large, compute-heavy projects in science and research. Current applications include cancer research and drug discovery, gravitational wave detection, and fusion simulation. Cerebras CEO and cofounder Andrew Feldman says it may also be made available to customers with shorter-term, less intensive needs on the cloud.

The market for the chip is niche, but Feldman told HPCwire it’s bigger than he anticipated in 2015, and it’s still growing as new approaches to AI are continually popping up. “The market is moving unbelievably quickly,” he said.

The increasing competition between AI chips is worth watching. There may end up being several fit-to-purpose approaches or one that rises to the top.

For the moment, at least, it appears there’s some appetite for a generous helping of giant computer chips.

Image Credit: Cerebras

Posted in Human Robots

#439042 How Scientists Used Ultrasound to Read ...

Thanks to neural implants, mind reading is no longer science fiction.

As I’m writing this sentence, a tiny chip with arrays of electrodes could sit on my brain, listening in on the crackling of my neurons firing as my hands dance across the keyboard. Sophisticated algorithms could then decode these electrical signals in real time. My brain’s inner language to plan and move my fingers could then be used to guide a robotic hand to do the same. Mind-to-machine control, voilà!

Yet as the name implies, even the most advanced neural implant has a problem: it’s an implant. For electrodes to reliably read the brain’s electrical chatter, they need to pierce through the brain’s protective membrane and into brain tissue. Danger of infection aside, over time, damage accumulates around the electrodes, distorting their signals or even rendering them unusable.

Now, researchers from Caltech have paved a way to read the brain without any physical contact. Key to their device is a relatively new superstar in neuroscience: functional ultrasound, which uses sound waves to capture activity in the brain.

In monkeys, the technology could reliably predict their eye movement and hand gestures after just a single trial—without the usual lengthy training process needed to decode a movement. If adopted by humans, the new mind-reading tech represents a triple triumph: it requires minimal surgery and minimal learning, but yields maximal resolution for brain decoding. For people who are paralyzed, it could be a paradigm shift in how they control their prosthetics.

“We pushed the limits of ultrasound neuroimaging and were thrilled that it could predict movement,” said study author Dr. Sumner Norman.

To Dr. Krishna Shenoy at Stanford, who was not involved, the study will finally put ultrasound “on the map as a brain-machine interface technique. Adding to this toolkit is spectacular,” he said.

Breaking the Sound Barrier
Using sound to decode brain activity might seem preposterous, but ultrasound has had quite the run in medicine. You’ve probably heard of its most common use: taking photos of a fetus in pregnancy. The technique uses a transducer, which emits ultrasound pulses into the body and finds boundaries in tissue structure by analyzing the sound waves that bounce back.

Roughly a decade ago, neuroscientists realized they could adapt the tech for brain scanning. Rather than directly measuring the brain’s electrical chatter, it looks at a proxy—blood flow. When certain brain regions or circuits are active, the brain requires much more energy, which is provided by increased blood flow. In this way, functional ultrasound works similarly to functional MRI, but at a far higher resolution—roughly ten times, the authors said. Plus, people don’t have to lie very still in an expensive, claustrophobic magnet.

“A key question in this work was: If we have a technique like functional ultrasound that gives us high-resolution images of the brain’s blood flow dynamics in space and over time, is there enough information from that imaging to decode something useful about behavior?” said study author Dr. Mikhail Shapiro.

There’s plenty of reasons for doubt. As the new kid on the block, functional ultrasound has some known drawbacks. A major one: it gives a far less direct signal than electrodes. Previous studies show that, with multiple measurements, it can provide a rough picture of brain activity. But is that enough detail to guide a robotic prosthesis?

One-Trial Wonder
The new study put functional ultrasound to the ultimate test: could it reliably detect movement intention in monkeys? Because their brains are the most similar to ours, rhesus macaque monkeys are often the critical step before a brain-machine interface technology is adapted for humans.

The team first inserted small ultrasound transducers into the skulls of two rhesus monkeys. While it sounds intense, the surgery doesn’t penetrate the brain or its protective membrane; it’s only on the skull. Compared to electrodes, this means the brain itself isn’t physically harmed.

The device is linked to a computer, which controls the direction of sound waves and captures signals from the brain. For this study, the team aimed the pulses at the posterior parietal cortex, a part of the “motor” aspect of the brain, which plans movement. If right now you’re thinking about scrolling down this page, that’s the brain region already activated, before your fingers actually perform the movement.

Then came the tests. The first looked at eye movements—something pretty necessary before planning actual body movements without tripping all over the place. Here, the monkeys learned to focus on a central dot on a computer screen. A second dot, either left or right, then flashed. The monkeys’ task was to flick their eyes to the most recent dot. It’s something that seems easy for us, but requires sophisticated brain computation.

The second task was more straightforward. Rather than just moving their eyes to the second target dot, the monkeys learned to grab and manipulate a joystick to move a cursor to that target.

Using brain imaging to decode the mind and control movement. Image Credit: S. Norman, Caltech
As the monkeys learned, so did the device. Ultrasound data capturing brain activity was fed into a sophisticated machine learning algorithm to guess the monkeys’ intentions. Here’s the kicker: once trained, using data from just a single trial, the algorithm was able to correctly predict the monkeys’ actual eye movement—whether left or right—with roughly 78 percent accuracy. The accuracy for correctly maneuvering the joystick was even higher, at nearly 90 percent.
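Stripped of the neuroscience, the decoding step is a familiar machine learning problem: one imaging snapshot per trial in, one movement label out. The sketch below shows that framing with made-up stand-in data and an off-the-shelf classifier; it is not the study’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Minimal sketch of the decoding framing: treat each trial's functional-
# ultrasound activity (here, fake random data standing in for vascular images)
# as one feature vector and learn a left-vs-right classifier. The study's real
# preprocessing and decoder are not reproduced; this only shows the
# "single trial in, movement direction out" idea.

rng = np.random.default_rng(42)
n_trials, n_voxels = 200, 500
X = rng.normal(size=(n_trials, n_voxels))     # stand-in ultrasound features
y = rng.integers(0, 2, size=n_trials)         # 0 = left, 1 = right

# Inject a weak direction-dependent signal so the toy decoder has something to find.
X[y == 1, :50] += 0.5

decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, X, y, cv=5)  # accuracy on held-out trials
print(scores.mean())
```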

That’s crazy accurate, and very much needed for a mind-controlled prosthetic. If you’re using a mind-controlled cursor or limb, the last thing you’d want is to have to imagine the movement multiple times before you actually click the web button, grab the door handle, or move your robotic leg.

Even more impressive is the resolution. Sound waves seem omnipresent, but with focused ultrasound, it’s possible to measure brain activity at a resolution of 100 microns—roughly 10 neurons in the brain.

A Cyborg Future?
Before you start worrying about scientists blasting your brain with sound waves to hack your mind, don’t worry. The new tech still requires skull surgery, meaning that a small chunk of skull needs to be removed. However, the brain itself is spared. This means that compared to electrodes, ultrasound could cause less damage and potentially offer far longer-lasting mind reading than anything currently possible.

There are downsides. Focused ultrasound is far younger than any electrode-based neural implants, and can’t yet reliably decode 360-degree movement or fine finger movements. For now, the tech requires a wire to link the device to a computer, which is off-putting to many people and will prevent widespread adoption. Add to that the inherent downside of focused ultrasound, which lags behind electrical recordings by roughly two seconds.

All that aside, however, the tech is just tiptoeing into a future where minds and machines seamlessly connect. Ultrasound can penetrate the skull, though not yet at the resolution needed for imaging and decoding brain activity. The team is already working with human volunteers with traumatic brain injuries, who had to have a piece of their skulls removed, to see how well ultrasound works for reading their minds.

“What’s most exciting is that functional ultrasound is a young technique with huge potential. This is just our first step in bringing high performance, less invasive brain-machine interface to more people,” said Norman.

Image Credit: Free-Photos / Pixabay

Posted in Human Robots

#438982 Quantum Computing and Reinforcement ...

Deep reinforcement learning is having a superstar moment.

Powering smarter robots. Simulating human neural networks. Trouncing physicians at medical diagnoses and crushing humanity’s best gamers at Go and Atari. While far from achieving the flexible, quick thinking that comes naturally to humans, this powerful machine learning idea seems unstoppable as a harbinger of better thinking machines.

Except there’s a massive roadblock: they take forever to run. Because the concept behind these algorithms is based on trial and error, a reinforcement learning AI “agent” only learns after being rewarded for its correct decisions. For complex problems, the time it takes an AI agent to try and fail to learn a solution can quickly become untenable.

But what if you could try multiple solutions at once?

This week, an international collaboration led by Dr. Philip Walther at the University of Vienna took the “classic” concept of reinforcement learning and gave it a quantum spin. They designed a hybrid AI that relies on both quantum and run-of-the-mill classic computing, and showed that—thanks to quantum quirkiness—it could simultaneously screen a handful of different ways to solve a problem.

The result is a reinforcement learning AI that learned over 60 percent faster than its non-quantum-enabled peers. This is one of the first tests that shows adding quantum computing can speed up the actual learning process of an AI agent, the authors explained.

Although only challenged with a “toy problem” in the study, the hybrid AI, once scaled, could impact real-world problems such as building an efficient quantum internet. The setup “could readily be integrated within future large-scale quantum communication networks,” the authors wrote.

The Bottleneck
Learning from trial and error comes intuitively to our brains.

Say you’re trying to navigate a new convoluted campground without a map. The goal is to get from the communal bathroom back to your campsite. Dead ends and confusing loops abound. We tackle the problem by deciding to turn either left or right at every branch in the road. One will get us closer to the goal; the other leads to a half hour of walking in circles. Eventually, our brain chemistry rewards correct decisions, so we gradually learn the correct route. (If you’re wondering…yeah, true story.)

Reinforcement learning AI agents operate in a similar trial-and-error way. As a problem becomes more complex, the number—and time—of each trial also skyrockets.

“Even in a moderately realistic environment, it may simply take too long to rationally respond to a given situation,” explained study author Dr. Hans Briegel at the Universität Innsbruck in Austria, who previously led efforts to speed up AI decision-making using quantum mechanics. If there’s pressure that allows “only a certain time for a response, an agent may then be unable to cope with the situation and to learn at all,” he wrote.

Many attempts have tried speeding up reinforcement learning. Giving the AI agent a short-term “memory.” Tapping into neuromorphic computing, which better resembles the brain. In 2014, Briegel and colleagues showed that a “quantum brain” of sorts can help propel an AI agent’s decision-making process after learning. But speeding up the learning process itself has eluded our best attempts.

The Hybrid AI
The new study went straight for that previously untenable jugular.

The team’s key insight was to tap into the best of both worlds—quantum and classical computing. Rather than building an entire reinforcement learning system using quantum mechanics, they turned to a hybrid approach that could prove to be more practical. Here, the AI agent uses quantum weirdness as it’s trying out new approaches—the “trial” in trial and error. The system then passes the baton to a classical computer to give the AI its reward—or not—based on its performance.

At the heart of the quantum “trial” process is a quirk called superposition. Stay with me. Our computers are powered by electrons, which can represent only two states—0 or 1. Quantum mechanics is far weirder, in that photons (particles of light) can simultaneously be both 0 and 1, with a slightly different probability of “leaning towards” one or the other.

This noncommittal oddity is part of what makes quantum computing so powerful. Take our reinforcement learning example of navigating a new campsite. In our classic world, we—and our AI—need to decide between turning left or right at an intersection. In a quantum setup, however, the AI can (in a sense) turn left and right at the same time. So when searching for the correct path back to home base, the quantum system has a leg up in that it can simultaneously explore multiple routes, making it far faster than conventional, consecutive trial and error.

“As a consequence, an agent that can explore its environment in superposition will learn significantly faster than its classical counterpart,” said Briegel.
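A classic way to see why superposition helps with search is Grover-style amplitude amplification. The toy below is a generic illustration, not the photonic circuit used in the study: it puts four candidate “paths” into an equal superposition, and a single phase-flip “query” of the rewarded path plus one reflection makes that path certain on measurement, whereas a classical agent would have to test paths one by one.

```python
import numpy as np

# Toy Grover-style amplitude amplification over four candidate "paths",
# only one of which is rewarded. One oracle call in superposition makes the
# rewarded path certain; classical guessing needs 2-3 tries on average.

n_paths = 4
rewarded = 2                                    # index of the correct path

state = np.full(n_paths, 1 / np.sqrt(n_paths))  # uniform superposition

# Oracle: flip the phase of the rewarded path (one "query" of the environment).
state[rewarded] *= -1

# Diffusion: reflect every amplitude about the mean amplitude.
state = 2 * state.mean() - state

probabilities = state ** 2
print(probabilities)                             # [0. 0. 1. 0.] -> certainty
```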

It’s not all theory. To test out their idea, the team turned to a programmable chip called a nanophotonic processor. Think of it as a CPU-like computer chip, but it processes particles of light—photons—rather than electricity. These light-powered chips have been a long time in the making. Back in 2017, for example, a team from MIT built a fully optical neural network into an optical chip to bolster deep learning.

The chips aren’t all that exotic. Nanophotonic processors act kind of like our eyeglasses, which can carry out complex calculations that transform light that passes through them. In the glasses case, they let people see better. For a light-based computer chip, it allows computation. Rather than using electrical cables, the chips use “wave guides” to shuttle photons and perform calculations based on their interactions.

The “error” or “reward” part of the new hardware comes from a classical computer. The nanophotonic processor is coupled to a traditional computer, where the latter provides the quantum circuit with feedback—that is, whether to reward a solution or not. This setup, the team explains, allows them to more objectively judge any speed-ups in learning in real time.

In this way, a hybrid reinforcement learning agent alternates between quantum and classical computing, trying out ideas in wibbly-wobbly “multiverse” land while obtaining feedback in grounded, classic physics “normality.”

A Quantum Boost
In simulations using 10,000 AI agents and actual experimental data from 165 trials, the hybrid approach, when challenged with a more complex problem, showed a clear leg up.

The key word is “complex.” The team found that if an AI agent has a high chance of figuring out the solution anyway—as for a simple problem—then classical computing works pretty well. The quantum advantage blossoms when the task becomes more complex or difficult, allowing quantum mechanics to fully flex its superposition muscles. For these problems, the hybrid AI was 63 percent faster at learning a solution compared to traditional reinforcement learning, decreasing its learning effort from 270 guesses to 100.

Now that scientists have shown a quantum boost for reinforcement learning speeds, the race for next-generation computing is even more lit. Photonics hardware required for long-range light-based communications is rapidly shrinking, while improving signal quality. The partial-quantum setup could “aid specifically in problems where frequent search is needed, for example, network routing problems” that are prevalent in keeping the internet running smoothly, the authors wrote. With a quantum boost, reinforcement learning may be able to tackle far more complex problems—those in the real world—than currently possible.

“We are just at the beginning of understanding the possibilities of quantum artificial intelligence,” said lead author Walther.

Image Credit: Oleg Gamulinskiy from Pixabay

Posted in Human Robots