Tag Archives: ai

#439831 Tesla’s Tesla Bot

Here’s Elon Musk hyping “Tesla Bot,” said to be a general-purpose, bipedal, humanoid robot that will perform “unsafe, repetitive or boring” tasks.

Posted in Human Robots

#439934 New Spiking Neuromorphic Chip Could ...

When it comes to brain computing, timing is everything. It’s how neurons wire up into circuits. It’s how these circuits process highly complex data, leading to actions that can mean life or death. It’s how our brains can make split-second decisions, even when faced with entirely new circumstances. And they do all this without frying themselves with excessive energy consumption.

In other words, the brain is an excellent example of an extremely powerful computer to mimic, and computer scientists and engineers have taken the first steps towards doing so. The field of neuromorphic computing looks to recreate the brain’s architecture and data processing abilities with novel hardware chips and software algorithms. It may be a pathway towards true artificial intelligence.

But one crucial element is lacking. Most algorithms that power neuromorphic chips care only about each artificial neuron’s contribution: how strongly it connects to its neighbors, dubbed “synaptic weight.” What’s missing, yet fundamental to our brain’s inner workings, is timing.

This month, a team affiliated with the Human Brain Project, the European Union’s flagship big data neuroscience endeavor, added the element of time to a neuromorphic algorithm. The results were then implemented on physical hardware—the BrainScaleS-2 neuromorphic platform—and pitted against state-of-the-art GPUs and conventional neuromorphic solutions.

“Compared to the abstract neural networks used in deep learning, the more biological archetypes…still lag behind in terms of performance and scalability” due to their inherent complexity, the authors said.

In several tests, the algorithm compared “favorably, in terms of accuracy, latency, and energy efficiency” on a standard benchmark test, said Dr. Charlotte Frenkel at the University of Zurich and ETH Zurich in Switzerland, who was not involved in the study. By adding a temporal component into neuromorphic computing, we could usher in a new era of highly efficient AI that moves from static data tasks—say, image recognition—to one that better encapsulates time. Think videos, biosignals, or brain-to-computer speech.

To lead author Dr. Mihai Petrovici, the potential goes both ways. “Our work is not only interesting for neuromorphic computing and biologically inspired hardware. It also acknowledges the demand … to transfer so-called deep learning approaches to neuroscience and thereby further unveil the secrets of the human brain,” he said.

Let’s Talk Spikes
At the root of the new algorithm is a fundamental principle in brain computing: spikes.

Let’s take a look at a highly abstracted neuron. It’s like a Tootsie Roll, with a bulbous middle section flanked by two outward-reaching wrappers. One side is the input, an intricate tree that receives signals from a previous neuron. The other is the output, blasting signals to other neurons using bubble-like ships filled with chemicals, which in turn trigger an electrical response on the receiving end.

Here’s the crux: for this entire sequence to occur, the neuron has to “spike.” If, and only if, the neuron receives a high enough level of input—a nicely built-in noise reduction mechanism—the bulbous part will generate a spike that travels down the output channels to alert the next neuron.

But neurons don’t just use one spike to convey information. Rather, they spike in a time sequence. Think of it like Morse code: the timing of when an electrical burst occurs carries a wealth of data. It’s the basis for neurons wiring up into circuits and hierarchies, allowing highly energy-efficient processing.

So why not adopt the same strategy for neuromorphic computers?

A Spartan Brain-Like Chip
Instead of mapping out a single artificial neuron’s spikes—a Herculean task—the team homed in on a single metric: how long it takes for a neuron to fire.

The idea behind “time-to-first-spike” code is simple: the longer it takes a neuron to spike, the lower its activity levels. Compared to counting spikes, it’s an extremely sparse way to encode a neuron’s activity, but comes with perks. Because only the latency to the first time a neuron perks up is used to encode activation, it captures the neuron’s responsiveness without overwhelming a computer with too many data points. In other words, it’s fast, energy-efficient, and easy.
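
As a rough illustration, and not the authors’ exact scheme, time-to-first-spike encoding can be sketched in a few lines of Python: stronger activations map to earlier spikes, and sub-threshold activity never spikes at all. The linear mapping, threshold, and time window below are illustrative assumptions.

import numpy as np

def time_to_first_spike(activations, t_max=20.0, threshold=0.05):
    """Encode activation levels as first-spike latencies (illustrative).

    Stronger activations spike sooner; activations below `threshold`
    never spike, represented here as infinity.
    """
    activations = np.asarray(activations, dtype=float)
    spike_times = np.full(activations.shape, np.inf)  # default: no spike
    active = activations >= threshold
    # Higher activation -> shorter latency, scaled into [0, t_max].
    spike_times[active] = t_max * (1.0 - activations[active])
    return spike_times

# A strongly driven neuron fires early; a weak one fires late or never.
print(time_to_first_spike([0.9, 0.5, 0.01]))  # [ 2. 10. inf]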

The team next encoded the algorithm onto a neuromorphic chip—the BrainScaleS-2, which roughly emulates simple “neurons” inside its structure but runs over 1,000 times faster than our biological brains. The platform has over 500 physical artificial neurons, each capable of receiving 256 inputs through configurable synapses, the junctions where biological neurons swap, process, and store information.

The setup is a hybrid. “Learning” is achieved on a chip that implements the time-dependent algorithm. However, any updates to the neural circuit—that is, how strongly one neuron connects to another—are achieved through an external workstation, an approach dubbed “in-the-loop training.”
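
The division of labor looks roughly like the loop below, a minimal sketch with a stand-in chip object and a deliberately simple update rule; the real BrainScaleS-2 API and the authors’ actual learning rule aren’t reproduced here.

import numpy as np

class MockChip:
    """Stand-in for the BrainScaleS-2 platform; the real API differs."""
    def __init__(self, n_in, n_out, seed=0):
        self.weights = np.random.default_rng(seed).normal(0, 0.1, (n_out, n_in))
    def run(self, x):
        # Placeholder for the analog spiking forward pass done on-chip.
        return self.weights @ x
    def upload_weights(self, w):
        self.weights = w

rng = np.random.default_rng(1)
dataset = [(rng.random(256), rng.random(10)) for _ in range(100)]  # dummy data

chip, lr = MockChip(n_in=256, n_out=10), 0.01
for x, target in dataset:
    output = chip.run(x)                        # forward pass on the hardware
    grads = np.outer(output - target, x)        # update computed off-chip...
    chip.upload_weights(chip.weights - lr * grads)  # ...then written back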

In a first test, the algorithm was challenged with the “Yin-Yang” task, which requires the algorithm to parse different areas in the traditional Eastern symbol. The algorithm excelled, with an average of 95 percent accuracy.

The team next challenged the setup with a classic deep learning task—MNIST, a dataset of handwritten numbers that revolutionized computer vision. The algorithm excelled again, with nearly 97 percent accuracy. Even more impressive, the BrainScaleS-2 system took less than one second to classify 10,000 test samples, with extremely low relative energy consumption.

Putting these results into context, the team next compared BrainScaleS-2’s performance—armed with the new algorithm—to commercial and other neuromorphic platforms. Take SpiNNaker, a massively parallel distributed architecture that also mimics neural computing and spikes. The new algorithm was over 100 times faster at image recognition while consuming just a fraction of the power SpiNNaker consumes. Similar results were seen with TrueNorth, IBM’s pioneering neuromorphic chip.

What Next?
The brain’s two most valuable computing features—energy efficiency and parallel processing—are now heavily inspiring the next generation of computer chips. The goal? Build machines that are as flexible and adaptive as our own brains while using just a fraction of the energy required for our current silicon-based chips.

Yet compared to deep learning, which relies on artificial neural networks, biologically plausible networks have languished. Part of this, explained Frenkel, is the difficulty of “updating” these circuits through learning. However, with BrainScaleS-2 and a touch of timing data, it’s now possible.

At the same time, having an “external” arbitrator for updating synaptic connections gives the whole system some time to breathe. Neuromorphic hardware, similar to the messiness of our brain computation, is littered with mismatches and errors. With the chip and an external arbitrator, the whole system can learn to adapt to this variability, and eventually compensate for—or even exploit—its quirks for faster and more flexible learning.

For Frenkel, the algorithm’s power lies in its sparseness. The brain, she explained, is powered by sparse codes that “could explain the fast reaction times…such as for visual processing.” Rather than activating entire brain regions, only a few neural networks are needed—like whizzing down empty highways instead of getting stuck in rush hour traffic.

Despite its power, the algorithm still has hiccups. It struggles with interpreting static data, although it excels with time sequences—for example, speech or biosignals. But to Frenkel, it’s the start of a new framework: important information can be encoded with a flexible but simple metric, and generalized to enrich brain- and AI-based data processing with a fraction of the traditional energy costs.

“[It]…may be an important stepping-stone for spiking neuromorphic hardware to finally demonstrate a competitive advantage over conventional neural network approaches,” she said.

Image Credit: Classifying data points in the Yin-Yang dataset, by Göltz and Kriener et al. (Heidelberg / Bern)

Posted in Human Robots

#439884 This Spooky, Bizarre Haunted House Was ...

AI is slowly getting more creative, and as it does it’s raising questions about the nature of creativity itself, who owns works of art made by computers, and whether conscious machines will make art humans can understand. In the spooky spirit of Halloween, one engineer used an AI to produce a very specific, seasonal kind of “art”: a haunted house.

It’s not a brick-and-mortar house you can walk through, unfortunately; like so many things these days, it’s virtual, and was created by research scientist and writer Janelle Shane. Shane runs a machine learning humor blog called AI Weirdness where she writes about the “sometimes hilarious, sometimes unsettling ways that machine learning algorithms get things wrong.”

For the virtual haunted house, Shane used CLIP, a neural network built by OpenAI, and VQGAN, a neural network architecture that combines convolutional neural networks (which are typically used for images) with transformers (which are typically used for language).

CLIP (short for Contrastive Language–Image Pre-training) learns visual concepts from natural language supervision, training on images paired with their descriptions so it can rate how well a given image matches a phrase. This approach enables zero-shot learning: with less reliance on labeled data, the model can recognize objects or images it hasn’t seen before.
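
For a sense of what “rating how well an image matches a phrase” looks like in practice, here’s a minimal sketch using the openly released CLIP checkpoint via the Hugging Face transformers library. The file name and prompts are placeholders, and this is only the scoring half; Shane’s setup also pairs CLIP with VQGAN to actually modify the image.

from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("victorian_house.jpg")  # placeholder file name
prompts = ["a haunted Victorian house", "a Victorian house"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Higher score = CLIP thinks the image matches that phrase better.
scores = outputs.logits_per_image.softmax(dim=1)
for phrase, score in zip(prompts, scores[0].tolist()):
    print(f"{phrase}: {score:.2f}")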

The phrase Shane focused on for this experiment was “haunted Victorian house,” starting with a photo of a regular Victorian house then letting the AI use its feedback to modify the image with details it associated with the word “haunted.”

Image Credit: Josephyurko, CC BY-SA 4.0
The results are somewhat ghoulish, though also perplexing. In the first iteration, the home’s wood has turned to stone, the windows are covered in something that could be cobwebs, the cloudy sky has a dramatic tilt to it, and there appears to be fire on the house’s lower level.

Image Credit: Janelle Shane, AI Weirdness
Shane then upped the ante and instructed the model to create an “extremely haunted” Victorian house. The second iteration looks a little more haunted, but also a little less like a house in general, partly because there appears to be a piece of night sky under the house’s roof near its center.

Image Credit: Janelle Shane, AI Weirdness
Shane then tried taking the word “haunted” out of the instructions, and things just got more bizarre from there. She wrote in her blog post about the project, “Apparently CLIP has learned that if you want to make things less haunted, add flowers, street lights, and display counters full of snacks.”

Image Credit: Janelle Shane, AI Weirdness
“All the AI’s changes tend to make the house make less sense,” Shane said. “That’s because it’s easier for it to look at tiny details like mist than the big picture like how a house fits together. In a lot of what AI does, it’s working on the level of surface details rather than deeper meaning.”

Shane’s description matches up with where AI stands as a field. Despite impressive progress in fields like protein folding, RNA structure, natural language processing, and more, AI has not yet approached “general intelligence” and is still very much in the “narrow” domain. Researcher Melanie Mitchell argues that common fallacies in the field, like using human language to describe machine intelligence, are hampering its advancement; computers don’t really “learn” or “understand” in the way humans do, and adjusting the language we use to describe AI systems could help do away with some of the misunderstandings around their capabilities.

Shane’s haunted house is a clear example of this lack of understanding, and a playful reminder that we should move cautiously in allowing machines to make decisions with real-world impact.

Banner Image Credit: Janelle Shane, AI Weirdness

Posted in Human Robots

#439875 Not So Mysterious After All: Researchers ...

The deep learning neural networks at the heart of modern artificial intelligence are often described as “black boxes” whose inner workings are inscrutable. But new research calls that idea into question, with significant implications for privacy.

Unlike traditional software whose functions are predetermined by a developer, neural networks learn how to process or analyze data by training on examples. They do this by continually adjusting the strength of the links between their many neurons.

By the end of this process, the way they make decisions is tied up in a tangled network of connections that can be impossible to follow. As a result, it’s often assumed that even if you have access to the model itself, it’s more or less impossible to work out the data that the system was trained on.

But a pair of recent papers have brought this assumption into question, according to MIT Technology Review, by showing that two very different techniques can be used to identify the data a model was trained on. This could have serious implications for AI systems trained on sensitive information like health records or financial data.

The first approach takes aim at generative adversarial networks (GANs), the AI systems behind deepfake images. These systems are increasingly being used to create synthetic faces that are supposedly completely unrelated to real people.

But researchers from the University of Caen Normandy in France showed that they could easily link generated faces from a popular model to real people whose data had been used to train the GAN. They did this by getting a second facial recognition model to compare the generated faces against training samples to spot if they shared the same identity.

The images aren’t an exact match, as the GAN has modified them, but the researchers found multiple examples where generated faces were clearly linked to images in the training data. In a paper describing the research, they point out that in many cases the generated face is simply the original face in a different pose.
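
In outline, such an identity check could look like the sketch below: embed each generated face and each training face with any off-the-shelf face recognition encoder, then flag pairs whose embeddings are suspiciously similar. The cosine metric and the 0.8 threshold are illustrative assumptions, not the paper’s actual procedure.

import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_identity_matches(generated_embs, training_embs, threshold=0.8):
    """Flag (generated, training) pairs whose face embeddings nearly
    coincide -- evidence the GAN leaked a training identity. Embeddings
    come from any face recognition model; the threshold is illustrative."""
    matches = []
    for i, g in enumerate(generated_embs):
        for j, t in enumerate(training_embs):
            score = cosine_similarity(g, t)
            if score >= threshold:
                matches.append((i, j, score))
    return matches

# Toy demo: random embeddings share no identity, so no matches print.
rng = np.random.default_rng(0)
gen, train = rng.standard_normal((5, 128)), rng.standard_normal((20, 128))
print(find_identity_matches(gen, train))  # [] for unrelated faces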

While the approach is specific to face-generation GANs, the researchers point out that similar ideas could be applied to things like biometric data or medical images. Another, more general approach to reverse engineering neural nets could do that right off the bat, though.

A group from Nvidia has shown that they can infer the data a model was trained on without even seeing any examples of the training data. They used an approach called model inversion, which effectively runs the neural net in reverse. This technique is often used to analyze neural networks, but using it to recover the input data had only been achieved on simple networks under very specific sets of assumptions.

In a recent paper, the researchers described how they were able to scale the approach to large networks by splitting the problem up and carrying out inversions on each of the networks’ layers separately. With this approach, they were able to recreate training data images using nothing but the models themselves.
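
As a toy illustration of the core idea, the sketch below inverts a single layer by gradient descent in PyTorch: start from noise and optimize an input until the layer’s output matches a target activation, then repeat layer by layer from the output back toward the input. The optimizer, loss, and step count are illustrative assumptions; the actual paper adds considerably more machinery.

import torch

def invert_layer(layer, target_activation, input_shape, steps=500, lr=0.1):
    """Find an input whose output through `layer` matches `target_activation`."""
    x = torch.randn(input_shape, requires_grad=True)  # start from noise
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(layer(x), target_activation)
        loss.backward()
        opt.step()
    return x.detach()

# Toy demo: find an input consistent with one layer's observed output.
layer = torch.nn.Linear(64, 32)
secret_input = torch.randn(1, 64)
recovered = invert_layer(layer, layer(secret_input).detach(), (1, 64))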

While carrying out either attack is a complex process that requires intimate access to the model in question, both highlight the fact that AIs may not be the black boxes we thought they were, and determined attackers could extract potentially sensitive information from them.

Given that it’s becoming increasingly easy to reverse-engineer someone else’s model using your own AI, the requirement to have access to the neural network isn’t even that big a barrier.

The problem isn’t restricted to image-based algorithms. Last year, researchers from a consortium of tech companies and universities showed that they could extract news headlines, JavaScript code, and personally identifiable information from the large language model GPT-2.

These issues are only going to become more pressing as AI systems push their way into sensitive areas like health, finance, and defense. There are some solutions on the horizon, such as differential privacy, where carefully calibrated noise is added during training so that models capture the statistical features of aggregated data without memorizing individual data points, or homomorphic encryption, an emerging paradigm that makes it possible to compute directly on encrypted data.
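
To give a flavor of the differential privacy idea, here’s a textbook sketch of the Laplace mechanism, which releases an aggregate statistic with calibrated noise so no single record can be pinned down. Real training pipelines (such as DP-SGD) apply noise to gradients instead, and the numbers below are arbitrary.

import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a statistic with epsilon-differential privacy by adding
    Laplace noise scaled to sensitivity / epsilon (textbook sketch)."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

# A count query changes by at most 1 if one person is removed (sensitivity=1).
print(laplace_mechanism(true_value=1_000_000, sensitivity=1, epsilon=0.5))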

But these approaches are still a long way from being standard practice, so for the time being, entrusting your data to the black box of AI may not be as safe as you think.

Image Credit: Connect world / Shutterstock.com

Posted in Human Robots

#439842 AI-Powered Brain Implant Eases Severe ...

Sarah hadn’t laughed in five years.

At 36 years old, the avid home cook has struggled with depression since early childhood. She tried the whole range of antidepressant medications and therapy for decades. Nothing worked. One night, five years ago, driving home from work, she had one thought in her mind: this is it. I’m done.

Luckily she made it home safe. And soon she was offered an intriguing new possibility to tackle her symptoms—a little chip, implanted into her brain, that captures the unique neural signals encoding her depression. Once the implant detects those signals, it zaps them away with a brief electrical jolt, like adding noise to an enemy’s digital transmissions to scramble their original message. When that message triggers depression, hijacking neural communications is exactly what we want to do.

Flash forward several years, and Sarah has her depression under control for the first time in her life. Her suicidal thoughts evaporated. After quitting her tech job due to her condition, she’s now back on her feet, enrolled in data analytics classes and taking care of her elderly mother. “For the first time,” she said, “I’m finally laughing.”

Sarah’s recovery is just one case. But it signifies a new era for the technology underlying her stunning improvement. It’s one of the first cases in which a personalized “brain pacemaker” can stealthily tap into, decipher, and alter a person’s mood and introspection based on their own unique electrical brain signatures. And while those implants have achieved stunning medical miracles in other areas—such as allowing people with paralysis to walk again—Sarah’s recovery is some of the strongest evidence yet that a computer chip, in a brain, powered by AI, can fundamentally alter our perception of life. It’s the closest to reading and repairing a troubled mind that we’ve ever gotten.

“We haven’t been able to do this kind of personalized therapy previously in psychiatry,” said study lead Dr. Katherine Scangos at UCSF. “This success in itself is an incredible advancement in our knowledge of the brain function that underlies mental illness.”

Brain Pacemaker
The key to Sarah’s recovery is a brain-machine interface.

Roughly the size of a matchbox, the implant sits inside the brain, silently listening to and decoding its electrical signals. Using those signals, it’s possible to control other parts of the brain or body. Brain implants have given people with lower body paralysis the ability to walk again. They’ve allowed amputees to control robotic hands with just a thought. They’ve opened up a world of sensations, integrating feedback from cyborg-like artificial limbs that transmit signals directly into the brain.

But Sarah’s implant is different.

Sensation and movement are generally controlled by relatively well-defined circuits in the outermost layer of the brain: the cortex. Emotion and mood are also products of our brain’s electrical signals, but they tend to stem from deeper neural networks hidden at the center of the brain. One way to tap into those circuits is called deep brain stimulation (DBS), a method pioneered in the ’80s that’s been used to treat severe Parkinson’s disease and epilepsy, particularly for cases that don’t usually respond to medication.

Sarah’s neural implant takes this route: it listens in on the chatter between neurons deep within the brain to decode mood.

But where is mood in the brain? One particular problem, the authors explained, is that unlike movement, there is no “depression brain region.” Rather, emotions are regulated by intricate, intertwining networks across multiple brain regions. Adding to that complexity is the fact that we’re all neural snowflakes—each of us has uniquely personalized brain network connections.

In other words, zapping my circuit to reduce depression might not work for you. DBS, for example, has previously been studied for treating depression. But despite decades of research, it’s not federally approved due to inconsistent results. The culprit? The electrical stimulation patterns used in those trials were constant and engineered to be one-size-fits-all. Have you ever tried buying socks or PJs at a department store, seen the tag that says “one size,” and found they don’t fit? Yeah. DBS has brought about remarkable improvements for some people with depression—ill-fitting socks are better than none in a pinch. But with increasingly sophisticated neuroengineering methods, we can do better.

The solution? Let’s make altering your brain more personal.

Unconscious Reprieve
That’s the route Sarah’s doctors, including UCSF neurosurgeon Dr. Edward Chang and his colleagues, took in the new study.

The first step in detecting depression-related activity in the brain was to be able to listen in. The team implanted 10 electrodes in Sarah’s brain, targeting multiple regions encoding emotion-related circuits. They then recorded electrical signals from these regions over the course of 10 days, while Sarah journaled about how she felt each day—happy or low. In the background, the team peeked into her brain activity patterns, a symphony of electrical signals in multiple frequencies, like overlapping waves on the ocean.

One particular brain wave emerged. It stemmed from the amygdala, a region normally involved in fear, lust, and other powerful emotions. Software-based mapping pinpointed the node as a powerful guide to Sarah’s mental state.

In contrast, another area tucked deep inside the brain, the ventral capsule/ventral striatum (VC/VS), emerged as a place to stimulate with little bouts of electricity to disrupt patterns leading to feelings of depression.

The team next implanted an FDA-approved neural pacemaker into the right hemisphere of Sarah’s brain, with two sensing leads to capture activity from the amygdala and two stimulating wires to zap the VC/VS. The implant was previously used in epilepsy treatments and continuously senses neural activity. It’s both off-the-shelf and programmable, in that the authors could instruct it to detect “pre-specified patterns of activation” related to Sarah’s depressive episodes, and deliver short bursts of electrical stimulation only then. Just randomly stimulating the amygdala could “actually cause more stress and more depression symptoms,” said Dr. Chang in a press conference.
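
In spirit, the closed loop works something like the sketch below: watch the amygdala recording for a pre-specified pattern, and trigger a brief burst of stimulation only when it appears. The band-power feature, threshold, sampling rate, and burst length are stand-ins; the real device’s detector was calibrated to Sarah’s own biomarker.

import numpy as np

def biomarker_detected(signal, fs=250, band=(1.0, 8.0), threshold=0.2):
    """Toy detector: flag when the fraction of spectral power inside a
    low-frequency band exceeds a threshold. Band, threshold, and sampling
    rate are stand-ins, not the device's calibrated parameters."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum() / power.sum() > threshold

def closed_loop_step(amygdala_signal, stimulate_vcvs):
    # Zap the VC/VS only when the depression-linked pattern shows up.
    if biomarker_detected(amygdala_signal):
        stimulate_vcvs(duration_s=6.0)  # brief burst; duration illustrative

# Demo: a strong 4 Hz oscillation trips the detector; the lambda stands in
# for the implant's stimulation call.
t = np.arange(250) / 250.0
sig = np.sin(2 * np.pi * 4 * t) + 0.1 * np.random.default_rng(0).standard_normal(250)
closed_loop_step(sig, lambda duration_s: print(f"stimulating for {duration_s} s"))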

Brain surgery wasn’t easy. But to Sarah, drilling several holes into her brain was less difficult than the emotional pain of her depression. Every day during the trial, she waved a figure-eight-shaped wand over her head, which wirelessly captured 90 seconds of her brain’s electrical activity as she reported on her mental health.

When the stimulator turned on (even when she wasn’t aware it was on), “a joyous feeling just washed over me,” she said.

A New Neurological Future
For now, the results are just for one person. But if repeated—and Sarah could be a unique case—they suggest we’re finally at the point where we can tap into each unique person’s emotional mindset and fundamentally alter their perception of life.

And with that comes intense responsibility. Sarah’s neural “imprint” of her depression is tailored to her. It might be completely different for someone else. It’s something for future studies to dig into. But what’s clear is that it’s possible to regulate a person’s emotions with an AI-powered brain implant. And if other neurological disorders can be decoded in a similar way, we could use brain pacemakers to treat some of our toughest mental foes.

“God, the color differentiation is gorgeous,” said Sarah as her implant turned on. “I feel alert. I feel present.”

Image Credit: Sarah in her community garden, photo by John Lok/UCSF 2021

Posted in Human Robots