Tag Archives: new

#439938 Tiny bubbles: Researchers develop a ...

Princeton researchers have invented bubble casting, a new way to make soft robots using “fancy balloons” that change shape in predictable ways when inflated with air.

Posted in Human Robots

#439934 New Spiking Neuromorphic Chip Could ...

When it comes to brain computing, timing is everything. It’s how neurons wire up into circuits. It’s how these circuits process highly complex data, leading to actions that can mean life or death. It’s how our brains can make split-second decisions, even when faced with entirely new circumstances. And we do so without frying the brain from extensive energy consumption.

In other words, the brain is an excellent example of an extremely powerful computer to mimic—and computer scientists and engineers have taken the first steps towards doing so. The field of neuromorphic computing looks to recreate the brain’s architecture and data processing abilities with novel hardware chips and software algorithms. It may be a pathway towards true artificial intelligence.

But one crucial element is lacking. Most algorithms that power neuromorphic chips only care about the contribution of each artificial neuron—that is, how strongly it connects to its neighbors, dubbed its “synaptic weight.” What’s missing—yet fundamental to our brain’s inner workings—is timing.

This month, a team affiliated with the Human Brain Project, the European Union’s flagship big data neuroscience endeavor, added the element of time to a neuromorphic algorithm. The results were then implemented on physical hardware—the BrainScaleS-2 neuromorphic platform—and pitted against state-of-the-art GPUs and conventional neuromorphic solutions.

“Compared to the abstract neural networks used in deep learning, the more biological archetypes…still lag behind in terms of performance and scalability” due to their inherent complexity, the authors said.

In several tests, the algorithm compared “favorably, in terms of accuracy, latency, and energy efficiency” on a standard benchmark test, said Dr. Charlotte Frenkel at the University of Zurich and ETH Zurich in Switzerland, who was not involved in the study. By adding a temporal component to neuromorphic computing, we could usher in a new era of highly efficient AI that moves from static data tasks—say, image recognition—to tasks that better encapsulate time. Think videos, biosignals, or brain-to-computer speech.

To lead author Dr. Mihai Petrovici, the potential goes both ways. “Our work is not only interesting for neuromorphic computing and biologically inspired hardware. It also acknowledges the demand … to transfer so-called deep learning approaches to neuroscience and thereby further unveil the secrets of the human brain,” he said.

Let’s Talk Spikes
At the root of the new algorithm is a fundamental principle in brain computing: spikes.

Let’s take a look at a highly abstracted neuron. It’s like a tootsie roll, with a bulbous middle section flanked by two outward-reaching wrappers. One side is the input—an intricate tree that receives signals from a previous neuron. The other is the output, blasting signals to other neurons using bubble-like vesicles filled with chemicals, which in turn trigger an electrical response on the receiving end.

Here’s the crux: for this entire sequence to occur, the neuron has to “spike.” If, and only if, the neuron receives a high enough level of input—a nicely built-in noise reduction mechanism—the bulbous part will generate a spike that travels down the output channels to alert the next neuron.

But neurons don’t just use one spike to convey information. Rather, they spike in a time sequence. Think of it like Morse code: the timing of when an electrical burst occurs carries a wealth of data. It’s the basis for neurons wiring up into circuits and hierarchies, allowing highly energy-efficient processing.
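To make the idea concrete, here is a minimal, purely illustrative sketch in Python (not the study's model) of a leaky integrate-and-fire neuron: it fires only when accumulated input crosses a threshold, and the timing of that spike is what carries the information.

```python
import numpy as np

def lif_spike_times(input_current, threshold=1.0, leak=0.95, dt=1.0):
    """Toy leaky integrate-and-fire neuron.

    Accumulates (and slowly leaks) input current, and emits a spike
    whenever the membrane potential crosses the threshold, then resets.
    Returns the spike times, since in spiking networks the timing of
    spikes, not just their count, carries information.
    """
    v = 0.0
    spike_times = []
    for step, current in enumerate(input_current):
        v = leak * v + current * dt   # leaky integration of the input
        if v >= threshold:            # fire only if input was strong enough
            spike_times.append(step * dt)
            v = 0.0                   # reset after the spike
    return spike_times

# A weak input spikes late (and rarely); a strong input spikes early and often.
print(lif_spike_times(np.full(20, 0.08)))   # one late spike
print(lif_spike_times(np.full(20, 0.4)))    # early, regular spikes
```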

So why not adopt the same strategy for neuromorphic computers?

A Spartan Brain-Like Chip
Instead of mapping out a single artificial neuron’s spikes—a Herculean task—the team homed in on a single metric: how long it takes for a neuron to fire.

The idea behind “time-to-first-spike” code is simple: the longer it takes a neuron to spike, the lower its activity levels. Compared to counting spikes, it’s an extremely sparse way to encode a neuron’s activity, and that sparseness comes with perks. Because only the latency to the first time a neuron perks up is used to encode activation, it captures the neuron’s responsiveness without overwhelming a computer with too many data points. In other words, it’s fast, energy-efficient, and easy.
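As an illustration only (not the team's code), time-to-first-spike encoding can be sketched in a few lines of Python: strong inputs fire early, weak inputs fire late, and every input is summarized by a single latency value.

```python
import numpy as np

def time_to_first_spike(values, t_max=20.0, eps=1e-6):
    """Encode each input value as the latency of its first spike.

    Strong inputs spike early (small latency), weak inputs spike late,
    and near-zero inputs never spike at all (infinite latency). Each
    input contributes at most one number, which keeps the code sparse.
    """
    values = np.asarray(values, dtype=float)
    latencies = t_max * (1.0 - values)    # linear map: 1.0 -> 0, 0.0 -> t_max
    latencies[values <= eps] = np.inf     # silent neurons never fire
    return latencies

# Pixel intensities in [0, 1]: the brightest pixel fires first,
# the black pixel never fires at all.
print(time_to_first_spike([1.0, 0.5, 0.1, 0.0]))
```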

The team next encoded the algorithm onto a neuromorphic chip—the BrainScaleS-2, which roughly emulates simple “neurons” inside its structure, but runs over 1,000 times faster than our biological brains. The platform has over 500 physical artificial neurons, each capable of receiving 256 inputs through configurable synapses, where biological neurons swap, process, and store information.

The setup is a hybrid. “Learning” is achieved on a chip that implements the time-dependent algorithm. However, any updates to the neural circuit—that is, how strongly one neuron connects to another—are computed on an external workstation, something dubbed “in-the-loop training.”
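This division of labor can be pictured with a toy simulation in Python. Everything below is a stand-in: chip_forward_pass is an ordinary function pretending to be the neuromorphic chip, and the host's finite-difference update is far cruder than the actual learning rule, but the loop captures the in-the-loop idea of fast inference on the chip and weight updates on the workstation.

```python
import numpy as np

rng = np.random.default_rng(0)

def chip_forward_pass(weights, inputs):
    """Stand-in for the spiking forward pass on the chip: a stronger
    weighted input produces an earlier (smaller) first-spike time."""
    drive = weights @ inputs
    return 1.0 / (1.0 + np.maximum(drive, 0.0))   # toy latencies in (0, 1]

def host_gradient(latencies, target, weights, inputs, delta=1e-3):
    """Stand-in for the workstation's job: estimate how each synaptic
    weight should change to push output latencies toward the target."""
    grad = np.zeros_like(weights)
    base_loss = np.sum((latencies - target) ** 2)
    for idx in np.ndindex(weights.shape):          # crude finite differences
        w = weights.copy()
        w[idx] += delta
        loss = np.sum((chip_forward_pass(w, inputs) - target) ** 2)
        grad[idx] = (loss - base_loss) / delta
    return grad

# In-the-loop training: the "chip" produces spike times, the "host" updates weights.
weights = rng.normal(size=(2, 3))
inputs = np.array([0.8, 0.2, 0.5])
target = np.array([0.3, 0.9])                      # desired first-spike latencies
for _ in range(200):
    latencies = chip_forward_pass(weights, inputs)
    weights -= 0.5 * host_gradient(latencies, target, weights, inputs)
print(chip_forward_pass(weights, inputs))          # should approach the target
```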

In a first test, the algorithm was challenged with the “Yin-Yang” task, which requires the algorithm to parse different areas in the traditional Eastern symbol. The algorithm excelled, with an average of 95 percent accuracy.

The team next challenged the setup with a classic deep learning task—MNIST, a dataset of handwritten numbers that revolutionized computer vision. The algorithm excelled again, with nearly 97 percent accuracy. Even more impressive, the BrainScaleS-2 system took less than one second to classify 10,000 test samples, with extremely low relative energy consumption.

Putting these results into context, the team next compared BrainScaleS-2’s performance—armed with the new algorithm—to commercial and other neuromorphic platforms. Take SpiNNaker, a massive, parallel distributed architecture that also mimics neural computing and spikes. The new algorithm was over 100 times faster at image recognition while consuming just a fraction of the power SpiNNaker consumes. Similar results were seen with TrueNorth, IBM’s pioneering neuromorphic chip.

What Next?
The brain’s two most valuable computing features—energy efficiency and parallel processing—are now heavily inspiring the next generation of computer chips. The goal? Build machines that are as flexible and adaptive as our own brains while using just a fraction of the energy required for our current silicon-based chips.

Yet compared to deep learning, which relies on artificial neural networks, biologically-plausible ones have languished. Part of this, explained Frenkel, is the difficulty of “updating” these circuits through learning. However, with BrainScaleS-2 and a touch of timing data, it’s now possible.

At the same time, having an “external” arbitrator for updating synaptic connections gives the whole system some time to breathe. Neuromorphic hardware, similar to the messiness of our brain computation, is littered with mismatches and errors. With the chip and an external arbitrator, the whole system can learn to adapt to this variability, and eventually compensate for—or even exploit—its quirks for faster and more flexible learning.

For Frenkel, the algorithm’s power lies in its sparseness. The brain, she explained, is powered by sparse codes that “could explain the fast reaction times…such as for visual processing.” Rather than activating entire brain regions, only a few neural networks are needed—like whizzing down empty highways instead of getting stuck in rush hour traffic.

Despite its power, the algorithm still has hiccups. It struggles with interpreting static data, although it excels with time sequences—for example, speech or biosignals. But to Frenkel, it’s the start of a new framework: important information can be encoded with a flexible but simple metric, and generalized to enrich brain- and AI-based data processing with a fraction of the traditional energy costs.

“[It]…may be an important stepping-stone for spiking neuromorphic hardware to finally demonstrate a competitive advantage over conventional neural network approaches,” she said.

Image Credit: Classifying data points in the Yin-Yang dataset, by Göltz and Kriener et al. (Heidelberg / Bern)

Posted in Human Robots

#439736 Spot’s 3.0 Update Adds Increased ...

While Boston Dynamics' Atlas humanoid spends its time learning how to dance and do parkour, the company's Spot quadruped is quietly getting much better at doing useful, valuable tasks in commercial environments. Solving tasks like dynamic path planning and door manipulation in a way that's robust enough that someone can buy your robot and not regret it is, I would argue, just as difficult (if not more difficult) as getting a robot to do a backflip.
With a short blog post today, Boston Dynamics is announcing Spot Release 3.0, representing more than a year of software improvements over Release 2.0, which we covered back in May of 2020. The highlights of Release 3.0 include autonomous dynamic replanning, cloud integration, some clever camera tricks, and a new ability to handle push-bar doors. Earlier today, we spoke with Spot chief engineer Zachary Jackowski to learn more about what Spot's been up to.
Here are some highlights from Spot's Release 3.0 software upgrade today, lifted from this blog post, which has the entire list:
Mission planning: Save time by selecting which inspection actions you want Spot to perform, and it will take the shortest path to collect your data.
Dynamic replanning: Don't miss inspections due to changes on site. Spot will replan around blocked paths to make sure you get the data you need.
Repeatable image capture: Capture the same image from the same angle every time with scene-based camera alignment for the Spot CAM+ pan-tilt-zoom (PTZ) camera.
Cloud-compatible: Connect Spot to AWS, Azure, IBM Maximo, and other systems with existing or easy-to-build integrations.
Manipulation: Remotely operate the Spot Arm with ease through rear Spot CAM integration and split-screen view. Arm improvements also include added functionality for push-bar doors, revamped grasping UX, and an updated SDK.
Sounds: Keep trained bystanders aware of Spot with configurable warning sounds.

The focus here is not just making Spot more autonomous, but making Spot more autonomous in some very specific ways that are targeted towards commercial usefulness. It's tempting to look at this stuff and say that it doesn't represent any massive new capabilities. But remember that Spot is a product, and its job is to make money, which is an enormous challenge for any robot, much less a relatively expensive quadruped.

For more details on the new release and a general update about Spot, we spoke with Zachary Jackowski, Spot Chief Engineer at Boston Dynamics.
IEEE Spectrum: So what's new with Spot 3.0, and why is this release important?
Zachary Jackowski: We've been focusing heavily on flexible autonomy that really works for our industrial customers. The thing that may not quite come through in the blog post is how iceberg-y making autonomy work on real customer sites is. Our blog post has some bullet points about “dynamic replanning” in maybe 20 words, but in doing that, we actually reengineered almost our entire autonomy system based on the failure modes of what we were seeing on our customer sites.
The biggest thing that changed is that previously, our robot mission paradigm was a linear mission where you would take the robot around your site and record a path. Obviously, that was a little bit fragile on complex sites—if you're on a construction site and someone puts a pallet in your path, you can't follow that path anymore. So we ended up engineering our autonomy system to do building scale mapping, which is a big part of why we're calling it Spot 3.0. This is state-of-the-art from an academic perspective, except that it's volume shipping in a real product, which to me represents a little bit of our insanity.
And one super cool technical nugget in this release is that we have a powerful pan/tilt/zoom camera on the robot that our customers use to take images of gauges and panels. We've added scene-based alignment and also computer vision model-based alignment so that the robot can capture the images from the same perspective, every time, perfectly framed. In pictures of the robot, you can see that there's this crash cage around the camera, but the image alignment stuff actually does inverse kinematics to command the robot's body to shift a little bit if the cage is occluding anything important in the frame.
When Spot is dynamically replanning around obstacles, how much flexibility does it have in where it goes?
There are a bunch of tricks to figuring out when to give up on a blocked path, and then it's very simple, run-of-the-mill route planning within an existing map. One of the really big design points of our system, which we spent a lot of time talking about during the design phase, is that it turns out in these high value facilities people really value predictability. So it's not desired that the robot starts wandering around trying to find its way somewhere.
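Boston Dynamics hasn't published its planner, but the kind of simple route planning within an existing map that Jackowski describes can be pictured with a toy sketch: a breadth-first search over a small occupancy grid, rerun from the robot's current position whenever a cell on the recorded route turns out to be blocked. This is an illustration only, not Spot's code.

```python
from collections import deque

def replan(grid, start, goal):
    """Shortest path on a known occupancy grid (0 = free, 1 = blocked).

    A toy stand-in for replanning within an existing map: if an obstacle
    appears, mark its cell as blocked and call this again from the
    robot's current position.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:        # walk the route back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = (r, c)
                frontier.append((nr, nc))
    return None                            # no route left: give up on this goal

grid = [[0, 0, 0],
        [0, 1, 0],                         # a pallet appears mid-route
        [0, 0, 0]]
print(replan(grid, (0, 0), (2, 2)))        # detours around the blocked cell
```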
Do you think that over time, your customers will begin to trust the robot with more autonomy and less predictability?
I think so, but there's a lot of trust to be built there. Our customers have to see the robot do the job well for a significant amount of time, and that will come.
Can you talk a bit more about trying to do state-of-the-art work on a robot that's being deployed commercially?
I can tell you about how big the gap is. When we talk about features like this, our engineers are like, “oh yeah I could read this paper and pull this algorithm and code something up over a weekend and see it work.” It's easy to get a feature to work once, make a really cool GIF, and post it to the engineering group chat room. But if you take a look at what it takes to actually ship a feature at product-level, we're talking person-years to have it reach the level of quality that someone is accustomed to buying an iPhone and just having it work perfectly all the time. You have to write all the code to product standards, implement all your tests, and get everything right there, and then you also have to visit a lot of customers, because the thing that's different about mobile robotics as a product is that it's all about how the system responds to environments that it hasn't seen before.
The blog post calls Spot 3.0 “A Sensing Solution for the Real World.” What is the real world for Spot at this point, and how will that change going forward?
For Spot, 'real world' means power plants, electrical switch yards, chemical plants, breweries, automotive plants, and other living and breathing industrial facilities that have never considered the fact that a robot might one day be walking around in them. It's indoors, it's outdoors, in the dark and in direct sunlight. When you're talking about the geometric aspect of sites, that complexity we're getting pretty comfortable with.
I think the frontiers of complexity for us are things like, how do you work in a busy place with lots of untrained humans moving through it—that's an area where we're investing a lot, but it's going to be a big hill to climb and it'll take a little while before we're really comfortable in environments like that. Functional safety, certified person detectors, all that good stuff, that's a really juicy unsolved field.
Spot can now open push-bar doors, which seems like an easier problem than doors with handles, which Spot learned to open a while ago. Why'd you start with door handles first?
Push-bar doors are an easier problem! But being engineers, we did the harder problem first, because we wanted to get it done.

Posted in Human Robots

#439721 New Study Finds a Single Neuron Is a ...

Comparing brains to computers is a long and dearly held analogy in both neuroscience and computer science.

It’s not hard to see why.

Our brains can perform many of the tasks we want computers to handle with an easy, mysterious grace. So, it goes, understanding the inner workings of our minds can help us build better computers; and those computers can help us better understand our own minds. Also, if brains are like computers, knowing how much computation it takes them to do what they do can help us predict when machines will match minds.

Indeed, there’s already a productive flow of knowledge between the fields.

Deep learning, a powerful form of artificial intelligence, for example, is loosely modeled on the brain’s vast, layered networks of neurons.

You can think of each “node” in a deep neural network as an artificial neuron. Like neurons, nodes receive signals from other nodes connected to them and perform mathematical operations to transform input into output.

Depending on the signals a node receives, it may opt to send its own signal to all the nodes in its network. In this way, signals cascade through layer upon layer of nodes, progressively tuning and sharpening the algorithm.
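In code, such a node is tiny: a weighted sum of incoming signals pushed through a nonlinearity, with layers of nodes chained together. The sketch below is a generic illustration of that idea in Python, not the specific architecture used in the study.

```python
import numpy as np

def node(inputs, weights, bias):
    """One artificial 'neuron': weight the incoming signals, sum them up,
    and squash the result into an output signal for the next layer."""
    return np.tanh(np.dot(weights, inputs) + bias)

def tiny_network(x, layers):
    """Cascade the signal through layer upon layer of nodes."""
    for weights, biases in layers:
        # Each row of `weights` belongs to one node; apply every node
        # to the signals coming up from the layer below.
        x = np.array([node(x, w_row, b) for w_row, b in zip(weights, biases)])
    return x

rng = np.random.default_rng(1)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),  # 3 inputs -> 4 hidden nodes
          (rng.normal(size=(1, 4)), np.zeros(1))]  # 4 hidden -> 1 output node
print(tiny_network(np.array([0.2, -0.7, 1.0]), layers))
```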

The brain works like this too. But the keyword above is loosely.

Scientists know biological neurons are more complex than the artificial neurons employed in deep learning algorithms, but it’s an open question just how much more complex.

In a fascinating paper published recently in the journal Neuron, a team of researchers from the Hebrew University of Jerusalem tried to get us a little closer to an answer. While they expected the results would show biological neurons are more complex, they were surprised at just how much more complex they actually are.

In the study, the team found it took a five- to eight-layer neural network, or nearly 1,000 artificial neurons, to mimic the behavior of a single biological neuron from the brain’s cortex.

Though the researchers caution the results are an upper bound for complexity—as opposed to an exact measurement of it—they also believe their findings might help scientists further zero in on what exactly makes biological neurons so complex. And that knowledge, perhaps, can help engineers design even more capable neural networks and AI.

“[The result] forms a bridge from biological neurons to artificial neurons,” Andreas Tolias, a computational neuroscientist at Baylor College of Medicine, told Quanta last week.

Amazing Brains
Neurons are the cells that make up our brains. There are many different types of neurons, but generally, they have three parts: spindly, branching structures called dendrites, a cell body, and a root-like axon.

On one end, dendrites connect to a network of other neurons at junctures called synapses. At the other end, the axon forms synapses with a different population of neurons. Each cell receives electrochemical signals through its dendrites, filters those signals, and then selectively passes along its own signals (or spikes).

To computationally compare biological and artificial neurons, the team asked: How big of an artificial neural network would it take to simulate the behavior of a single biological neuron?

First, they built a model of a biological neuron (in this case, a pyramidal neuron from a rat’s cortex). The model used some 10,000 differential equations to simulate how and when the neuron would translate a series of input signals into a spike of its own.

They then fed inputs into their simulated neuron, recorded the outputs, and trained deep learning algorithms on all the data. Their goal? Find the algorithm that could most accurately approximate the model.

(Video: A model of a pyramidal neuron (left) receives signals through its dendritic branches. In this case, the signals provoke three spikes.)

They increased the number of layers in the algorithm until it was 99 percent accurate at predicting the simulated neuron’s output given a set of inputs. The sweet spot was at least five layers but no more than eight, or around 1,000 artificial neurons per biological neuron. The deep learning algorithm was much simpler than the original model—but still quite complex.
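The procedure amounts to a loop like the hypothetical sketch below, which assumes scikit-learn is available and substitutes a simple synthetic rule for the recordings from the biophysical neuron model (the study's released code and data are far more involved): train progressively deeper networks on the input/output pairs and stop at the first depth that predicts the simulated neuron's output accurately enough.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Stand-in data: random "input spike patterns" plus a simple rule deciding
# whether the simulated neuron fires. (The real study trained on outputs
# of a detailed biophysical model, not on this synthetic rule.)
rng = np.random.default_rng(42)
X = rng.random((4000, 20))
y = ((X[:, :10].sum(axis=1) - X[:, 10:].sum(axis=1)) > 0.5).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Grow the network one hidden layer at a time until it predicts the
# "neuron's" output accurately enough, mirroring the study's search.
for depth in range(1, 9):
    net = MLPClassifier(hidden_layer_sizes=(64,) * depth,
                        max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    accuracy = net.score(X_test, y_test)
    print(f"{depth} hidden layer(s): test accuracy {accuracy:.3f}")
    if accuracy >= 0.99:
        break
```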

From where does this complexity arise?

As it turns out, it’s mostly due to a type of chemical receptor in dendrites—the NMDA ion channel—and the branching of dendrites in space. “Take away one of those things, and a neuron turns [into] a simple device,” lead author David Beniaguev tweeted in 2019, describing an earlier version of the work published as a preprint.

Indeed, after removing these features, the team found they could match the simplified biological model with but a single-layer deep learning algorithm.

A Moving Benchmark
It’s tempting to extrapolate the team’s results to estimate the computational complexity of the whole brain. But we’re nowhere near such a measure.

For one, it’s possible the team didn’t find the most efficient algorithm.

It’s common for the developer community to rapidly improve upon the first version of an advanced deep learning algorithm. Given the intensive iteration in the study, the team is confident in the results, but they also released the model, data, and algorithm to the scientific community to see if anyone could do better.

Also, the model neuron is from a rat’s brain, as opposed to a human’s, and it’s only one type of brain cell. Further, the study is comparing a model to a model—there is, as of yet, no way to make a direct comparison to a physical neuron in the brain. It’s entirely possible the real thing is more, not less, complex.

Still, the team believes their work can push neuroscience and AI forward.

In the former case, the study is further evidence dendrites are complicated critters worthy of more attention. In the latter, it may lead to radical new algorithmic architectures.

Idan Segev, a coauthor on the paper, suggests engineers should try replacing the simple artificial neurons in today’s algorithms with a mini five-layer network simulating a biological neuron. “We call for the replacement of the deep network technology to make it closer to how the brain works by replacing each simple unit in the deep network today with a unit that represents a neuron, which is already—on its own—deep,” Segev said.

Whether so much added complexity would pay off is uncertain. Experts debate how much of the brain’s detail algorithms need to capture to achieve similar or better results.

But it’s hard to argue with millions of years of evolutionary experimentation. So far, following the brain’s blueprint has been a rewarding strategy. And if this work is any indication, future neural networks may well dwarf today’s in size and complexity.

Image Credit: NICHD/S. Jeong

Posted in Human Robots

#439614 Watch Boston Dynamics’ Atlas Robot ...

At the end of 2020, Boston Dynamics released a spirits-lifting, can’t-watch-without-smiling video of its robots doing a coordinated dance routine. Atlas, Spot, and Handle had some pretty sweet moves, though if we’re being honest, Atlas was the one (or, in this case, two) that really stole the show.

A new video released yesterday has the bipedal humanoid robot stealing the show again, albeit in a way that probably won’t make you giggle as much. Two Atlases navigate a parkour course, complete with leaping onto and between boxes of different heights, shimmying down a balance beam, and throwing synchronized back flips.

The big question that may be on many viewers’ minds is whether the robots are truly navigating the course on their own—making real-time decisions about how high to jump or how far to extend a foot—or if they’re pre-programmed to execute each motion according to a detailed map of the course.

As engineers explain in a second new video and accompanying blog post, it’s a combination of both.

Atlas is equipped with RGB cameras and depth sensors to give it “vision,” providing input to its control system, which is run on three computers. In the dance video linked above and previous videos of Atlas doing parkour, the robot wasn’t sensing its environment and adapting its movements accordingly (though it did make in-the-moment adjustments to keep its balance).

But in the new routine, the Boston Dynamics team says, they created template behaviors for Atlas. The robot can match these templates to its environment, adapting its motions based on what’s in front of it. The engineers had to find a balance between “long-term” goals for the robot—i.e., making it through the whole course—and “short-term” goals, like adjusting its footsteps and posture to keep from keeling over. The motions were refined through both computer simulations and robot testing.

“Our control team has to create algorithms that can reason about the physical complexity of these machines to create a broad set of high energy and coordinated behavior,” said Atlas team lead Scott Kuindersma. “It’s really about creating behaviors at the limits of the robot’s capabilities and getting them all to work together in a flexible control system.”

The limits of the robot’s capabilities were frequently reached while practicing the new parkour course, and getting a flawless recording took many tries. The explainer video includes bloopers of Atlas falling flat on its face—not to mention on its head, stomach, and back, as it under-rotates for flips, crosses its feet while running, and miscalculates the distance it needs to cover on jumps.

I know it’s a robot, but you can’t help feeling sort of bad for it, especially when its feet miss the platform (by a lot) on a jump and its whole upper body comes crashing onto said platform, while its legs dangle toward the ground, in a move that would severely injure a human (and makes you wonder if Atlas survived with its hardware intact).

Ultimately, Atlas is a research and development tool, not a product the company plans to sell commercially (which is probably good, because despite how cool it looks doing parkour, I for one would be more than a little wary if I came across this human-shaped hunk of electronics wandering around in public).

“I find it hard to imagine a world 20 years from now where there aren’t capable mobile robots that move with grace, reliability, and work alongside humans to enrich our lives,” Kuindersma said. “But we’re still in the early days of creating that future.”

Image Credit: Boston Dynamics

Posted in Human Robots