Tag Archives: new

#439736 Spot’s 3.0 Update Adds Increased ...

While Boston Dynamics' Atlas humanoid spends its time learning how to dance and do parkour, the company's Spot quadruped is quietly getting much better at doing useful, valuable tasks in commercial environments. Solving tasks like dynamic path planning and door manipulation in a way that's robust enough that someone can buy your robot and not regret it is, I would argue, just as difficult as (if not more difficult than) getting a robot to do a backflip.
With a short blog post today, Boston Dynamics is announcing Spot Release 3.0, representing more than a year of software improvements over Release 2.0, which we covered back in May of 2020. The highlights of Release 3.0 include autonomous dynamic replanning, cloud integration, some clever camera tricks, and a new ability to handle push-bar doors. Earlier today, we spoke with Zachary Jackowski, Spot chief engineer at Boston Dynamics, to learn more about what Spot's been up to.
Here are some highlights from Spot's Release 3.0 software upgrade today, lifted from this blog post which has the entire list:
Mission planning: Save time by selecting which inspection actions you want Spot to perform, and it will take the shortest path to collect your data.
Dynamic replanning: Don't miss inspections due to changes on site. Spot will replan around blocked paths to make sure you get the data you need.
Repeatable image capture: Capture the same image from the same angle every time with scene-based camera alignment for the Spot CAM+ pan-tilt-zoom (PTZ) camera.
Cloud-compatible: Connect Spot to AWS, Azure, IBM Maximo, and other systems with existing or easy-to-build integrations.
Manipulation: Remotely operate the Spot Arm with ease through rear Spot CAM integration and split-screen view. Arm improvements also include added functionality for push-bar doors, revamped grasping UX, and updated SDK.
Sounds: Keep trained bystanders aware of Spot with configurable warning sounds.

The focus here is not just making Spot more autonomous, but making Spot more autonomous in some very specific ways that are targeted towards commercial usefulness. It's tempting to look at this stuff and say that it doesn't represent any massive new capabilities. But remember that Spot is a product, and its job is to make money, which is an enormous challenge for any robot, much less a relatively expensive quadruped.

For more details on the new release and a general update about Spot, we spoke with Zachary Jackowski, Spot Chief Engineer at Boston Dynamics.
IEEE Spectrum: So what's new with Spot 3.0, and why is this release important?
Zachary Jackowski: We've been focusing heavily on flexible autonomy that really works for our industrial customers. The thing that may not quite come through in the blog post is how iceberg-y making autonomy work on real customer sites is. Our blog post has some bullet points about “dynamic replanning” in maybe 20 words, but in doing that, we actually reengineered almost our entire autonomy system based on the failure modes of what we were seeing on our customer sites.
The biggest thing that changed is that previously, our robot mission paradigm was a linear mission where you would take the robot around your site and record a path. Obviously, that was a little bit fragile on complex sites—if you're on a construction site and someone puts a pallet in your path, you can't follow that path anymore. So we ended up engineering our autonomy system to do building scale mapping, which is a big part of why we're calling it Spot 3.0. This is state-of-the-art from an academic perspective, except that it's volume shipping in a real product, which to me represents a little bit of our insanity.
And one super cool technical nugget in this release is that we have a powerful pan/tilt/zoom camera on the robot that our customers use to take images of gauges and panels. We've added scene-based alignment and also computer vision model-based alignment so that the robot can capture the images from the same perspective, every time, perfectly framed. In pictures of the robot, you can see that there's this crash cage around the camera, but the image alignment stuff actually does inverse kinematics to command the robot's body to shift a little bit if the cage is occluding anything important in the frame.
When Spot is dynamically replanning around obstacles, how much flexibility does it have in where it goes?
There are a bunch of tricks to figuring out when to give up on a blocked path, and then it's very simple, run-of-the-mill route planning within an existing map. One of the really big design points of our system, which we spent a lot of time talking about during the design phase, is that it turns out that in these high-value facilities people really value predictability. So it's not desired that the robot starts wandering around trying to find its way somewhere.
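
For a sense of what that "very simple, run-of-the-mill route planning within an existing map" can look like, here is a toy Python sketch of A* search over an occupancy grid. To be clear, this is a generic textbook planner with a made-up map, not Boston Dynamics' implementation; it just illustrates the idea of rerouting within a known map when part of a recorded path turns out to be blocked.

```python
import heapq

def astar(grid, start, goal):
    """Shortest path on an occupancy grid (0 = free, 1 = blocked) using A*
    with 4-connectivity and a Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), start)]
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost[node] + 1
                if new_cost < cost.get(nxt, float("inf")):
                    cost[nxt] = new_cost
                    came_from[nxt] = node
                    heapq.heappush(frontier, (new_cost + h(nxt), nxt))
    return None  # no route exists: better to report a blocked path than wander

# A recorded route through the middle is blocked by a "pallet" at (1, 1);
# replanning within the same map finds a detour around it.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 2)))
```

Note that the planner either finds a route inside the map it already has or gives up, which matches the predictability point above: it never goes exploring.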
Do you think that over time, your customers will begin to trust the robot with more autonomy and less predictability?
I think so, but there's a lot of trust to be built there. Our customers have to see the robot do the job well for a significant amount of time, and that will come.
Can you talk a bit more about trying to do state-of-the-art work on a robot that's being deployed commercially?
I can tell you about how big the gap is. When we talk about features like this, our engineers are like, “Oh yeah, I could read this paper and pull this algorithm and code something up over a weekend and see it work.” It's easy to get a feature to work once, make a really cool GIF, and post it to the engineering group chat room. But if you take a look at what it takes to actually ship a feature at product level, we're talking person-years to have it reach the level of quality that someone is accustomed to from buying an iPhone and just having it work perfectly all the time. You have to write all the code to product standards, implement all your tests, and get everything right there, and then you also have to visit a lot of customers, because the thing that's different about mobile robotics as a product is that it's all about how the system responds to environments that it hasn't seen before.
The blog post calls Spot 3.0 “A Sensing Solution for the Real World.” What is the real world for Spot at this point, and how will that change going forward?
For Spot, 'real world' means power plants, electrical switch yards, chemical plants, breweries, automotive plants, and other living and breathing industrial facilities that have never considered the fact that a robot might one day be walking around in them. It's indoors, it's outdoors, in the dark and in direct sunlight. When you're talking about the geometric aspect of sites, that complexity we're getting pretty comfortable with.
I think the frontiers of complexity for us are things like, how do you work in a busy place with lots of untrained humans moving through it—that's an area where we're investing a lot, but it's going to be a big hill to climb and it'll take a little while before we're really comfortable in environments like that. Functional safety, certified person detectors, all that good stuff, that's a really juicy unsolved field.
Spot can now open push-bar doors, which seems like an easier problem than doors with handles, which Spot learned to open a while ago. Why'd you start with door handles first?
Push-bar doors is an easier problem! But being engineers, we did the harder problem first, because we wanted to get it done. Continue reading

Posted in Human Robots

#439721 New Study Finds a Single Neuron Is a ...

Comparing brains to computers is a long and dearly held analogy in both neuroscience and computer science.

It’s not hard to see why.

Our brains can perform many of the tasks we want computers to handle with an easy, mysterious grace. So, it goes, understanding the inner workings of our minds can help us build better computers; and those computers can help us better understand our own minds. Also, if brains are like computers, knowing how much computation it takes them to do what they do can help us predict when machines will match minds.

Indeed, there’s already a productive flow of knowledge between the fields.

Deep learning, a powerful form of artificial intelligence, for example, is loosely modeled on the brain’s vast, layered networks of neurons.

You can think of each “node” in a deep neural network as an artificial neuron. Like neurons, nodes receive signals from other nodes connected to them and perform mathematical operations to transform input into output.

Depending on the signals a node receives, it may opt to send its own signal to all the nodes in its network. In this way, signals cascade through layer upon layer of nodes, progressively tuning and sharpening the algorithm.
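
To make that concrete, here is a minimal sketch of what a single node computes, written in Python. The specific weights, bias, and ReLU nonlinearity below are illustrative choices, not details from any particular network.

```python
import numpy as np

def node(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of incoming signals plus a bias,
    passed through a simple nonlinearity (here, a ReLU)."""
    return max(0.0, float(np.dot(weights, inputs) + bias))

# Signals arriving from three upstream nodes (made-up numbers).
incoming = np.array([0.2, -1.0, 0.5])
weights = np.array([0.7, 0.1, 0.4])   # learned connection strengths
bias = 0.05

print(node(incoming, weights, bias))  # ~0.29: the signal passed to the next layer
```

Stack thousands of these nodes into layers and let a training algorithm adjust the weights, and you have a deep neural network.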

The brain works like this too. But the keyword above is loosely.

Scientists know biological neurons are more complex than the artificial neurons employed in deep learning algorithms, but it’s an open question just how much more complex.

In a fascinating paper published recently in the journal Neuron, a team of researchers from the Hebrew University of Jerusalem tried to get us a little closer to an answer. While they expected the results would show biological neurons are more complex, they were surprised at just how much more complex they actually are.

In the study, the team found it took a five- to eight-layer neural network, or nearly 1,000 artificial neurons, to mimic the behavior of a single biological neuron from the brain’s cortex.

Though the researchers caution the results are an upper bound for complexity—as opposed to an exact measurement of it—they also believe their findings might help scientists further zero in on what exactly makes biological neurons so complex. And that knowledge, perhaps, can help engineers design even more capable neural networks and AI.

“[The result] forms a bridge from biological neurons to artificial neurons,” Andreas Tolias, a computational neuroscientist at Baylor College of Medicine, told Quanta last week.

Amazing Brains
Neurons are the cells that make up our brains. There are many different types of neurons, but generally, they have three parts: spindly, branching structures called dendrites, a cell body, and a root-like axon.

On one end, dendrites connect to a network of other neurons at junctures called synapses. At the other end, the axon forms synapses with a different population of neurons. Each cell receives electrochemical signals through its dendrites, filters those signals, and then selectively passes along its own signals (or spikes).

To computationally compare biological and artificial neurons, the team asked: How big of an artificial neural network would it take to simulate the behavior of a single biological neuron?

First, they built a model of a biological neuron (in this case, a pyramidal neuron from a rat’s cortex). The model used some 10,000 differential equations to simulate how and when the neuron would translate a series of input signals into a spike of its own.

They then fed inputs into their simulated neuron, recorded the outputs, and trained deep learning algorithms on all the data. Their goal? Find the algorithm that could most accurately approximate the model.

(Video: A model of a pyramidal neuron (left) receives signals through its dendritic branches. In this case, the signals provoke three spikes.)

They increased the number of layers in the algorithm until it was 99 percent accurate at predicting the simulated neuron’s output given a set of inputs. The sweet spot was at least five layers but no more than eight, or around 1,000 artificial neurons per biological neuron. The deep learning algorithm was much simpler than the original model—but still quite complex.
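
In spirit, the search for that sweet spot looks something like the sketch below: train networks of increasing depth and stop once the fit clears an accuracy threshold. This is a heavily simplified stand-in; the placeholder data, the plain fully connected network, and the accuracy metric here are illustrative, whereas the actual study trained deep networks on input/output traces generated by the detailed biophysical simulation.

```python
import torch
import torch.nn as nn

def make_mlp(n_layers, width, n_inputs):
    """A fully connected network with n_layers hidden layers (a stand-in for
    the networks used in the actual study)."""
    layers, prev = [], n_inputs
    for _ in range(n_layers):
        layers += [nn.Linear(prev, width), nn.ReLU()]
        prev = width
    layers.append(nn.Linear(prev, 1))  # predicted spike logit
    return nn.Sequential(*layers)

def train_and_score(model, X, y, epochs=200):
    """Train on (input pattern, did-the-neuron-spike) pairs; return accuracy."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X).squeeze(-1), y)
        loss.backward()
        opt.step()
    with torch.no_grad():
        pred = (torch.sigmoid(model(X).squeeze(-1)) > 0.5).float()
        return (pred == y).float().mean().item()

# Placeholder data; in the study, X and y came from the biophysical model:
# X = simulated synaptic input patterns, y = whether the model neuron spiked.
X = torch.randn(2048, 128)
y = (torch.rand(2048) > 0.8).float()

for depth in range(1, 9):  # add layers until the fit is good enough
    acc = train_and_score(make_mlp(depth, 128, X.shape[1]), X, y)
    print(f"{depth} hidden layers -> accuracy {acc:.3f}")
    if acc >= 0.99:
        break
```

In the paper's setup, the analogous search landed in the five-to-eight-layer range before the fit crossed that threshold.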

From where does this complexity arise?

As it turns out, it’s mostly due to a type of chemical receptor in dendrites—the NMDA ion channel—and the branching of dendrites in space. “Take away one of those things, and a neuron turns [into] a simple device,” lead author David Beniaguev tweeted in 2019, describing an earlier version of the work published as a preprint.

Indeed, after removing these features, the team found they could match the simplified biological model with but a single-layer deep learning algorithm.

A Moving Benchmark
It’s tempting to extrapolate the team’s results to estimate the computational complexity of the whole brain. But we’re nowhere near such a measure.

For one, it’s possible the team didn’t find the most efficient algorithm.

It’s common for the developer community to rapidly improve upon the first version of an advanced deep learning algorithm. Given the intensive iteration in the study, the team is confident in the results, but they also released the model, data, and algorithm to the scientific community to see if anyone could do better.

Also, the model neuron is from a rat’s brain, as opposed to a human’s, and it’s only one type of brain cell. Further, the study is comparing a model to a model—there is, as of yet, no way to make a direct comparison to a physical neuron in the brain. It’s entirely possible the real thing is more, not less, complex.

Still, the team believes their work can push neuroscience and AI forward.

In the former case, the study is further evidence dendrites are complicated critters worthy of more attention. In the latter, it may lead to radical new algorithmic architectures.

Idan Segev, a coauthor on the paper, suggests engineers should try replacing the simple artificial neurons in today’s algorithms with a mini five-layer network simulating a biological neuron. “We call for the replacement of the deep network technology to make it closer to how the brain works by replacing each simple unit in the deep network today with a unit that represents a neuron, which is already—on its own—deep,” Segev said.

Whether so much added complexity would pay off is uncertain. Experts debate how much of the brain’s detail algorithms need to capture to achieve similar or better results.

But it’s hard to argue with millions of years of evolutionary experimentation. So far, following the brain’s blueprint has been a rewarding strategy. And if this work is any indication, future neural networks may well dwarf today’s in size and complexity.

Image Credit: NICHD/S. Jeong Continue reading

Posted in Human Robots

#439614 Watch Boston Dynamics’ Atlas Robot ...

At the end of 2020, Boston Dynamics released a spirits-lifting, can’t-watch-without-smiling video of its robots doing a coordinated dance routine. Atlas, Spot, and Handle had some pretty sweet moves, though if we’re being honest, Atlas was the one (or, in this case, two) that really stole the show.

A new video released yesterday has the bipedal humanoid robot stealing the show again, albeit in a way that probably won’t make you giggle as much. Two Atlases navigate a parkour course, complete with leaping onto and between boxes of different heights, shimmying down a balance beam, and throwing synchronized back flips.

The big question that may be on many viewers’ minds is whether the robots are truly navigating the course on their own—making real-time decisions about how high to jump or how far to extend a foot—or if they’re pre-programmed to execute each motion according to a detailed map of the course.

As engineers explain in a second new video and accompanying blog post, it’s a combination of both.

Atlas is equipped with RGB cameras and depth sensors to give it “vision,” providing input to its control system, which is run on three computers. In the dance video linked above and previous videos of Atlas doing parkour, the robot wasn’t sensing its environment and adapting its movements accordingly (though it did make in-the-moment adjustments to keep its balance).

But in the new routine, the Boston Dynamics team says, they created template behaviors for Atlas. The robot can match these templates to its environment, adapting its motions based on what’s in front of it. The engineers had to find a balance between “long-term” goals for the robot—i.e., making it through the whole course—and “short-term” goals, like adjusting its footsteps and posture to keep from keeling over. The motions were refined through both computer simulations and robot testing.

“Our control team has to create algorithms that can reason about the physical complexity of these machines to create a broad set of high energy and coordinated behavior,” said Atlas team lead Scott Kuindersma. “It’s really about creating behaviors at the limits of the robot’s capabilities and getting them all to work together in a flexible control system.”

The limits of the robot’s capabilities were frequently reached while practicing the new parkour course, and getting a flawless recording took many tries. The explainer video includes bloopers of Atlas falling flat on its face—not to mention on its head, stomach, and back, as it under-rotates for flips, crosses its feet while running, and miscalculates the distance it needs to cover on jumps.

I know it’s a robot, but you can’t help feeling sort of bad for it, especially when its feet miss the platform (by a lot) on a jump and its whole upper body comes crashing onto said platform, while its legs dangle toward the ground, in a move that would severely injure a human (and makes you wonder if Atlas survived with its hardware intact).

Ultimately, Atlas is a research and development tool, not a product the company plans to sell commercially (which is probably good, because despite how cool it looks doing parkour, I for one would be more than a little wary if I came across this human-shaped hunk of electronics wandering around in public).

“I find it hard to imagine a world 20 years from now where there aren’t capable mobile robots that move with grace, reliability, and work alongside humans to enrich our lives,” Kuindersma said. “But we’re still in the early days of creating that future.”

Image Credit: Boston Dynamics Continue reading

Posted in Human Robots

#439537 Tencent’s New Wheeled Robot Flicks Its ...

Ollie (I think its name is Ollie) is “a novel wheel-legged robot” from Tencent Robotics. The word “novel” is used quite appropriately here, since Ollie sports some unusual planar parallel legs atop driven wheels. It’s also got a multifunctional actuated tail that not only enables some impressive acrobatics, but also allows the robot to transition from biped-ish to triped-ish to stand up extra tall and support a coffee-carrying manipulator.

It’s a little disappointing that the tail only appears to be engaged for specific motions—it doesn’t seem like it’s generally part of the robot’s balancing or motion planning, which feels like a missed opportunity. But this robot is relatively new, and its development is progressing rapidly, which we know because an earlier version of the hardware and software was presented at ICRA 2021 a couple weeks back. Although, to be honest with you, there isn’t a lot of info on the new one besides the above video, so we’ll be learning what we can from the ICRA paper.

The paper is mostly about developing a nonlinear balancing controller for the robot, and they’ve done a bang-up job with it, with the robot remaining steady even while executing sequences of dynamic motions. The jumping and one-legged motions are particularly cool to watch. And, well, that’s pretty much it for the ICRA paper, which (unfortunately) barely addresses the tail at all, except to say that currently the control system assumes that the tail is fixed. We’re guessing that this is just a symptom of the ICRA paper submission deadline being back in October, and that a lot of progress has been made since then.
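
To give a flavor of what a balance controller on a robot like this has to accomplish, here is a toy Python sketch that stabilizes the classic linearized wheeled-inverted-pendulum model with an LQR. The model numbers are made up, and this is a generic textbook approach rather than the nonlinear controller developed in the Tencent paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy linearized wheeled-inverted-pendulum model (illustrative numbers only).
# State x = [body pitch, pitch rate, wheel position, wheel velocity];
# input u = wheel torque.
A = np.array([[0.0,  1.0, 0.0, 0.0],
              [25.0, 0.0, 0.0, 0.0],
              [0.0,  0.0, 0.0, 1.0],
              [-2.5, 0.0, 0.0, 0.0]])
B = np.array([[0.0], [-8.0], [0.0], [1.6]])

# LQR: penalize pitch error most heavily, then solve the Riccati equation.
Q = np.diag([100.0, 1.0, 10.0, 1.0])
R = np.array([[0.1]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P  # state-feedback gain, u = -K @ x

# Simulate recovery from a small push (initial pitch error) with Euler steps.
x, dt = np.array([0.1, 0.0, 0.0, 0.0]), 0.002
for _ in range(2000):
    u = -K @ x
    x = x + dt * (A @ x + B @ u)
print("pitch after 4 seconds:", float(x[0]))  # decays back toward zero
```

A controller for the real robot has to cope with the full nonlinear dynamics, actuator limits, and contact, which is exactly the territory the ICRA paper covers.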

Seeing the arm and sensor package at the end of the video is a nod to some sort of practical application, and I suppose that the robot's ability to stand up to reach over that counter is some justification for using it for a delivery task. But it seems like it's got so much more to offer, you know? Many far more boring robot platforms could be delivering coffee, so let's find something for this robot to do that involves more backflips.

Balance Control of a Novel Wheel-legged Robot: Design and Experiments, by Shuai Wang, Leilei Cui, Jingfan Zhang, Jie Lai, Dongsheng Zhang, Ke Chen, Yu Zheng, Zhengyou Zhang, and Zhong-Ping Jiang from Tencent Robotics X, was presented at ICRA 2021. Continue reading

Posted in Human Robots

#439437 Google parent launches new ...

Google's parent Alphabet unveiled a new “moonshot” project to develop software for robotics which could be used in a wide range of industries. Continue reading

Posted in Human Robots