#439004 Video Friday: A Walking, Wheeling ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30 to June 5, 2021 – Xi'an, China
Let us know if you have suggestions for next week, and enjoy today's videos.
This is a pretty terrible video, I think because it was harvested from WeChat, which is where Tencent decided to premiere its new quadruped robot.
Not bad, right? Its name is Max, it has a top speed of 25 kph thanks to its elbow wheels, and we know almost nothing else about it.
[ Tencent ]
Thanks Fan!
Can't bring yourself to mask-shame others? Build a robot to do it for you instead!
[ GitHub ]
Researchers at Georgia Tech have recently developed an entirely soft, long-stroke electromagnetic actuator using liquid metal, compliant magnetic composites, and silicone polymers. The robot was inspired by the motion of the Xenia coral, which pulses its polyps to circulate oxygen under water to promote photosynthesis.
In this work, power applied to soft coils generates an electromagnetic field, which causes the internal compliant magnet to move upward. This forces the squishy silicone linkages to convert the linear motion into rotational motion, sweeping an arc length of up to 42 mm at bandwidths of up to 30 Hz. This highly deformable, fast, and long-stroke actuator topology can be utilized for a variety of applications, from biomimicry to fully soft grasping to wearables.
[ Paper ] via [ Georgia Tech ]
Thanks Noah!
Jueying Mini Lite may look a little like a Boston Dynamics Spot, but according to DeepRobotics, its coloring is based on Bruce Lee's Kung Fu clothes.
[ DeepRobotics ]
Henrique writes, “I would like to share with you the supplementary video of our recent work accepted to ICRA 2021. The video features a quadruped and a full-size humanoid performing dynamic jumps, after a brief animated intro of what direct transcription is. Me and my colleagues have put a lot of hard work into this, and I am very proud of the results.”
Making big robots jump is definitely something to be proud of!
[ SLMC Edinburgh ]
Thanks Henrique!
The finals of the Powered Exoskeleton Race for Cybathlon Global 2020.
[ Cybathlon ]
Thanks Fan!
It's nice that every once in a while, the world can get excited about science and robots.
[ NASA ]
Playing the Imperial March over footage of an army of black quadrupeds may not be sending quite the right message.
[ Unitree ]
Kod*lab PhD students Abriana Stewart-Height, Diego Caporale, and Wei-Hsi Chen, along with former Kod*lab student Garrett Wenger, were on set in the summer of 2019 to operate RHex for the filming of Lapsis, a first feature film by director and screenwriter Noah Hutton.
[ Kod*lab ]
In class 2.008, Design and Manufacturing II, mechanical engineering students at MIT learn the fundamental principles of manufacturing at scale by designing and producing their own yo-yos. Instructors stress the importance of sustainable practices in the global supply chain.
[ MIT ]
A short history of robotics, from ABB.
[ ABB ]
In this paper, we propose a whole-body planning framework that unifies dynamic locomotion and manipulation tasks by formulating a single multi-contact optimal control problem. This is demonstrated in a set of real hardware experiments done in free-motion, such as base or end-effector pose tracking, and while pushing/pulling a heavy resistive door. Robustness against model mismatches and external disturbances is also verified during these test cases.
[ Paper ]
This paper presents PANTHER, a real-time perception-aware (PA) trajectory planner in dynamic environments. PANTHER plans trajectories that avoid dynamic obstacles while also keeping them in the sensor field of view (FOV) and minimizing the blur to aid in object tracking.
Extensive hardware experiments in unknown dynamic environments with all the computation running onboard are presented, with velocities of up to 5.8 m/s, and with relative velocities (with respect to the obstacles) of up to 6.3 m/s. The only sensors used are an IMU, a forward-facing depth camera, and a downward-facing monocular camera.
[ MIT ]
With our SaaS solution, we enable robots to inspect industrial facilities. One of the robots our software supports is the Boston Dynamics Spot. In this video, we demonstrate how autonomous industrial inspection with Spot is performed using our teach-and-repeat solution.
[ Energy Robotics ]
In this week’s episode of Tech on Deck, learn about our first technology demonstration sent to Station: The Robotic Refueling Mission. This tech demo helped us develop the tools and techniques needed to robotically refuel a satellite in space, an important capability for space exploration.
[ NASA ]
At Covariant we are committed to research and development that will bring AI Robotics to the real world. As a part of this, we believe it's important to educate individuals on how these exciting innovations will make a positive, fundamental and global impact for years to come. In this presentation, our co-founder Pieter Abbeel breaks down his thoughts on the current state of play for AI robotics.
[ Covariant ]
How do you fly a helicopter on Mars? It takes Ingenuity and Perseverance. During this technology demo, Farah Alibay and Tim Canham will get into the details of how these craft will manage this incredible task.
[ NASA ]
Complex real-world environments continue to present significant challenges for fielding robotic teams, which often face expansive spatial scales, difficult and dynamic terrain, degraded environmental conditions, and severe communication constraints. Breakthrough technologies call for integrated solutions across autonomy, perception, networking, mobility, and human teaming thrusts. As such, the DARPA OFFSET program and the DARPA Subterranean Challenge seek novel approaches and new insights for discovering and demonstrating these innovative technologies, to help close critical gaps for robotic operations in complex urban and underground environments.
[ UPenn ]
#438801 This AI Thrashes the Hardest Atari Games ...
Learning from rewards seems like the simplest thing. I make coffee, I sip coffee, I’m happy. My brain registers “brewing coffee” as an action that leads to a reward.
That’s the guiding insight behind deep reinforcement learning, a family of algorithms that famously smashed most of Atari’s gaming catalog and triumphed over humans in strategy games like Go. Here, an AI “agent” explores the game, trying out different actions and registering ones that let it win.
Except it’s not that simple. “Brewing coffee” isn’t one action; it’s a series of actions spanning several minutes, where you’re only rewarded at the very end. By just tasting the final product, how do you learn to fine-tune grind coarseness, water to coffee ratio, brewing temperature, and a gazillion other factors that result in the reward—tasty, perk-me-up coffee?
That’s the problem with “sparse rewards,” which are ironically very abundant in our messy, complex world. We don’t immediately get feedback from our actions—no video-game-style dings or points for just grinding coffee beans—yet somehow we’re able to learn and perform an entire sequence of arm and hand movements while half-asleep.
This week, researchers from Uber AI and OpenAI teamed up to bestow this talent on AI.
The trick is to encourage AI agents to “return” to a previous step, one that’s promising for a winning solution. The agent then keeps a record of that state, reloads it, and branches out again to intentionally explore other solutions that may have been left behind on the first go-around. Video gamers are likely familiar with this idea: live, die, reload a saved point, try something else, repeat for a perfect run-through.
The new family of algorithms, appropriately dubbed “Go-Explore,” smashed notoriously difficult Atari games like Montezuma’s Revenge that were previously unsolvable by its AI predecessors, while trouncing human performance along the way.
It’s not just games and digital fun. In a computer simulation of a robotic arm, the team found that installing Go-Explore as its “brain” allowed it to solve a challenging series of actions when given very sparse rewards. Because the overarching idea is so simple, the authors say, it can be adapted and expanded to other real-world problems, such as drug design or language learning.
Growing Pains
How do you reward an algorithm?
Rewards are very hard to craft, the authors say. Take the problem of asking a robot to go to a fridge. A sparse reward will only give the robot “happy points” if it reaches its destination, which is similar to asking a baby, with no concept of space and danger, to crawl through a potential minefield of toys and other obstacles towards a fridge.
“In practice, reinforcement learning works very well, if you have very rich feedback, if you can tell, ‘hey, this move is good, that move is bad, this move is good, that move is bad,’” said study author Joost Huizinga. However, in situations that offer very little feedback, “rewards can intentionally lead to a dead end. Randomly exploring the space just doesn’t cut it.”
The other extreme is providing denser rewards. In the same robot-to-fridge example, you could frequently reward the bot as it goes along its journey, essentially helping “map out” the exact recipe to success. But that’s troubling as well. Holding an AI’s hand too tightly could result in an extremely rigid robot that ignores new additions to its path—a pet, for example—leading to dangerous situations. It’s a deceptive AI solution that seems effective in a simple environment, but crashes in the real world.
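To make the difference concrete, here is a minimal sketch of the two reward styles for the robot-to-fridge scenario. It is a toy example of my own, not something from the study: the gridworld, the goal cell, and the reward values are all made-up assumptions.

```python
import numpy as np

FRIDGE = np.array([9, 9])  # hypothetical goal cell in a 10 x 10 gridworld

def sparse_reward(position):
    """Reward only for reaching the goal; every other step gives no signal."""
    return 1.0 if np.array_equal(position, FRIDGE) else 0.0

def dense_reward(prev_position, position):
    """Reward every step that gets the robot closer to the goal.

    Much easier to learn from, but it bakes in one particular notion of
    progress, so an agent trained on it can turn brittle when something
    unexpected (a pet, say) appears along the "correct" route.
    """
    return (np.linalg.norm(prev_position - FRIDGE)
            - np.linalg.norm(position - FRIDGE))
```

The sparse version gives the agent almost nothing to learn from; the dense version is easy to learn from but quietly encodes a single recipe for success, which is exactly the rigidity problem described above.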
What we need are AI agents that can tackle both problems, the team said.
Intelligent Exploration
The key is to return to the past.
For AI, motivation usually comes from “exploring new or unusual situations,” said Huizinga. It’s efficient, but comes with significant downsides. For one, the AI agent could prematurely stop going back to promising areas because it thinks it had already found a good solution. For another, it could simply forget a previous decision point because of the mechanics of how it probes the next step in a problem.
For a complex task, the end result is an AI that randomly stumbles around towards a solution while ignoring potentially better ones.
“Detaching from a place that was previously visited after collecting a reward doesn’t work in difficult games, because you might leave out important clues,” Huizinga explained.
Go-Explore solves these problems with a simple principle: first return, then explore. In essence, the algorithm saves different approaches it previously tried and loads promising save points—ones more likely to lead to victory—to explore further.
Digging a bit deeper, the AI stores screen caps from a game. It then analyzes the saved points and groups images that look alike into a single promising “save point” to return to. Rinse and repeat. The AI tries to maximize its final score in the game, and updates its save points whenever it achieves a new record score. Because Atari doesn’t normally allow people to revisit any arbitrary point in a game, the team used an emulator, a kind of software that mimics the Atari system but adds custom abilities such as saving and reloading at any time.
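Put as code, that loop might look something like the sketch below. This is not the authors' implementation: the emulator hooks (`save_state`, `restore_state`, `sample_action`), the purely random exploration, and the frame-downscaling trick used to group similar screens into one cell are simplified assumptions on my part.

```python
import random

def cell_key(frame):
    # Group screens that look alike: downscale and coarsely quantize the
    # frame so near-identical states map to the same key. `frame` is
    # assumed to be a 2D array of pixel values.
    return tuple(frame[y][x] // 32
                 for y in range(0, len(frame), 20)
                 for x in range(0, len(frame[0]), 20))

def go_explore(env, iterations=10_000, branch_length=100):
    """First return, then explore, keeping an archive of promising save points."""
    frame = env.reset()
    archive = {cell_key(frame): {"state": env.save_state(), "score": 0.0}}

    for _ in range(iterations):
        # 1. Return: pick an archived cell and reload it through the emulator.
        cell = random.choice(list(archive.values()))
        env.restore_state(cell["state"])
        score = cell["score"]

        # 2. Explore: branch out from that point with (here, random) actions.
        for _ in range(branch_length):
            frame, reward, done = env.step(env.sample_action())
            score += reward
            key = cell_key(frame)
            # Archive newly discovered cells, and update a known cell whenever
            # it is reached with a better score than before.
            if key not in archive or score > archive[key]["score"]:
                archive[key] = {"state": env.save_state(), "score": score}
            if done:
                break

    return max(c["score"] for c in archive.values())
```

(The real algorithm is choosier about which cells to return to, favoring ones that have been visited rarely or look promising, and its exploration is not purely random, but the save-reload-branch skeleton is the same.)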
The trick worked like magic. When pitted against 55 Atari games in the OpenAI gym, now commonly used to benchmark reinforcement learning algorithms, Go-Explore knocked out state-of-the-art AI competitors over 85 percent of the time.
It also crushed games previously unbeatable by AI. Montezuma’s Revenge, for example, requires you to move Pedro, the blocky protagonist, through a labyrinth of underground temples while evading obstacles such as traps and enemies and gathering jewels. One bad jump could derail the path to the next level. It’s a perfect example of sparse rewards: you need a series of good actions to get to the reward—advancing onward.
Go-Explore didn’t just beat all levels of the game, a first for AI. It also scored higher than any previous record for reinforcement learning algorithms at lower levels while toppling the human world record.
Outside a gaming environment, Go-Explore was also able to boost the performance of a simulated robot arm. While it’s easy for humans to follow high-level guidance like “put the cup on this shelf in a cupboard,” robots often need explicit training—from grasping the cup to recognizing a cupboard, moving towards it while avoiding obstacles, and learning motions to not smash the cup when putting it down.
Here, similar to the real world, the digital robot arm was only rewarded when it placed the cup onto the correct shelf, out of four possible shelves. When pitted against another algorithm, Go-Explore quickly figured out the movements needed to place the cup, while its competitor struggled with even reliably picking the cup up.
Combining Forces
By itself, the “first return, then explore” idea behind Go-Explore is already powerful. The team thinks it can do even better.
One idea is to change the mechanics of save points. Rather than reloading saved states through the emulator, it’s possible to train a neural network to do the same, without needing to relaunch a saved state. It’s a potential way to make the AI even smarter, the team said, because it can “learn” to overcome one obstacle once, instead of solving the same problem again and again. The downside? It’s much more computationally intensive.
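In that variant, the “return” step would be performed by a learned, goal-conditioned policy rather than by an emulator reload. Here is a minimal sketch of where such a network would slot into the loop from the earlier sketch (reusing its `cell_key` helper); the `policy.act(frame, goal)` interface is hypothetical, purely for illustration.

```python
def return_to(env, policy, target_key, max_steps=500):
    """Walk back to an archived cell by acting, not by reloading a save.

    `policy.act(frame, goal)` stands in for a goal-conditioned neural network
    trained to reproduce trajectories that previously reached `goal`. The
    return can fail, but no emulator save/restore is needed.
    """
    frame = env.reset()
    for _ in range(max_steps):
        if cell_key(frame) == target_key:
            return frame, True   # successfully "returned" to the save point
        frame, _, done = env.step(policy.act(frame, target_key))
        if done:
            break
    return frame, False
```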
Another idea is to combine Go-Explore with an alternative form of learning, called “imitation learning.” Here, an AI observes human behavior and mimics it through a series of actions. Combined with Go-Explore, said study author Adrien Ecoffet, this could make more robust robots capable of handling all the complexity and messiness in the real world.
To the team, the implications go far beyond Go-Explore. The concept of “first return, then explore” seems to be especially powerful, suggesting “it may be a fundamental feature of learning in general.” The team said, “Harnessing these insights…may be essential…to create generally intelligent agents.”
Image Credit: Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O. Stanley, and Jeff Clune
#438080 Boston Dynamics’ Spot Robot Is Now ...
Boston Dynamics has been working on an arm for its Spot quadruped for at least five years now. There have been plenty of teasers along the way, including this 45-second clip from early 2018 of Spot using its arm to open a door, which at 85 million views seems to be Boston Dynamics’ most popular video ever by a huge margin. Obviously, there’s a substantial amount of interest in turning Spot from a highly dynamic but mostly passive sensor platform into a mobile manipulator that can interact with its environment.
As anyone who’s done mobile manipulation will tell you, actually building an arm is just the first step—the really tricky part is getting that arm to do exactly what you want it to do. In particular, Spot’s arm needs to be able to interact with the world with some amount of autonomy in order to be commercially useful, because you can’t expect a human (remote or otherwise) to spend all their time positioning individual joints or whatever to pick something up. So the real question about this arm is whether Boston Dynamics has managed to get it to a point where it’s autonomous enough that users with relatively little robotics experience will be able to get it to do useful tasks without driving themselves nuts.
Today, Boston Dynamics is announcing commercial availability of the Spot arm, along with some improved software called Scout plus a self-charging dock that’ll give the robot even more independence. And to figure out exactly what Spot’s new arm can do, we spoke with Zachary Jackowski, Spot Chief Engineer at Boston Dynamics.
Although Boston Dynamics’ focus has been on dynamic mobility and legged robots, the company has been working on manipulation for a very long time. We first saw an arm prototype on an early iteration of Spot in 2016, where it demonstrated some impressive functionality, including loading a dishwasher and fetching a beer in a way that only resulted in a minor catastrophe. But we’re guessing that Spot’s arm can trace its history back to BigDog’s crazy powerful hydraulic face-arm, which was causing mayhem with cinder blocks back in 2013:
Spot’s arm is not quite that powerful (it has to drag cinder blocks along the ground rather than fling them into space), but you can certainly see the resemblance. Here’s the video that Boston Dynamics posted yesterday to introduce Spot’s new arm:
A couple of things jumped out from this video right away. First, Spot is doing whole body manipulation with its arm, as opposed to just acting as a four-legged base that brings the arm where it needs to go. Planning looks to be very tightly integrated, such that if you ask the robot to manipulate an object, its arm, legs, and torso all work together to optimize that manipulation. Also, when Spot flips that electrical switch, you see the robot successfully grasp the switch, and then reposition its body in a way that looks like it provides better leverage for the flip, which is a neat trick. It looks like it may be able to use the strength of its legs to augment the strength of its arm, as when it’s dragging the cinder block around, which is surely an homage to BigDog. The digging of a hole is particularly impressive. But again, the real question is how much of this is autonomous or semi-autonomous in a way that will be commercially useful?
Before we get to our interview with Spot Chief Engineer Zack Jackowski, it’s worth watching one more video that Boston Dynamics shared with us:
This is notable because Spot is opening a door that’s not ADA compliant, and the robot is doing it with a simple two-finger gripper. Most robots you see interacting with doors rely on ADA compliant hardware, meaning (among other things) a handle that can be pushed rather than a knob that has to be twisted, because it’s much more challenging for a robot to grasp and twist a smooth round door knob than it is to just kinda bash down on a handle. That capability, combined with Spot being able to pass through a spring-loaded door, potentially opens up a much wider array of human environments to the robot, and that’s where we started our conversation with Jackowski.
IEEE Spectrum: At what point did you decide that for Spot’s arm to be useful, it had to be able to handle round door knobs?
Zachary Jackowski: We're like a lot of roboticists, where someone in a meeting about manipulation would say “it's time for the round doorknob” and people would start groaning a little bit. But the reality is that, in order to make a robot useful, you have to engage with the environments that users have. Spot’s arm uses a very simple gripper—it’s a one degree of freedom gripper, but a ton of thought has gone into all of the fine geometric contours of it such that it can grab that ADA compliant lever handle, and it’ll also do an enclosing grasp around a round door knob. The major point of a robot like Spot is to engage with the environment you have, and so you can’t cut out stuff like round door knobs.
We're thrilled to be launching the arm and getting it out with users and to have them start telling us what doors it works really well on, and what they're having trouble with. And we're going to be working on rapidly improving all this stuff. We went through a few campaigns of like, “this isn’t ready until we can open every single door at Boston Dynamics!” But every single door at Boston Dynamics and at our test lab is a small fraction of all the doors in the world. So we're prepared to learn a lot this year.
When we see Spot open a door, or when it does those other manipulation behaviors in the launch video, how much of that is autonomous, how much is scripted, and to what extent is there a human in the loop?
All of the scenes where the robot does a pick, like the snow scene or the laundry scene, that is actually an almost fully integrated autonomous behavior that has a bit of a script wrapped around it. We trained a detector for an object, and the robot is identifying that object in the environment, picking it, and putting it in the bin all autonomously. The scripted part of that is telling the robot to perform a series of picks.
One of the things that we’re excited about, and that roboticists have been excited about going back probably all the way to the DRC, is semi-autonomous manipulation. And so we have modes built into the interface where if you see an object that you want the robot to grab, all you have to do is tap that object on the screen, and the robot will walk up to it, use the depth camera in its gripper to capture a depth map, and plan a grasp on its own in real time. That’s all built-in, too.
The jump rope—robots don’t just go and jump rope on their own. We scripted an arm motion to move the rope, and wrote a script using our API to coordinate all three robots. Drawing “Boston Dynamics” in chalk in our parking lot was scripted also. One of our engineers wrote a really cool G-code interpreter that vectorizes graphics so that Spot can draw them.
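(Boston Dynamics hasn't published that interpreter, but as a rough illustration of the general idea, here's a toy G-code reader, with made-up waypoint output, that turns basic G0/G1 moves into the kind of pen-up/pen-down path a chalk-drawing robot could follow. Real G-code has many more commands, arcs, feed rates, units, and so on, which this sketch ignores.)

```python
def parse_gcode(lines):
    """Toy G-code reader: G0 = travel (pen up), G1 = draw (pen down).

    Returns a list of (x, y, pen_down) waypoints for a drawing robot to track.
    Only straight moves with explicit X/Y coordinates are handled here.
    """
    x, y = 0.0, 0.0
    waypoints = []
    for line in lines:
        words = line.strip().upper().split()
        if not words or words[0] not in ("G0", "G1"):
            continue  # skip commands this toy parser doesn't understand
        pen_down = (words[0] == "G1")
        for w in words[1:]:
            if w.startswith("X"):
                x = float(w[1:])
            elif w.startswith("Y"):
                y = float(w[1:])
        waypoints.append((x, y, pen_down))
    return waypoints

# Example: a short two-segment chalk stroke.
print(parse_gcode(["G0 X0 Y0", "G1 X50 Y0", "G1 X50 Y20"]))
```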
So for an end user, if you wanted Spot to autonomously flip some switches for you, you’d just have to train Spot on your switches, and then Spot could autonomously perform the task?
There are a couple of ways that task could break down depending on how you’re interfacing with the robot. If you’re a tablet user, you’d probably just identify the switch yourself on the tablet’s screen, and the robot will figure out the grasp, and grasp it. Then you’ll enter a constrained manipulation mode on the tablet, and the robot will be able to actuate the switch. But the robot will take care of the complicated controls aspects, like figuring out how hard it has to pull, the center of rotation of the switch, and so on.
The video of Spot digging was pretty cool—how did that work?
That’s mostly a scripted behavior. There are some really interesting control systems topics in there, like how you’d actually do the right kinds of force control while you insert the trowel into the dirt, and how to maintain robot stability while you do it. The higher level task of how to make a good hole in the dirt—that’s scripted. But the part of the problem that’s actually digging, you need the right control system to actually do that, or you’ll dig your trowel into the ground and flip your robot over.
The last time we saw Boston Dynamics robots flipping switches and turning valves I think might have been during the DRC in 2015, when they had expert robot operators with control over every degree of freedom. How are things different now with Spot, and will non-experts in the commercial space really be able to get the robot to do useful tasks?
A lot of the things, like “pick the stuff up in the room” or “turn that switch,” can all be done by a lightly trained operator using just the tablet interface. If you want to actually command all of Spot’s arm degrees of freedom, you can do that—not through the tablet, but the API does expose all of it. That’s actually a notable difference from the base robot; we’ve never opened up the part of the API that lets you command individual leg degrees of freedom, because we don’t think it’s productive for someone to do that. The arm is a little bit different. There are a lot of smart people working on arm motion planning algorithms, and maybe you want to plan your arm trajectory in a super precise way and then do a DRC-style interface where you click to approve it. You can do all that through the API if you want, but fundamentally, it’s also user friendly. It follows our general API design philosophy of giving you the highest level pieces of the toolbox that will enable you to solve a complex problem that we haven’t thought of.
Looking back on it now, it’s really cool to see, after so many years, robots do the stuff that Gill Pratt was excited about kicking off with the DRC. And now it’s just a thing you can buy.
Is Spot’s arm safe?
You should follow the same safety rules that you’d follow when working with Spot normally, and that’s that you shouldn’t get within two meters of the robot when it’s powered on. Spot is not a cobot. You shouldn’t hug it. Fundamentally, the places where the robot is the most valuable are places where people don’t want to be, or shouldn’t be.
We’ve seen how people reacted to earlier videos of Spot using its arm—can you help us set some reasonable expectations for what this means for Spot?
You know, it gets right back to the normal assumptions about our robots that people make that aren’t quite reality. All of this manipulation work we’re doing— the robot’s really acting as a tool. Even if it’s an autonomous behavior, it’s a tool. The robot is digging a hole because it’s got a set of instructions that say “apply this much force over this much distance here, here, and here.”
It’s not digging a hole and planting a tree because it loves trees, as much as I’d love to build a robot that works like that.
Photo: Boston Dynamics
There isn’t too much to say about the dock, except that it’s a requirement for making Spot long-term autonomous. The uncomfortable looking charging contacts that Spot impales itself on also include hardwired network connectivity, which is important because Spot often comes back home with a huge amount of data that all needs to be offloaded and processed. Docking and undocking are autonomous— as soon as the robot sees the fiducial markers on the dock, auto docking is enabled and it takes one click to settle the robot down.
During a brief remote demo, we also learned some other interesting things about Spot’s updated remote interface. It’s very latency tolerant, since you don’t have to drive the robot directly (although you can if you want to). Click a point on the camera view and Spot will move there autonomously while avoiding obstacles, meaning that even if you’re dealing with seconds of lag, the robot will continue making safe progress. This will be especially important if (when?) Spot starts exploring the Moon.
The remote interface also has an option to adjust how close Spot can get to obstacles, or to turn the obstacle avoidance off altogether. The latter functionality is useful if Spot sees something as an obstacle that really isn’t, like a curtain, while the former is useful if the robot is operating in an environment where it needs to give an especially wide berth to objects that could be dangerous to run into. “The robot’s not perfect—robots will never be perfect,” Jackowski reminds us, which is something we really (seriously) appreciate hearing from folks working on powerful, dynamic robots. “No matter how good the robot is, you should always de-risk as much as possible.”
Another part of that de-risking is having the user let Spot know when it’s about to go up or down some stairs by putting it into “Stair Mode” with a toggle switch in the remote interface. Stairs are still a challenge for Spot, and Stair Mode slows the robot down and encourages it to pitch its body more aggressively to get a better view of the stairs. You’re encouraged to use Stair Mode, and also encouraged to send Spot up and down stairs with its “head” pointing up the stairs both ways, but these are not requirements for stair navigation—if you want to, you can send Spot down stairs head first without putting it in Stair Mode. Jackowski says that eventually, Spot will detect stairways by itself even when not in Stair Mode and adjust itself accordingly, but for now, that de-risking is solidly in the hands of the user.
Spot’s sensor payload, which is what we were trying out for the demo, provided a great opportunity for us to hear Spot STOMP STOMP STOMPING all over the place, which was also an opportunity for us to ask Jackowski why they can’t make Spot a little quieter. “It’s advantageous for Spot to step a little bit hard for the same reason it’s advantageous for you to step a little bit hard if you’re walking around blindfolded—that reason is that it really lets you know where the ground is, particularly when you’re not sure what to expect.” He adds, “It’s all in the name of robustness— the robot might be a little louder, but it’s a little more sure of its footing.”
Boston Dynamics isn’t yet ready to disclose the price of an arm-equipped Spot, but if you’re a potential customer, now is the time to contact the Boston Dynamics sales team to ask them about it. As a reminder, the base model of Spot costs US $74,500, with extra sensing or compute adding a substantial premium on top of that.
There will be a livestream launch event taking place at 11am ET today, during which Boston Dynamics’ CEO Robert Playter, VP of Marketing Michael Perry, and other folks from Boston Dynamics will make presentations on this new stuff. It’ll be live at this link, or you can watch it below.
#437884 Hyundai Buys Boston Dynamics for Nearly ...
This morning just after 3 a.m. ET, Boston Dynamics sent out a media release confirming that Hyundai Motor Group has acquired a controlling interest in the company that values Boston Dynamics at US $1.1 billion:
Under the agreement, Hyundai Motor Group will hold an approximately 80 percent stake in Boston Dynamics and SoftBank, through one of its affiliates, will retain an approximately 20 percent stake in Boston Dynamics after the closing of the transaction.
The release is very long, but does have some interesting bits—we’ll go through them, and talk about what this might mean for both Boston Dynamics and Hyundai.
We’ve asked Boston Dynamics for comment, but they’ve been unusually quiet for the last few days (I wonder why!). So at this point just keep in mind that the only things we know for sure are the ones in the release. If (when?) we hear anything from either Boston Dynamics or Hyundai, we’ll update this post.
The first thing to be clear on is that the acquisition is split between Hyundai Motor Group’s affiliates, including Hyundai Motor, Hyundai Mobis, and Hyundai Glovis. Hyundai Motor makes cars, Hyundai Mobis makes car parts and seems to be doing some autonomous stuff as well, and Hyundai Glovis does logistics. There are many other groups that share the Hyundai name, but they’re separate entities, at least on paper. For example, there’s a Hyundai Robotics, but that’s part of Hyundai Heavy Industries, a different company than Hyundai Motor Group. But for this article, when we say “Hyundai,” we’re talking about Hyundai Motor Group.
What’s in it for Hyundai?
Let’s get into the press release, which is filled with press release-y terms like “synergies” and “working together”—you can view the whole thing here—but still has some parts that convey useful info.
By establishing a leading presence in the field of robotics, the acquisition will mark another major step for Hyundai Motor Group toward its strategic transformation into a Smart Mobility Solution Provider. To propel this transformation, Hyundai Motor Group has invested substantially in development of future technologies, including in fields such as autonomous driving technology, connectivity, eco-friendly vehicles, smart factories, advanced materials, artificial intelligence (AI), and robots.
If Hyundai wants to be a “Smart Mobility Solution Provider” with a focus on vehicles, it really seems like there’s a whole bunch of other ways they could have spent most of a billion dollars that would get them there quicker. Will Boston Dynamics’ expertise help them develop autonomous driving technology? Sure, I guess, but why not just buy an autonomous car startup instead? Boston Dynamics is more about “robots,” which happens to be dead last on the list above.
There was some speculation a couple of weeks ago that Hyundai was going to try and leverage Boston Dynamics to make a real version of this hybrid wheeled/legged concept car, so if that’s what Hyundai means by “Smart Mobility Solution Provider,” then I suppose the Boston Dynamics acquisition makes more sense. Still, I think that’s unlikely, because it’s just a concept car, after all.
In addition to “smart mobility,” which seems like a longer-term goal for Hyundai, the company also mentions other, more immediate benefits from the acquisition:
Advanced robotics offer opportunities for rapid growth with the potential to positively impact society in multiple ways. Boston Dynamics is the established leader in developing agile, mobile robots that have been successfully integrated into various business operations. The deal is also expected to allow Hyundai Motor Group and Boston Dynamics to leverage each other’s respective strengths in manufacturing, logistics, construction and automation.
“Successfully integrated” might be a little optimistic here. They’re talking about Spot, of course, but I think the best you could say at this point is that Spot is in the middle of some promising pilot projects. Whether it’ll be successfully integrated in the sense that it’ll have long-term commercial usefulness and value remains to be seen. I’m optimistic about this as well, but Spot is definitely not there yet.
What does probably hold a lot of value for Hyundai is getting Spot, Pick, and perhaps even Handle into that “manufacturing, logistics, construction” stuff. This is the bread and butter for robots right now, and Boston Dynamics has plenty of valuable technology to offer in those spaces.
Photo: Bob O’Connor
Boston Dynamics is selling Spot for $74,500, shipping included.
Betting on Spot and Pick
With Boston Dynamics founder Marc Raibert’s transition to Chairman of the company, the CEO position is now occupied by Robert Playter, the long-time VP of engineering and more recently COO at Boston Dynamics. Here’s his statement from the release:
“Boston Dynamics’ commercial business has grown rapidly as we’ve brought to market the first robot that can automate repetitive and dangerous tasks in workplaces designed for human-level mobility. We and Hyundai share a view of the transformational power of mobility and look forward to working together to accelerate our plans to enable the world with cutting edge automation, and to continue to solve the world’s hardest robotics challenges for our customers.”
Whether Spot is in fact “the first robot that can automate repetitive and dangerous tasks in workplaces designed for human-level mobility” on the market is perhaps something that could be argued against, although I won’t. Whether or not it was the first robot that can do these kinds of things, it’s definitely not the only robot that can do these kinds of things, and going forward, it’s going to be increasingly challenging for Spot to maintain its uniqueness.
For a long time, Boston Dynamics totally owned the quadruped space. Now, they’re one company among many—ANYbotics and Unitree are just two examples of other quadrupeds that are being successfully commercialized. Spot is certainly very capable and easy to use, and we shouldn’t underestimate the effort required to create a robot as complex as Spot that can be commercially used and supported. But it’s not clear how long they’ll maintain that advantage, with much more affordable platforms coming out of Asia, and other companies offering some unique new capabilities.
Photo: Boston Dynamics
Boston Dynamics’ Handle is an all-electric robot featuring a leg-wheel hybrid mobility system, a manipulator arm with a vacuum gripper, and a counterbalancing tail.
Boston Dynamics’ picking system, which stemmed from their 2019 acquisition of Kinema Systems, faces the same kinds of challenges—it’s very good, but it’s not totally unique.
Boston Dynamics produces highly capable mobile robots with advanced mobility, dexterity and intelligence, enabling automation in difficult, dangerous, or unstructured environments. The company launched sales of its first commercial robot, Spot, in June of 2020 and has since sold hundreds of robots in a variety of industries, such as power utilities, construction, manufacturing, oil and gas, and mining. Boston Dynamics plans to expand the Spot product line early next year with an enterprise version of the robot with greater levels of autonomy and remote inspection capabilities, and the release of a robotic arm, which will be a breakthrough in mobile manipulation.
Boston Dynamics is also entering the logistics automation market with the industry leading Pick, a computer vision-based depalletizing solution, and will introduce a mobile robot for warehouses in 2021.
Huh. We’ll be trying to figure out what “greater levels of autonomy” means, as well as whether the “mobile robot for warehouses” is Handle, or something more like an autonomous mobile robot (AMR) platform. I’d honestly be surprised if Handle were ready for work outside of Boston Dynamics next year, and it’s hard to imagine how Boston Dynamics could leverage their expertise into the AMR space with something that wouldn’t just seem… dull, compared to what they usually do. I hope to be surprised, though!
A new deep-pocketed benefactor
Hyundai Motor Group’s decision to acquire Boston Dynamics is based on its growth potential and wide range of capabilities.
“Wide range of capabilities” we get, but that other phrase, “growth potential,” has a heck of a lot wrapped up in it. At the moment, Boston Dynamics is nowhere near profitable, as far as we know. SoftBank acquired Boston Dynamics in 2017 for between one hundred and two hundred million, and over the last three years they’ve poured hundreds of millions more into Boston Dynamics.
Hyundai’s 80 percent stake just means that they’ll need to take over the majority of that support, and perhaps even increase it if Boston Dynamics’ growth is one of their primary goals. Hyundai can’t have a reasonable expectation that Boston Dynamics will be profitable any time soon; they’re selling Spots now, but it’s an open question whether Spot will manage to find a scalable niche in which it’ll be useful in the sort of volume that will make it a sustainable commercial success. And even if it does become a success, it seems unlikely that Spot by itself will make a significant dent in Boston Dynamics’ burn rate anytime soon. Boston Dynamics will have more products of course, but it’s going to take a while, and Hyundai will need to support them in the interim.
It’s become clear that to sustain itself, Boston Dynamics needs a benefactor with very deep pockets and a long time horizon. Initially, Boston Dynamics’ business model (or whatever you want to call it) was to do bespoke projects for defense-ish folks like DARPA, but from what we understand Boston Dynamics stopped that sort of work after Google acquired them back in 2013. From one perspective, that government funding did exactly what it was supposed to do, which was to fund the development of legged robots through low TRLs (technology readiness levels) to the point where they could start to explore commercialization.
The question now, though, is whether Hyundai is willing to let Boston Dynamics undertake the kinds of low-TRL, high-risk projects that led from BigDog to LS3 to Spot, and from PETMAN to DRC Atlas to the current Atlas. So will Hyundai be cool about the whole thing and be the sort of benefactor that’s willing to give Boston Dynamics the resources that they need to keep doing what they’re doing, without having to answer too many awkward questions about things like practicality and profitability? Hyundai can certainly afford to do this, but so could SoftBank, and Google—the question is whether Hyundai will want to, over the length of time that’s required for the development of the kind of ultra-sophisticated robotics hardware that Boston Dynamics specializes in.
To put it another way: Depending on whether Hyundai’s perspective on Boston Dynamics is as a company that does research or a company that makes robots that are useful and profitable, it may be difficult for Boston Dynamics to justify the cost to develop the next Atlas, when the current one still seems so far from commercialization.
Google, SoftBank, now Hyundai
Boston Dynamics possesses multiple key technologies for high-performance robots equipped with perception, navigation, and intelligence.
Hyundai Motor Group’s AI and Human Robot Interaction (HRI) expertise is highly synergistic with Boston Dynamics’s 3D vision, manipulation, and bipedal/quadruped expertise.
As it turns out, Hyundai Motors does have its own robotics lab, called Hyundai Motors Robotics Lab. Their website is not all that great, but here’s a video from last year:
I’m not entirely clear on what Hyundai means when they use the word “synergistic” when they talk about their robotics lab and Boston Dynamics, but it’s a little bit concerning. Usually, when a big company buys a little company that specializes in something that the big company is interested in, the idea is that the little company, to some extent, will be absorbed into the big company to give them some expertise in that area. Historically, however, Boston Dynamics has been highly resistant to this, maintaining its post-acquisition independence and appearing to be very reluctant to do anything besides what it wants to do, at whatever pace it wants to do it, and as by itself as possible.
From what we understand, Boston Dynamics didn’t integrate particularly well with Google’s robotics push in 2013, and we haven’t seen much evidence that SoftBank’s experience was much different. The most direct benefit to SoftBank (or at least the most visible one) was the addition of a fleet of Spot robots to the SoftBank Hawks baseball team cheerleading squad, along with a single (that we know about) choreographed gymnastics routine from an Atlas robot that was only shown on video.
And honestly, if you were a big manufacturing company with a bunch of money and you wanted to build up your own robotics program quickly, you’d probably have much better luck picking up some smaller robotics companies who were a bit less individualistic and would probably be more amenable to integration and would cost way less than a billion dollars-ish. And if integration is ultimately Hyundai’s goal, we’ll be very sad, because it’ll likely signal the end of Boston Dynamics doing the unfettered crazy stuff that we’ve grown to love.
Photo: Bob O’Connor
Possibly the most agile humanoid robot ever built, Atlas can run, climb, jump over obstacles, and even get up after a fall.
Boston Dynamics contemplates its future
The release ends by saying that the transaction is “subject to regulatory approvals and other customary closing conditions” and “is expected to close by June of 2021.” Again, you can read the whole thing here.
My initial reaction is that, despite the “synergies” described by Hyundai, it’s certainly not immediately obvious why the company wants to own 80 percent of Boston Dynamics. I’d also like a better understanding of how they arrived at the $1.1 billion valuation. I’m not saying this because I don’t believe in what Boston Dynamics is doing or in the inherent value of the company, because I absolutely do, albeit perhaps in a slightly less tangible sense. But when you start tossing around numbers like these, a big pile of expectations inevitably comes along with them. I hope that Boston Dynamics is unique enough that the kinds of rules that normally apply to robotics companies (or companies in general) can be set aside, at least somewhat, but I also worry that what made Boston Dynamics great was the explicit funding for the kinds of radical ideas that eventually resulted in robots like Atlas and Spot.
Can Hyundai continue giving Boston Dynamics the support and freedom that they need to keep doing the kinds of things that have made them legendary? I certainly hope so.