Tag Archives: shared
#439000 Can AI Stop People From Believing Fake ...
Machine learning algorithms provide a way to detect misinformation based on writing style and how articles are shared.
On topics as varied as climate change and the safety of vaccines, you will find a wave of misinformation all over social media. Trust in conventional news sources may seem lower than ever, but researchers are working on ways to give people more insight into whether they can believe what they read. They have been testing artificial intelligence (AI) tools that could help filter legitimate news. But how trustworthy is AI when it comes to stopping the spread of misinformation?
Researchers at the Rensselaer Polytechnic Institute (RPI) and the University of Tennessee collaborated to study the role of AI in helping people identify whether the news they’re reading is legitimate or not.
The research paper, “Tailoring Heuristics and Timing AI Interventions for Supporting News Veracity Assessments,” was published in Computers in Human Behavior Reports. It examined how the crowdsourcing marketplace Amazon Mechanical Turk (AMT) can be used to identify misinformation in fresh news using specific heuristics, which are rules of thumb for processing information and weighing its veracity. In other words, heuristics are essentially “shortcuts for decisions,” explained Dorit Nevo, an associate professor at RPI’s Lally School of Management and a lead author of the paper.
The study found that AI would be successful in flagging false stories only if the reader did not already have an opinion on the topic, Nevo said. When study subjects were set in their beliefs, confirmation bias kept them from reassessing their views.
Nevo said the first part of the project focused on whether subjects could detect misinformation around climate change and vaccines like the one designed to prevent chicken pox. Then, beginning in April 2020, her team studied how people responded to news related to COVID-19.
“With COVID-19, there was a significant difference,” Nevo said. They found that about 72 percent of respondents could identify misinformation about the coronavirus without heuristic clues, and roughly 93 percent could be convinced by the researchers’ heuristics that the content was fake.
Examples of heuristic clues include text with too many capital letters or the use of strong language, Nevo said.
There were two types of heuristics mentioned in the team’s paper: objective heuristics and source heuristics. They put a statement at the top of each article the subjects read; it instructed them to read the article and indicate whether they believed its central thesis.
“We either put a statement that says the AI finds this article reliable and accurate based on the objective heuristics, or we said the AI finds the source reliable,” Nevo said. “So that's the source heuristic.”
In her research on heuristics, Nevo found that people’s thinking takes one of two paths: The first path is to read the article, think about it and decide if they believe it; the second is to consider the source and what others think about the news, and decide whether to believe it before reading it.
Image: Dorit Nevo/RPI/IEEE Spectrum
Researchers at RPI studied the role of heuristics and AI in how people judge whether news is credible.
Another research paper, “Timing Matters When Correcting Fake News,” published in the Proceedings of the National Academy of Sciences by researchers at Harvard University, differed from the RPI work in its findings. While Nevo and her collaborators found that it’s easier to convince people that a story is fake news before they read it, the Harvard researchers, led by Nadia M. Brashier, a psychologist and neuroscientist, discovered that a fact-check can still convince people that a story is false even after they have read the headline. When study subjects saw true or false labels after reading a headline, the result was a 25.3 percent reduction in “subsequent misclassification” compared with headlines carrying no tag, Brashier and her team found.
In the end, fighting misinformation will require both computing and human efforts such as policy changes, says Benjamin D. Horne, an assistant professor of Information Sciences at the University of Tennessee and one of Nevo’s co-authors. He says the RPI-Tennessee work was inspired by AI tools he had designed earlier as a research assistant at RPI, where he developed machine learning (ML) algorithms that can detect partial truths as well as decontextualized truths and out-of-date information.
“Our algorithms are trained on source-level behavior, both when using the textual content of an article and the network of other news sources that it draws news from,” Horne said. “We have found that these two types of features together are quite good at distinguishing between sources labeled as reliable or unreliable by external news source ratings.”
The machine learning algorithms analyze the writing style and the content-sharing behavior of news outlets, Horne said. The researchers trained a supervised ML model called Random Forest, an ensemble classification algorithm that combines many decision trees.
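To make the general approach concrete, here is a minimal, hypothetical sketch of that kind of model: a Random Forest trained on a few toy writing-style features, such as the heavy capitalization Nevo mentions above. The feature set, corpus, and labels below are illustrative placeholders only; the published work uses far richer source-level features, including the network of outlets a source shares content with.

```python
# Minimal sketch only: a Random Forest on toy writing-style features.
# The real features, labels, and data in the published work are far richer.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def style_features(text: str) -> list:
    """Toy style features: share of capital letters, exclamation marks
    per character, and average word length."""
    words = text.split()
    n = max(len(text), 1)
    return [
        sum(c.isupper() for c in text) / n,
        text.count("!") / n,
        float(np.mean([len(w) for w in words])) if words else 0.0,
    ]

# Placeholder corpus: label 1 = unreliable source, 0 = reliable source
# (real labels would come from external news-source ratings).
articles = [("SHOCKING!!! You WON'T believe what they found...", 1),
            ("The committee published its annual report on Tuesday.", 0)] * 50

X = np.array([style_features(text) for text, _ in articles])
y = np.array([label for _, label in articles])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

A real system would also fold in the content-sharing features Horne describes and would need far more careful labeling and evaluation than this toy split.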
AI for Detecting Fake News
So, what’s the potential for AI to be successful in detecting misinformation?
“The tools we have developed, and other tools developed in this area, have fairly high accuracy in lab settings,” says Horne. “For example, our most recent technical work showed around 83% accuracy in predicting when the source of a news article is reliable or unreliable.”
Despite the effectiveness of algorithms, old-fashioned fact-checking by journalists will still be required to combat fake news. AI could filter the information for fact-checkers to verify, according to Horne.
“AI tools are great at dealing with high quantities of information at fast speeds but lack the nuanced analysis that a journalist or fact-checker can provide,” Horne said. “I see a future where the two work together.” Continue reading →
#438769 Will Robots Make Good Friends? ...
In the 2012 film Robot and Frank, the protagonist, a retired cat burglar named Frank, is suffering the early symptoms of dementia. Concerned and guilty, his son buys him a “home robot” that can talk, do household chores like cooking and cleaning, and remind Frank to take his medicine. It’s a robot the likes of which we’re getting closer to building in the real world.
The film follows Frank, who is initially appalled by the idea of living with a robot, as he gradually begins to see the robot as both functionally useful and socially companionable. The film ends with a clear bond between man and machine, such that Frank is protective of the robot when the pair of them run into trouble.
This is, of course, a fictional story, but it challenges us to explore different kinds of human-to-robot bonds. My recent research on human-robot relationships examines this topic in detail, looking beyond sex robots and robot love affairs to explore that most profound and meaningful of relationships: friendship.
My colleague and I identified some potential risks, like the abandonment of human friends for robotic ones, but we also found several scenarios where robotic companionship can constructively augment people’s lives, leading to friendships that are directly comparable to human-to-human relationships.
Philosophy of Friendship
The robotics philosopher John Danaher sets a very high bar for what friendship means. His starting point is the “true” friendship first described by the Greek philosopher Aristotle, which saw an ideal friendship as premised on mutual good will, admiration, and shared values. In these terms, friendship is about a partnership of equals.
Building a robot that can satisfy Aristotle’s criteria is a substantial technical challenge and is some considerable way off, as Danaher himself admits. Robots that may seem to be getting close, such as Hanson Robotics’ Sophia, base their behavior on a library of pre-prepared responses: a humanoid chatbot, rather than a conversational equal. Anyone who’s had a testing back-and-forth with Alexa or Siri will know AI still has some way to go in this regard.
Aristotle also talked about other forms of “imperfect” friendship, such as “utilitarian” and “pleasure” friendships, which are considered inferior to true friendship because they don’t require symmetrical bonding and are often to one party’s unequal benefit. This form of friendship sets a relatively low bar that some robots, like “sexbots” and robotic pets, clearly already meet.
Artificial Amigos
For some, relating to robots is just a natural extension of relating to other things in our world, like people, pets, and possessions. Psychologists have even observed how people respond naturally and socially towards media artefacts like computers and televisions. Humanoid robots, you’d have thought, are more personable than your home PC.
However, the field of “robot ethics” is far from unanimous on whether we can—or should— develop any form of friendship with robots. For an influential group of UK researchers who charted a set of “ethical principles of robotics,” human-robot “companionship” is an oxymoron, and to market robots as having social capabilities is dishonest and should be treated with caution, if not alarm. For these researchers, wasting emotional energy on entities that can only simulate emotions will always be less rewarding than forming human-to-human bonds.
But people are already developing bonds with basic robots, like vacuum-cleaning and lawn-trimming machines that can be bought for less than the price of a dishwasher. A surprisingly large number of people give these robots pet names—something they don’t do with their dishwashers. Some even take their cleaning robots on holiday.
Other evidence of emotional bonds with robots includes the Shinto blessing ceremony for Sony Aibo robot dogs that were dismantled for spare parts, and the squad of US troops who fired a 21-gun salute and awarded medals to a bomb-disposal robot named “Boomer” after it was destroyed in action.
These stories, and the psychological evidence we have so far, make clear that we can extend emotional connections to things that are very different to us, even when we know they are manufactured and pre-programmed. But do those connections constitute a friendship comparable to that shared between humans?
True Friendship?
A colleague and I recently reviewed the extensive literature on human-to-human relationships to try to understand how, and if, the concepts we found could apply to bonds we might form with robots. We found evidence that many coveted human-to-human friendships do not in fact live up to Aristotle’s ideal.
We noted a wide range of human-to-human relationships, from relatives and lovers to parents, carers, service providers, and the intense (but unfortunately one-way) relationships we maintain with our celebrity heroes. Few of these relationships could be described as completely equal and, crucially, they are all destined to evolve over time.
All this means that expecting robots to form Aristotelian bonds with us is to set a standard even human relationships fail to live up to. We also observed forms of social connectedness that are rewarding and satisfying and yet are far from the ideal friendship outlined by the Greek philosopher.
We know that social interaction is rewarding in its own right, and something that, as social mammals, humans have a strong need for. It seems probable that relationships with robots could help address the deep-seated urge we all feel for social connection by providing some of the physical comfort, emotional support, and enjoyable social exchanges currently provided by other humans.
Our paper also discussed some potential risks. These arise particularly in settings where interaction with a robot could come to replace interaction with people, or where people are denied a choice as to whether they interact with a person or a robot—in a care setting, for instance.
These are important concerns, but they’re possibilities and not inevitabilities. In the literature we reviewed we actually found evidence of the opposite effect: robots acting to scaffold social interactions with others, acting as ice-breakers in groups, and helping people to improve their social skills or to boost their self-esteem.
It appears likely that, as time progresses, many of us will simply follow Frank’s path towards acceptance: scoffing at first, before settling into the idea that robots can make surprisingly good companions. Our research suggests that’s already happening—though perhaps not in a way of which Aristotle would have approved.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Andy Kelly on Unsplash Continue reading →
#438080 Boston Dynamics’ Spot Robot Is Now ...
Boston Dynamics has been working on an arm for its Spot quadruped for at least five years now. There have been plenty of teasers along the way, including this 45-second clip from early 2018 of Spot using its arm to open a door, which at 85 million views seems to be Boston Dynamics’ most popular video ever by a huge margin. Obviously, there’s a substantial amount of interest in turning Spot from a highly dynamic but mostly passive sensor platform into a mobile manipulator that can interact with its environment.
As anyone who’s done mobile manipulation will tell you, actually building an arm is just the first step—the really tricky part is getting that arm to do exactly what you want it to do. In particular, Spot’s arm needs to be able to interact with the world with some amount of autonomy in order to be commercially useful, because you can’t expect a human (remote or otherwise) to spend all their time positioning individual joints or whatever to pick something up. So the real question about this arm is whether Boston Dynamics has managed to get it to a point where it’s autonomous enough that users with relatively little robotics experience will be able to get it to do useful tasks without driving themselves nuts.
Today, Boston Dynamics is announcing commercial availability of the Spot arm, along with some improved software called Scout plus a self-charging dock that’ll give the robot even more independence. And to figure out exactly what Spot’s new arm can do, we spoke with Zachary Jackowski, Spot Chief Engineer at Boston Dynamics.
Although Boston Dynamics’ focus has been on dynamic mobility and legged robots, the company has been working on manipulation for a very long time. We first saw an arm prototype on an early iteration of Spot in 2016, where it demonstrated some impressive functionality, including loading a dishwasher and fetching a beer in a way that only resulted in a minor catastrophe. But we’re guessing that Spot’s arm can trace its history back to BigDog’s crazy powerful hydraulic face-arm, which was causing mayhem with cinder blocks back in 2013:
Spot’s arm is not quite that powerful (it has to drag cinder blocks along the ground rather than fling them into space), but you can certainly see the resemblance. Here’s the video that Boston Dynamics posted yesterday to introduce Spot’s new arm:
A couple of things jumped out from this video right away. First, Spot is doing whole body manipulation with its arm, as opposed to just acting as a four-legged base that brings the arm where it needs to go. Planning looks to be very tightly integrated, such that if you ask the robot to manipulate an object, its arm, legs, and torso all work together to optimize that manipulation. Also, when Spot flips that electrical switch, you see the robot successfully grasp the switch, and then reposition its body in a way that looks like it provides better leverage for the flip, which is a neat trick. It looks like it may be able to use the strength of its legs to augment the strength of its arm, as when it’s dragging the cinder block around, which is surely an homage to BigDog. The digging of a hole is particularly impressive. But again, the real question is how much of this is autonomous or semi-autonomous in a way that will be commercially useful?
Before we get to our interview with Spot Chief Engineer Zack Jackowski, it’s worth watching one more video that Boston Dynamics shared with us:
This is notable because Spot is opening a door that’s not ADA compliant, and the robot is doing it with a simple two-finger gripper. Most robots you see interacting with doors rely on ADA compliant hardware, meaning (among other things) a handle that can be pushed rather than a knob that has to be twisted, because it’s much more challenging for a robot to grasp and twist a smooth round door knob than it is to just kinda bash down on a handle. That capability, combined with Spot being able to pass through a spring-loaded door, potentially opens up a much wider array of human environments to the robot, and that’s where we started our conversation with Jackowski.
IEEE Spectrum: At what point did you decide that for Spot’s arm to be useful, it had to be able to handle round door knobs?
Zachary Jackowski: We're like a lot of roboticists, where someone in a meeting about manipulation would say “it's time for the round doorknob” and people would start groaning a little bit. But the reality is that, in order to make a robot useful, you have to engage with the environments that users have. Spot’s arm uses a very simple gripper—it’s a one degree of freedom gripper, but a ton of thought has gone into all of the fine geometric contours of it such that it can grab that ADA compliant lever handle, and it’ll also do an enclosing grasp around a round door knob. The major point of a robot like Spot is to engage with the environment you have, and so you can’t cut out stuff like round door knobs.
We're thrilled to be launching the arm and getting it out with users and to have them start telling us what doors it works really well on, and what they're having trouble with. And we're going to be working on rapidly improving all this stuff. We went through a few campaigns of like, “this isn’t ready until we can open every single door at Boston Dynamics!” But every single door at Boston Dynamics and at our test lab is a small fraction of all the doors in the world. So we're prepared to learn a lot this year.
When we see Spot open a door, or when it does those other manipulation behaviors in the launch video, how much of that is autonomous, how much is scripted, and to what extent is there a human in the loop?
All of the scenes where the robot does a pick, like the snow scene or the laundry scene, that is actually an almost fully integrated autonomous behavior that has a bit of a script wrapped around it. We trained a detector for an object, and the robot is identifying that object in the environment, picking it, and putting it in the bin all autonomously. The scripted part of that is telling the robot to perform a series of picks.
One of the things that we’re excited about, and that roboticists have been excited about going back probably all the way to the DRC, is semi-autonomous manipulation. And so we have modes built into the interface where if you see an object that you want the robot to grab, all you have to do is tap that object on the screen, and the robot will walk up to it, use the depth camera in its gripper to capture a depth map, and plan a grasp on its own in real time. That’s all built-in, too.
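As an aside, the sketch below illustrates the first step in that kind of tap-to-grasp flow. It is not Boston Dynamics’ code or the Spot SDK; it only shows how a tapped pixel and a depth measurement can be back-projected into a 3D target point using the pinhole camera model, after which a real system would walk toward the point and plan a grasp around the local geometry.

```python
# Illustrative only (not the Spot SDK): back-project a tapped pixel plus a
# depth measurement into a 3D point in the camera frame, the starting point
# for walking up to an object and planning a grasp.
import numpy as np

def pixel_to_point(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) using its measured depth."""
    z = depth_m[v, u]              # depth in meters at the tapped pixel
    x = (u - cx) * z / fx          # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy          # Y = (v - cy) * Z / fy
    return np.array([x, y, z])

# Toy example: a flat scene 1.5 m away and made-up camera intrinsics.
depth = np.full((480, 640), 1.5)
target = pixel_to_point(320, 240, depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print("grasp target in camera frame (m):", target)
```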
The jump rope—robots don’t just go and jump rope on their own. We scripted an arm motion to move the rope, and wrote a script using our API to coordinate all three robots. Drawing “Boston Dynamics” in chalk in our parking lot was scripted also. One of our engineers wrote a really cool G-code interpreter that vectorizes graphics so that Spot can draw them.
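For a sense of what that kind of interpreter does (this is a hypothetical sketch, not Boston Dynamics’ tool), the drawing problem reduces to turning already-vectorized paths into G-code moves: rapid G0 travel moves with the “chalk” lifted, and G1 moves at drawing height to trace each path.

```python
# Hypothetical sketch: turn a polyline (already-vectorized graphics) into
# simple G-code, using G0 for travel with the marker lifted and G1 for
# drawing moves. Not Boston Dynamics' interpreter.
def polyline_to_gcode(points, draw_z=0.0, travel_z=10.0, feed=1200):
    """Lift, travel to the first point, lower, then trace the rest."""
    x0, y0 = points[0]
    lines = [
        f"G0 Z{travel_z:.2f}",          # lift the marker
        f"G0 X{x0:.2f} Y{y0:.2f}",      # rapid move to the start point
        f"G1 Z{draw_z:.2f} F{feed}",    # lower to drawing height
    ]
    lines += [f"G1 X{x:.2f} Y{y:.2f} F{feed}" for x, y in points[1:]]
    lines.append(f"G0 Z{travel_z:.2f}")  # lift again when finished
    return "\n".join(lines)

# A triangle as a toy vector path.
print(polyline_to_gcode([(0, 0), (100, 0), (50, 80), (0, 0)]))
```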
So for an end user, if you wanted Spot to autonomously flip some switches for you, you’d just have to train Spot on your switches, and then Spot could autonomously perform the task?
There are a couple of ways that task could break down depending on how you’re interfacing with the robot. If you’re a tablet user, you’d probably just identify the switch yourself on the tablet’s screen, and the robot will figure out the grasp, and grasp it. Then you’ll enter a constrained manipulation mode on the tablet, and the robot will be able to actuate the switch. But the robot will take care of the complicated controls aspects, like figuring out how hard it has to pull, the center of rotation of the switch, and so on.
The video of Spot digging was pretty cool—how did that work?
That’s mostly a scripted behavior. There are some really interesting control systems topics in there, like how you’d actually do the right kinds of force control while you insert the trowel into the dirt, and how to maintain robot stability while you do it. The higher level task of how to make a good hole in the dirt—that’s scripted. But the part of the problem that’s actually digging, you need the right control system to actually do that, or you’ll dig your trowel into the ground and flip your robot over.
The last time we saw Boston Dynamics robots flipping switches and turning valves I think might have been during the DRC in 2015, when they had expert robot operators with control over every degree of freedom. How are things different now with Spot, and will non-experts in the commercial space really be able to get the robot to do useful tasks?
A lot of the things, like “pick the stuff up in the room,” or “turn that switch,” can all be done by a lightly trained operator using just the tablet interface. If you want to actually command all of Spot’s arm degrees of freedom, you can do that—not through the tablet, but the API does expose all of it. That’s actually a notable difference from the base robot; we’ve never opened up the part of the API that lets you command individual leg degrees of freedom, because we don’t think it’s productive for someone to do that. The arm is a little bit different. There are a lot of smart people working on arm motion planning algorithms, and maybe you want to plan your arm trajectory in a super precise way and then do a DRC-style interface where you click to approve it. You can do all that through the API if you want, but fundamentally, it’s also user friendly. It follows our general API design philosophy of giving you the highest level pieces of the toolbox that will enable you to solve a complex problem that we haven't thought of.
Looking back on it now, it’s really cool to see, after so many years, robots do the stuff that Gill Pratt was excited about kicking off with the DRC. And now it’s just a thing you can buy.
Is Spot’s arm safe?
You should follow the same safety rules that you’d follow when working with Spot normally, and that’s that you shouldn’t get within two meters of the robot when it’s powered on. Spot is not a cobot. You shouldn’t hug it. Fundamentally, the places where the robot is the most valuable are places where people don’t want to be, or shouldn’t be.
We’ve seen how people reacted to earlier videos of Spot using its arm—can you help us set some reasonable expectations for what this means for Spot?
You know, it gets right back to the normal assumptions about our robots that people make that aren’t quite reality. All of this manipulation work we’re doing— the robot’s really acting as a tool. Even if it’s an autonomous behavior, it’s a tool. The robot is digging a hole because it’s got a set of instructions that say “apply this much force over this much distance here, here, and here.”
It’s not digging a hole and planting a tree because it loves trees, as much as I’d love to build a robot that works like that.
Photo: Boston Dynamics
There isn’t too much to say about the dock, except that it’s a requirement for making Spot long-term autonomous. The uncomfortable looking charging contacts that Spot impales itself on also include hardwired network connectivity, which is important because Spot often comes back home with a huge amount of data that all needs to be offloaded and processed. Docking and undocking are autonomous— as soon as the robot sees the fiducial markers on the dock, auto docking is enabled and it takes one click to settle the robot down.
During a brief remote demo, we also learned some other interesting things about Spot’s updated remote interface. It’s very latency tolerant, since you don’t have to drive the robot directly (although you can if you want to). Click a point on the camera view and Spot will move there autonomously while avoiding obstacles, meaning that even if you’re dealing with seconds of lag, the robot will continue making safe progress. This will be especially important if (when?) Spot starts exploring the Moon.
The remote interface also has an option to adjust how close Spot can get to obstacles, or to turn the obstacle avoidance off altogether. The latter functionality is useful if Spot sees something as an obstacle that really isn’t, like a curtain, while the former is useful if the robot is operating in an environment where it needs to give an especially wide berth to objects that could be dangerous to run into. “The robot’s not perfect—robots will never be perfect,” Jackowski reminds us, which is something we really (seriously) appreciate hearing from folks working on powerful, dynamic robots. “No matter how good the robot is, you should always de-risk as much as possible.”
Another part of that de-risking is having the user let Spot know when it’s about to go up or down some stairs by putting it into “Stair Mode” with a toggle switch in the remote interface. Stairs are still a challenge for Spot, and Stair Mode slows the robot down and encourages it to pitch its body more aggressively to get a better view of the stairs. You’re encouraged to use stair mode, and also encouraged to send Spot up and down stairs with its “head” pointing up the stairs both ways, but these are not requirements for stair navigation—if you want to, you can send Spot down stairs head first without putting it in stair mode. Jackowski says that eventually, Spot will detect stairways by itself even when not in stair mode and adjust itself accordingly, but for now, that de-risking is solidly in the hands of the user.
Spot’s sensor payload, which is what we were trying out for the demo, provided a great opportunity for us to hear Spot STOMP STOMP STOMPING all over the place, which was also an opportunity for us to ask Jackowski why they can’t make Spot a little quieter. “It’s advantageous for Spot to step a little bit hard for the same reason it’s advantageous for you to step a little bit hard if you’re walking around blindfolded—that reason is that it really lets you know where the ground is, particularly when you’re not sure what to expect.” He adds, “It’s all in the name of robustness— the robot might be a little louder, but it’s a little more sure of its footing.”
Boston Dynamics isn’t yet ready to disclose the price of an arm-equipped Spot, but if you’re a potential customer, now is the time to contact the Boston Dynamics sales team to ask them about it. As a reminder, the base model of Spot costs US $74,500, with extra sensing or compute adding a substantial premium on top of that.
There will be a livestream launch event taking place at 11am ET today, during which Boston Dynamics’ CEO Robert Playter, VP of Marketing Michael Perry, and other folks from Boston Dynamics will make presentations on this new stuff. It’ll be live at this link, or you can watch it below. Continue reading →
#437912 “Boston Dynamics Will Continue to ...
Last week’s announcement that Hyundai acquired Boston Dynamics from SoftBank left us with a lot of questions. We attempted to answer many of those questions ourselves, which is typically bad practice, but sometimes it’s the only option when news like that breaks.
Fortunately, yesterday we were able to speak with Michael Patrick Perry, vice president of business development at Boston Dynamics, who candidly answered our questions about Boston Dynamics’ new relationship with Hyundai and what the near future has in store.
IEEE Spectrum: Boston Dynamics is worth 1.1 billion dollars! Can you put that valuation into context for us?
Michael Patrick Perry: Since 2018, we’ve shifted to becoming a commercial organization. And that’s included a number of things, like taking our existing technology and bringing it to market for the first time. We’ve gone from zero to 400 Spot robots deployed, building out an ecosystem of software developers, sensor providers, and integrators. With that scale of deployment and looking at the pipeline of opportunities that we have lined up over the next year, I think people have started to believe that this isn’t just a one-off novelty—that there’s actual value that Spot is able to create. Secondly, with some of our efforts in the logistics market, we’re getting really strong signals both with our Pick product and also with some early discussions around Handle’s deployment in warehouses, which we think are going to be transformational for that industry.
So, the thing that’s really exciting is that two years ago, we were talking about this vision, and people said, “Wow, that sounds really cool, let’s see how you do.” And now we have the validation from the market saying both that this is actually useful, and that we’re able to execute. And that’s where I think we’re starting to see belief in the long-term viability of Boston Dynamics, not just as a cutting-edge research shop, but also as a business.
Photo: Boston Dynamics
Boston Dynamics says it has deployed 400 Spot robots, building out an “ecosystem of software developers, sensor providers, and integrators.”
How would you describe Hyundai’s overall vision for the future of robotics, and how do they want Boston Dynamics to fit into that vision?
In the immediate term, Hyundai’s focus is to continue our existing trajectories, with Spot, Handle, and Atlas. They believe in the work that we’ve done so far, and we think that combining with a partner that understands many of the industries we’re targeting, whether it’s manufacturing, construction, or logistics, can help us improve our products. And obviously as we start thinking about producing these robots at scale, Hyundai’s expertise in manufacturing is going to be really helpful for us.
Looking down the line, both Boston Dynamics and Hyundai believe in the value of smart mobility, and they’ve made a number of plays in that space. Whether it’s urban air mobility or autonomous driving, they’ve been really thinking about connecting the digital and the physical world through moving systems, whether that’s a car, a vertical takeoff and landing multi-rotor vehicle, or a robot. We are well positioned to take on the robotics side of that while also connecting to some of these other autonomous services.
Can you tell us anything about the kind of robotics that the Hyundai Motor Group has going on right now?
So they’re working on a lot of really interesting stuff—exactly how that connects, you know, it’s early days, and we don’t have anything explicitly to share. But they’ve got a smart and talented robotics team that’s working in a variety of directions that shares overlap with us. Obviously, a lot of things related to autonomous driving share some DNA with the work that we’re doing in autonomy for Spot and Handle, so it’s pretty exciting to see.
What are you most excited about here? How do you think this deal will benefit Boston Dynamics?
I think there are a number of things. One is that they have an expertise in hardware, in a way that’s unique. They understand and appreciate the complexity of creating large complex robotic systems. So I think there’s some shared understanding of what it takes to create a great hardware product. And then also they have the resources to help us actually build those products with them together—they have manufacturing resources and things like that.
“Robotics isn’t a short term game. We’ve scaled pretty rapidly but if you start looking at what the full potential of a company like Boston Dynamics is, it’s going to take years to realize, and I think Hyundai is committed to that long-term vision”
Another thing that’s exciting is that Hyundai has some pretty visionary bets for autonomous driving and unmanned aerial systems, and all of that fits very neatly into the connected vision of robotics that we were talking about before. Robotics isn’t a short term game. We’ve scaled pretty rapidly for a robotics company in terms of the scale of robots we’ve been able to deploy in the field, but if you start looking at what the full potential of a company like Boston Dynamics is, it’s going to take years to realize, and I think Hyundai is committed to that long-term vision.
And when you’ve been talking with Hyundai, what are they most excited about?
I think they’re really excited about our existing products and our technology. Looking at some of the things that Spot, Pick, and Handle are able to do now, there are applications that many of Hyundai’s customers could benefit from in terms of mobility, remote sensing, and material handling. Looking down the line, Hyundai is also very interested in smart city technology, and mobile robotics is going to be a core piece of that.
We tend to focus on Spot and Handle and Atlas in terms of platform capabilities, but can you talk a bit about some of the component-level technology that’s unique to Boston Dynamics, and that could be of interest to Hyundai?
Creating very power-dense actuator design is something that we’ve been successful at for several years, starting back with BigDog and LS3. And Handle has some hydraulic actuators and valves that are pretty unique in terms of their design and capability. Fundamentally, we have a systems engineering approach that brings together both hardware and software internally. You’ll often see different groups that specialize in something, like great mechanical or electrical engineering groups, or great controls teams, but what I think makes Boston Dynamics so special is that we’re able to put everything on the table at once to create a system that’s incredibly capable. And that’s why with something like Spot, we’re able to produce it at scale, while also making it flexible enough for all the different applications that the robot is being used for right now.
It’s hard to talk specifics right now, but there are obviously other disciplines within mechanical engineering or electrical engineering or controls for robots or autonomous systems where some of our technology could be applied.
Photo: Boston Dynamics
Boston Dynamics is in the process of commercializing Handle, iterating on its design and planning to get box-moving robots on-site with customers in the next year or two.
While Boston Dynamics was part of Google, and then SoftBank, it seems like there’s been an effort to maintain independence. Is it going to be different with Hyundai? Will there be more direct integration or collaboration?
Obviously it’s early days, but right now, we have support to continue executing against all the plans that we have. That includes all the commercialization of Spot, as well as things for Atlas, which is really going to be pushing the capability of our team to expand into new areas. That’s going to be our immediate focus, and we don’t see anything that’s going to pull us away from that core focus in the near term.
As it stands right now, Boston Dynamics will continue to be Boston Dynamics under this new ownership.
How much of what you do at Boston Dynamics right now would you characterize as fundamental robotics research, and how much is commercialization? And how do you see that changing over the next couple of years?
We have been expanding our commercial team, but we certainly keep a lot of the core capabilities of fundamental robotics research. Some of it is very visible, like the new behavior development for Atlas where we’re pushing the limits of perception and path planning. But a lot of the stuff that we’re working on is a little bit under the hood, things that are less obvious—terrain handling, intervention handling, how to make safe faults, for example. Initially when Spot started slipping on things, it would flail around trying to get back up. We’ve had to figure out the right balance between the robot struggling to stand, and when it should decide to just lock its limbs and fall over because it’s safer to do that.
I’d say the other big thrust for us is manipulation. Our gripper for Spot is coming out early next year, and that’s going to unlock a new set of capabilities for us. We have years and years of locomotion experience, but the ability to manipulate is a space that’s still relatively new to us. So we’ve been ramping up a lot of work over the last several years trying to get to an early but still valuable iteration of the technology, and we’ll continue pushing on that as we start learning what’s most useful to our customers.
“I’d say the other big thrust for us is manipulation. Our gripper for Spot is coming out early next year, and that’s going to unlock a new set of capabilities for us. We have years and years of locomotion experience, but the ability to manipulate is a space that’s still relatively new to us”
Looking back, Spot as a commercial robot has a history that goes back to robots like LS3 and BigDog, which were very ambitious projects funded by agencies like DARPA without much in the way of commercial expectations. Do you think these very early stage, very expensive, very technical projects are still things that Boston Dynamics can take on?
Yes—I would point to a lot of the things we do with Atlas as an example of that. While we don’t have immediate plans to commercialize Atlas, we can point to technologies that come out of Atlas that have enabled some of our commercial efforts over time. There’s not necessarily a clear roadmap of how every piece of Atlas research is going to feed over into a commercial product; it’s more like, this is a really hard fundamental robotics challenge, so let’s tackle it and learn things that we can then benefit from across the company.
And fundamentally, our team loves doing cool stuff with robots, and you’ll continue seeing that in the months to come.
Photo: Boston Dynamics
Spot’s arm with gripper is coming out early next year, and Boston Dynamics says that’s going to “unlock a new set of capabilities for us.”
What would it take to commercialize Atlas? And are you getting closer with Handle?
We’re in the process of commercializing Handle. We’re at a relatively early stage, but we have a plan to get the first versions for box moving on-site with customers in the next year or two. Last year, we did some on-site deployments as proof-of-concept trials, and using the feedback from that, we did a new design pass on the robot, and we’re looking at increasing our manufacturing capability. That’s all in progress.
For Atlas, it’s like the Formula 1 of robots—you’re not going to take a Formula 1 car and try to make it less capable so that you can drive it on the road. We’re still trying to see what are some applications that would necessitate an energy and computationally intensive humanoid robot as opposed to something that’s more inherently stable. Trying to understand that application space is something that we’re interested in, and then down the line, we could look at creating new morphologies to help address specific applications. In many ways, Handle is the first version of that, where we said, “Atlas is good at moving boxes but it’s very complicated and expensive, so let’s create a simpler and smaller design that can achieve some of the same things.”
The press release mentioned a mobile robot for warehouses that will be introduced next year—is that Handle?
Yes, that’s the work that we’re doing on Handle.
As we start thinking about a whole robotic solution for the warehouse, we have to look beyond a high power, low footprint, dynamic platform like Handle and also consider things that are a little less exciting on video. We need a vision system that can look at a messy stack of boxes and figure out how to pick them up, we need an interface between a robot and an order building system—things where people might question why Boston Dynamics is focusing on them because it doesn’t fit in with our crazy backflipping robots, but it’s really incumbent on us to create that full end-to-end solution.
Are you confident that under Hyundai’s ownership, Boston Dynamics will be able to continue taking the risks required to remain on the cutting edge of robotics?
I think we will continue to push the envelope of what robots are capable of, and I think in the near term, you’ll be able to see that realized in our products and the research that we’re pushing forward with. 2021 is going to be a great year for us. Continue reading →