
#437851 Boston Dynamics’ Spot Robot Dog ...

Boston Dynamics has been fielding questions about when its robots are going to go on sale and how much they’ll cost for at least a dozen years now. I can say this with confidence, because that’s how long I’ve been a robotics journalist, and I’ve been pestering them about it the entire time. But it’s only relatively recently that the company started to make a concerted push away from developing robots exclusively for the likes of DARPA and toward platforms with more commercial potential, starting with a compact legged robot called Spot, first introduced in 2016.

Since then, we’ve been following closely as Spot has gone from a research platform to a product, and today, Boston Dynamics is announcing the final step in that process: commercial availability. You can now order a Spot Explorer Kit from the Boston Dynamics online store for US $74,500 (plus tax), shipping included, with delivery in 6 to 8 weeks. FINALLY!

Over the past 10 months or so, Boston Dynamics has leased Spot robots to carefully selected companies, research groups, and even a few individuals as part of its early adopter program—that’s where all of the clips in the video below came from. While there are over 100 Spots out in the world right now, getting one of them has required convincing Boston Dynamics up front that you knew more or less exactly what you wanted to do and how you wanted to do it. If you’re a big construction company or the Jet Propulsion Laboratory or Adam Savage, that’s all well and good, but for other folks who think that a Spot could be useful for them somehow and want to give it a shot, this new availability provides an opportunity with fewer strings attached to do some experimentation with the robot.

There’s a lot of cool stuff going on in that video, but we were told that the one thing that really stood out to the folks at Boston Dynamics was a 2-second clip that you can see on the left-hand side of the screen from 0:19 to 0:21. In it, Spot is somehow managing to walk across a spider web of rebar without getting tripped up, at faster than human speed. This isn’t something that Spot was specifically programmed to do, and in fact the Spot User Guide specifically identifies “rebar mesh” as an unsafe operating environment. But the robot just handles it, and that’s a big part of what makes Spot so useful—its ability to deal with (almost) whatever you can throw at it.

Before you get too excited, Boston Dynamics is fairly explicit that the current license for the robot is intended for commercial use, and the company specifically doesn’t want people to be just using it at home for fun. We know this because we asked (of course we asked), and they told us “we specifically don’t want people to just be using it at home for fun.” Drat. You can still buy one as an individual, but you have to promise that you’ll follow the terms of use and user guidelines, and it sounds like using a robot in your house might be the second-fastest way to invalidate your warranty:

SPOT IS AN AMAZING ROBOT, BUT IS NOT CERTIFIED SAFE FOR IN-HOME USE OR INTENDED FOR USE NEAR CHILDREN OR OTHERS WHO MAY NOT APPRECIATE THE HAZARDS ASSOCIATED WITH ITS OPERATION.

Not being able to get Spot to play with your kids may be disappointing, but for those of you with the sort of kids who are also students, the good news is that Boston Dynamics has carved out a niche for academic institutions, which can buy Spot at a discounted price. And if you want to buy a whole pack of Spots, there’s a bulk discount for Enterprise users as well.

What do you get for $74,500? All this!

Spot robot
Spot battery (2x)
Spot charger
Tablet controller and charger
Robot case for storage and transportation
FREE SHIPPING!

Photo: Boston Dynamics

The basic package includes the robot, two batteries, charger, a tablet controller, and a storage case.

You can view detailed specs here.

So is $75k a lot of money for a robot like Spot, or not all that much? We don’t have many useful points of comparison, partially because it’s not clear to what extent other pre-commercial quadrupedal robots (like ANYmal or Aliengo) share capabilities and features with Spot. For more perspective on Spot’s price tag, we spoke to Michael Perry, vice president of business development at Boston Dynamics.

IEEE Spectrum: Why is Spot so affordable?

Michael Perry: The main goal of selling the robot at this stage is to try to get it into the hands of as many application developers as possible, so that we can learn from the community what the biggest driver of value is for Spot. As a platform, unlocking the value of an ecosystem is our core focus right now.

Spectrum: Why is Spot so expensive?

Perry: Expensive is relative, but compared to the initial prototypes of Spot, we’ve been able to drop down the cost pretty significantly. One key thing has been designing it for robustness—we’ve put hundreds and hundreds of hours on the robot to make sure that it’s able to be successful when it falls, or when it has an electrostatic discharge. We’ve made sure that it’s able to perceive a wide variety of environments that are difficult for traditional vision-based sensors to handle. A lot of that engineering is baked into the core product so that you don’t have to worry about the mobility or robotic side of the equation, you can just focus on application development.

Photos: Boston Dynamics

Accessories for Spot include [clockwise from top left]: Spot GXP with additional ports for payload integration; Spot CAM with panorama camera and advanced comms; Spot CAM+ with pan-tilt-zoom camera for inspections; Spot EAP with lidar to enhance autonomy on large sites; Spot EAP+ with Spot CAM camera plus lidar; and Spot CORE for additional processing power.

The $75k that you’ll pay for the Spot Explorer Kit, it’s important to note, is just the base price for the robot. As with other things that fall into this price range (like a luxury car), there are all kinds of fun ways to drive that cost up with accessories, although for Spot, some of those accessories will be necessary for many (if not most) applications. For example, a couple of expansion ports to make it easier to install your own payloads on Spot will run you $1,275. An additional battery is $4,620. And if you want to really get some work done, the Enhanced Autonomy Package (with 360 cameras, lights, better comms, and a Velodyne VLP-16) will set you back an additional $34,570. If you were hoping for an arm, you’ll have to wait until the end of the year.

Each Spot also includes a year’s worth of software updates and a warranty, although the standard warranty just covers “defects related to materials and workmanship” not “I drove my robot off a cliff” or “I tried to take my robot swimming.” For that sort of thing (user error) to be covered, you’ll need to upgrade to the $12,000 Spot CARE premium service plan to cover your robot for a year as long as you don’t subject it to willful abuse, which both of those examples I just gave probably qualify as.

While we’re on the subject of robot abuse, Boston Dynamics has very sensibly devoted a substantial amount of the Spot User Guide to helping new users understand how they should not be using their robot, in order to “lessen the risk of serious injury, death, or robot and other property damage.” According to the guide, some things that could cause Spot to fall include holes, cliffs, slippery surfaces (like ice and wet grass), and cords. Spot’s sensors also get confused by “transparent, mirrored, or very bright obstacles,” and the guide specifically says Spot “may crash into glass doors and windows.” Also this: “Spot cannot predict trajectories of moving objects. Do not operate Spot around moving objects such as vehicles, children, or pets.”

We should emphasize that this is all totally reasonable, and while there are certainly a lot of things to be aware of, it’s frankly astonishing that these are the only things that Boston Dynamics explicitly warns users against. Obviously, not every potentially unsafe situation or thing is described above, but the point is that Boston Dynamics is willing to say to new users, “here’s your robot, go do stuff with it” without feeling the need to hold their hand the entire time.

There’s one more thing to be aware of before you decide to buy a Spot, which is the following:

“All orders will be subject to Boston Dynamics’ Terms and Conditions of Sale which require the beneficial use of its robots.”

Specifically, this appears to mean that you aren’t allowed to (or supposed to) use the robot in a way that could hurt living things, or “as a weapon, or to enable any weapon.” The conditions of sale also prohibit using the robot for “any illegal or ultra-hazardous purpose,” and there’s some stuff in there about it not being cool to use Spot for “nuclear, chemical, or biological weapons proliferation, or development of missile technology,” which seems weirdly specific.

“Once you make a technology more broadly available, the story of it starts slipping out of your hands. Our hope is that ahead of time we’re able to clearly articulate the beneficial uses of the robot in environments where we think the robot has a high potential to reduce the risk to people, rather than potentially causing harm.”
—Michael Perry, Boston Dynamics

I’m very glad that Boston Dynamics is being so upfront about requiring that Spot be used beneficially. However, it does put the company in a somewhat challenging position now that these robots are being sold. Boston Dynamics can (and will) perform some amount of due diligence before shipping a Spot, but ultimately, once the robots are in someone else’s hands, there’s only so much that BD can do.

Spectrum: Why is beneficial use important to Boston Dynamics?

Perry: One of the key things that we’ve highlighted many times in our license and terms of use is that we don’t want to see the robot being used in any way that inflicts physical harm on people or animals. There are philosophical reasons for that—I think all of us don’t want to see our technology used in a way that would hurt people. But also from a business perspective, robots are really terrible at conveying intention. In order for the robot to be helpful long-term, it has to be trusted as a piece of technology. So rather than looking at a robot and wondering, “is this something that could potentially hurt me,” we want people to think “this is a robot that’s here to help me.” To the extent that people associate Boston Dynamics with cutting edge robots, we think that this is an important stance for the rollout of our first commercial product. If we find out that somebody’s violated our terms of use, their warranty is invalidated, we won’t repair their product, and we have a licensing timeout that would prevent them from accessing their robot after that timeout has expired. It’s a remediation path, but we do think that it’s important to at least provide that as something that helps enforce our position on use of our technology.

It’s very important to keep all of this in context: Spot is a tool. It’s got some autonomy and the appearance of agency, but it’s still just doing what people tell it to do, even if those things might be unsafe. If you read through the user guide, it’s clear how much of an effort Boston Dynamics is making to try to convey the importance of safety to Spot users—and ultimately, barring some unforeseen and catastrophic software or hardware issues, safety is about the users, rather than Boston Dynamics or Spot itself. I bring this up because as we start seeing more and more Spots doing things without Boston Dynamics watching over them quite so closely, accidents are likely inevitable. Spot might step on someone’s foot. It might knock someone over. If Spot was perfectly safe, it wouldn’t be useful, and we have to acknowledge that its impressive capabilities come with some risks, too.

Photo: Boston Dynamics

Each Spot includes a year’s worth of software updates and a warranty, although the standard warranty just covers “defects related to materials and workmanship” not “I drove my robot off a cliff.”

Now that Spot is on the market for real, we’re excited to see who steps up and orders one. Depending on who the potential customer is, Spot could either seem like an impossibly sophisticated piece of technology that they’d never be able to use, or a magical way of solving all of their problems overnight. In reality, it’s of course neither of those things. For the former (folks with an idea but without a lot of robotics knowledge or experience), Spot does a lot out of the box, but BD is happy to talk with people and facilitate connections with partners who might be able to integrate specific software and hardware to get Spot to do a unique task. And for the latter (who may also be folks with an idea but without a lot of robotics knowledge or experience), BD’s Perry offers a reminder that Spot is not Rosie the Robot, and would be equally happy to talk about what the technology is actually capable of doing.

Looking forward a bit, we asked Perry whether Spot’s capabilities mean that customers are starting to think beyond using robots to simply replace humans, and are instead looking at them as a way of enabling a completely different way of getting things done.

Spectrum: Do customers interested in Spot tend to think of it as a way of replacing humans at a specific task, or as a system that can do things that humans aren’t able to do?

Perry: There are what I imagine as three levels of people understanding the robot applications. Right now, we’re at level one, where you take a person out of this dangerous, dull job, and put a robot in. That’s the entry point. The second level is, using the robot, can we increase the production of that task? For example, take site documentation on a construction site—right now, people do 360 image capture of a site maybe once a week, and they might do a laser scan of the site once per project. At the second level, the question is, what if you were able to get that data collection every day, or multiple times a day? What kinds of benefits would that add to your process? To continue the construction example, the third level would be, how could we completely redesign this space now that we know that this type of automation is available? To take one example, there are some things that we cannot physically build because it’s too unsafe for people to be a part of that process, but if you were to apply robotics to that process, then you could potentially open up a huge envelope of design that has been inaccessible to people.

To order a Spot of your very own, visit shop.bostondynamics.com.

A version of this post appears in the August 2020 print issue as “$74,500 Will Fetch You a Spot.”


#437820 In-Shoe Sensors and Mobile Robots Keep ...


Researchers at Stevens Institute of Technology are leveraging some of the newest mechanical and robotic technologies to help some of our oldest populations stay healthy, active, and independent.

Yi Guo, professor of electrical and computer engineering and director of the Robotics and Automation Laboratory, and Damiano Zanotto, assistant professor of mechanical engineering, and director of the Wearable Robotic Systems Laboratory, are collaborating with Ashley Lytle, assistant professor in Stevens’ College of Arts and Letters, and Ashwini K. Rao of Columbia University Medical Center, to combine an assistive mobile robot companion with wearable in-shoe sensors in a system designed to help elderly individuals maintain the balance and motion they need to thrive.

“Balance and motion can be significant issues for this population, and if elderly people fall and experience an injury, they are less likely to stay fit and exercise,” Guo said. “As a consequence, their level of fitness and performance decreases. Our mobile robot companion can help decrease the chances of falling and contribute to a healthy lifestyle by keeping their walking function at a good level.”

The mobile robots are designed to lead walking sessions and, using the in-shoe sensors, to monitor the user’s gait, indicate issues, and adjust the exercise speed and pace. The initiative is part of a four-year National Science Foundation research project.

“For the first time, we’re integrating our wearable sensing technology with an autonomous mobile robot,” said Zanotto, who worked with elderly people at Columbia University Medical Center for three years before coming to Stevens in 2016. “It’s exciting to be combining these different areas of expertise to leverage the strong points of wearable sensing technology, such as accurately capturing human movement, with the advantages of mobile robotics, such as much larger computational powers.”

The team is developing algorithms that fuse real-time data from smart, unobtrusive, in-shoe sensors and advanced on-board sensors to inform the robot’s navigation protocols and control the way the robot interacts with elderly individuals. It’s a promising way to assist seniors in safely doing walking exercises and maintaining their quality of life.

Bringing the benefits of the lab to life

Guo and Zanotto are working with Lytle, an expert in social and health psychology, to implement a social connectivity capability and make the bi-directional interaction between human and robot even more intuitive, engaging, and meaningful for seniors.

“Especially during COVID, it’s important for elderly people living on their own to connect socially with family and friends,” Zanotto said, “and the robot companion will also offer teleconferencing tools to provide that interaction in an intuitive and transparent way.”

“We want to use the robot for social connectedness, perhaps integrating it with a conversation agent such as Alexa,” Guo added. “The goal is to make it a companion robot that can sense, for example, that you are cooking, or you’re in the living room, and help with things you would do there.”

It’s a powerful example of how abstract concepts can have meaningful real-life benefits.

“As engineers, we tend to work in the lab, trying to optimize our algorithms and devices and technologies,” Zanotto noted, “but at the end of the day, what we do has limited value unless it has impact on real life. It’s fascinating to see how the devices and technologies we’re developing in the lab can be applied to make a difference for real people.”

Maintaining balance in a global pandemic

Although COVID-19 has delayed the planned testing at a senior center in New York City, it has not stopped the team’s progress.

“Although we can’t test on elderly populations yet, our students are still testing in the lab,” Guo said. “This summer and fall, for the first time, the students validated the system’s real-time ability to monitor and assess the dynamic margin of stability during walking—in other words, to evaluate whether the person following the robot is walking normally or has a risk of falling. They’re also designing parameters for the robot to give early warnings and feedback that help the human subjects correct posture and gait issues while walking.”
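To make the “dynamic margin of stability” the students are validating a little more concrete, here is a minimal sketch of how that quantity is commonly computed in gait biomechanics (the extrapolated-center-of-mass formulation from Hof and colleagues). It is not the Stevens team’s code: the variable names, the default pendulum length, and the warning threshold are all illustrative assumptions, and a real system would estimate center-of-mass state from the in-shoe and on-board sensors.

```python
import math

def margin_of_stability(com_pos, com_vel, bos_edge, leg_length=0.95, g=9.81):
    """Anteroposterior margin of stability (extrapolated center of mass), in meters.

    com_pos    -- center-of-mass position along the walking direction (m)
    com_vel    -- center-of-mass velocity along the walking direction (m/s)
    bos_edge   -- forward boundary of the base of support, e.g. lead-toe position (m)
    leg_length -- inverted-pendulum length used for the eigenfrequency (m, illustrative)
    """
    omega0 = math.sqrt(g / leg_length)       # inverted-pendulum eigenfrequency
    xcom = com_pos + com_vel / omega0        # extrapolated center of mass
    return bos_edge - xcom                   # positive = stable margin; near zero/negative = fall risk


# Illustrative use: flag a step whose margin drops below a chosen (hypothetical) threshold.
if margin_of_stability(com_pos=0.12, com_vel=1.3, bos_edge=0.55) < 0.05:
    print("low stability margin -- trigger vibrotactile warning")
```

In a setup like the one described, a check of this kind could run on each step and drive the in-shoe vibrotactile feedback when the margin shrinks.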

Those warnings would be literally underfoot, as the in-shoe sensors would pulse like a vibrating cell phone to deliver immediate directional information to the subject.

“We’re not the first to use this vibrotactile stimuli technology, but this application is new,” Zanotto said.

So far, the team has published papers in top robotics publication venues including IEEE Transactions on Neural Systems and Rehabilitation Engineering and the 2020 IEEE International Conference on Robotics and Automation (ICRA). It’s a big step toward realizing the synergies of bringing the technical expertise of engineers to bear on the clinical focus on biometrics—and the real lives of seniors everywhere.


#437809 Q&A: The Masterminds Behind ...

Illustration: iStockphoto

Getting a car to drive itself is undoubtedly the most ambitious commercial application of artificial intelligence (AI). The research project was kicked into life by the 2004 DARPA Grand Challenge and then taken up as a business proposition, first by Alphabet, and later by the big automakers.

The industry-wide effort vacuumed up many of the world’s best roboticists and set rival companies on a multibillion-dollar acquisitions spree. It also launched a cycle of hype that paraded ever more ambitious deadlines—the most famous of which, made by Alphabet’s Sergey Brin in 2012, was that full self-driving technology would be ready by 2017. Those deadlines have all been missed.

Much of the exhilaration was inspired by the seeming miracles that a new kind of AI—deep learning—was achieving in playing games, recognizing faces, and transcribing speech. Deep learning excels at tasks involving pattern recognition—a particular challenge for older, rule-based AI techniques. However, it now seems that deep learning will not soon master the other intellectual challenges of driving, such as anticipating what human beings might do.

Among the roboticists who have been involved from the start are Gill Pratt, the chief executive officer of Toyota Research Institute (TRI), formerly a program manager at the Defense Advanced Research Projects Agency (DARPA); and Wolfram Burgard, vice president of automated driving technology for TRI and president of the IEEE Robotics and Automation Society. The duo spoke with IEEE Spectrum’s Philip Ross at TRI’s offices in Palo Alto, Calif.

This interview has been condensed and edited for clarity.

IEEE Spectrum: How does AI handle the various parts of the self-driving problem?

Photo: Toyota

Gill Pratt

Gill Pratt: There are three different systems that you need in a self-driving car: It starts with perception, then goes to prediction, and then goes to planning.

The one that by far is the most problematic is prediction. It’s not prediction of other automated cars, because if all cars were automated, this problem would be much more simple. How do you predict what a human being is going to do? That’s difficult for deep learning to learn right now.

Spectrum: Can you offset the weakness in prediction with stupendous perception?

Photo: Toyota Research Institute

Wolfram Burgard

Wolfram Burgard: Yes, that is what car companies basically do. A camera provides semantics, lidar provides distance, radar provides velocities. But all this comes with problems, because sometimes you look at the world from different positions—that’s called parallax. Sometimes you don’t know which range estimate that pixel belongs to. That might make the decision complicated as to whether that is a person painted onto the side of a truck or whether this is an actual person.

With deep learning there is this promise that if you throw enough data at these networks, it’s going to work—finally. But it turns out that the amount of data that you need for self-driving cars is far larger than we expected.

Spectrum: When do deep learning’s limitations become apparent?

Pratt: The way to think about deep learning is that it’s really high-performance pattern matching. You have input and output as training pairs; you say this image should lead to that result; and you just do that again and again, for hundreds of thousands, millions of times.

Here’s the logical fallacy that I think most people have fallen prey to with deep learning. A lot of what we do with our brains can be thought of as pattern matching: “Oh, I see this stop sign, so I should stop.” But it doesn’t mean all of intelligence can be done through pattern matching.

“I asked myself, if all of those cars had automated drive, how good would they have to be to tolerate the number of crashes that would still occur?”
—Gill Pratt, Toyota Research Institute

For instance, when I’m driving and I see a mother holding the hand of a child on a corner and trying to cross the street, I am pretty sure she’s not going to cross at a red light and jaywalk. I know from my experience being a human being that mothers and children don’t act that way. On the other hand, say there are two teenagers—with blue hair, skateboards, and a disaffected look. Are they going to jaywalk? I look at that, you look at that, and instantly the probability in your mind that they’ll jaywalk is much higher than for the mother holding the hand of the child. It’s not that you’ve seen 100,000 cases of young kids—it’s that you understand what it is to be either a teenager or a mother holding a child’s hand.

You can try to fake that kind of intelligence. If you specifically train a neural network on data like that, you could pattern-match that. But you’d have to know to do it.

Spectrum: So you’re saying that when you substitute pattern recognition for reasoning, the marginal return on the investment falls off pretty fast?

Pratt: That’s absolutely right. Unfortunately, we don’t have the ability to make an AI that thinks yet, so we don’t know what to do. We keep trying to use the deep-learning hammer to hammer more nails—we say, well, let’s just pour more data in, and more data.

Spectrum: Couldn’t you train the deep-learning system to recognize teenagers and to assign the category a high propensity for jaywalking?

Burgard: People have been doing that. But it turns out that these heuristics you come up with are extremely hard to tweak. Also, sometimes the heuristics are contradictory, which makes it extremely hard to design these expert systems based on rules. This is where the strength of the deep-learning methods lies, because somehow they encode a way to see a pattern where, for example, here’s a feature and over there is another feature; it’s about the sheer number of parameters you have available.

Our separation of the components of a self-driving AI eases the development and even the learning of the AI systems. Some companies even think about using deep learning to do the job fully, from end to end, not having any structure at all—basically, directly mapping perceptions to actions.

Pratt: There are companies that have tried it; Nvidia certainly tried it. In general, it’s been found not to work very well. So people divide the problem into blocks, where we understand what each block does, and we try to make each block work well. Some of the blocks end up more like the expert system we talked about, where we actually code things, and other blocks end up more like machine learning.

Spectrum: So, what’s next—what new technique is in the offing?

Pratt: If I knew the answer, we’d do it. [Laughter]

Spectrum: You said that if all cars on the road were automated, the problem would be easy. Why not “geofence” the heck out of the self-driving problem, and have areas where only self-driving cars are allowed?

Pratt: That means putting in constraints on the operational design domain. This includes the geography—where the car should be automated; it includes the weather, it includes the level of traffic, it includes speed. If the car is going slow enough to avoid colliding without risking a rear-end collision, that makes the problem much easier. Street trolleys operate with traffic still in some parts of the world, and that seems to work out just fine. People learn that this vehicle may stop at unexpected times. My suspicion is, that is where we’ll see Level 4 autonomy in cities. It’s going to be in the lower speeds.

“We are now in the age of deep learning, and we don’t know what will come after.”
—Wolfram Burgard, Toyota Research Institute

That’s a sweet spot in the operational design domain, without a doubt. There’s another one at high speed on a highway, because access to highways is so limited. But unfortunately there is still the occasional debris that suddenly crosses the road, and the weather gets bad. The classic example is when somebody irresponsibly ties a mattress to the top of a car and it falls off; what are you going to do? And the answer is that terrible things happen—even for humans.

Spectrum: Learning by doing worked for the first cars, the first planes, the first steam boilers, and even the first nuclear reactors. We ran risks then; why not now?

Pratt: It has to do with the times. During the era where cars took off, all kinds of accidents happened, women died in childbirth, all sorts of diseases ran rampant; the expected characteristic of life was that bad things happened. Expectations have changed. Now the chance of dying in some freak accident is quite low because of all the learning that’s gone on, the OSHA [Occupational Safety and Health Administration] rules, UL code for electrical appliances, all the building standards, medicine.

Furthermore—and we think this is very important—we believe that empathy for a human being at the wheel is a significant factor in public acceptance when there is a crash. We don’t know this for sure—it’s a speculation on our part. I’ve driven, I’ve had close calls; that could have been me that made that mistake and had that wreck. I think people are more tolerant when somebody else makes mistakes, and there’s an awful crash. In the case of an automated car, we worry that that empathy won’t be there.

Photo: Toyota

Toyota is using this Platform 4 automated driving test vehicle, based on the Lexus LS, to develop Level-4 self-driving capabilities for its “Chauffeur” project.

Spectrum: Toyota is building a system called Guardian to back up the driver, and a more futuristic system called Chauffeur, to replace the driver. How can Chauffeur ever succeed? It has to be better than a human plus Guardian!

Pratt: In the discussions we’ve had with others in this field, we’ve talked about that a lot. What is the standard? Is it a person in a basic car? Or is it a person with a car that has active safety systems in it? And what will people think is good enough?

These systems will never be perfect—there will always be some accidents, and no matter how hard we try there will still be occasions where there will be some fatalities. At what threshold are people willing to say that’s okay?

Spectrum: You were among the first top researchers to warn against hyping self-driving technology. What did you see that so many other players did not?

Pratt: First, in my own case, during my time at DARPA I worked on robotics, not cars. So I was somewhat of an outsider. I was looking at it from a fresh perspective, and that helps a lot.

Second, [when I joined Toyota in 2015] I was joining a company that is very careful—even though we have made some giant leaps—with the Prius hybrid drive system as an example. Even so, in general, the philosophy at Toyota is kaizen—making the cars incrementally better every single day. That care meant that I was tasked with thinking very deeply about this thing before making prognostications.

And the final part: It was a new job for me. The first night after I signed the contract I felt this incredible responsibility. I couldn’t sleep that whole night, so I started to multiply out the numbers, all using a factor of 10. How many cars do we have on the road? Cars on average last 10 years, though ours last 20, but let’s call it 10. They travel on an order of 10,000 miles per year. Multiply all that out and you get 10 to the 10th miles per year for our fleet on Planet Earth, a really big number. I asked myself, if all of those cars had automated drive, how good would they have to be to tolerate the number of crashes that would still occur? And the answer was so incredibly good that I knew it would take a long time. That was five years ago.

Burgard: We are now in the age of deep learning, and we don’t know what will come after. We are still making progress with existing techniques, and they look very promising. But the gradient is not as steep as it was a few years ago.

Pratt: There isn’t anything that’s telling us that it can’t be done; I should be very clear on that. Just because we don’t know how to do it doesn’t mean it can’t be done.


#437783 Ex-Googler’s Startup Comes Out of ...

Over the last 10 years, the PR2 has helped roboticists make an enormous amount of progress in mobile manipulation over a relatively short time. I mean, it’s been a decade already, but still—robots are hard, and giving a bunch of smart people access to a capable platform where they didn’t have to worry about hardware and could instead focus on doing interesting and useful things helped to establish a precedent for robotics research going forward.

Unfortunately, not everyone can afford an enormous US $400,000 robot, and even if they could, PR2s are getting very close to the end of their lives. There are other mobile manipulators out there taking the place of the PR2, but so far, size and cost have largely restricted them to research labs. Lots of good research is being done, but it’s getting to the point where folks want to take the next step: making mobile manipulators real-world useful.

Today, a company called Hello Robot is announcing a new mobile manipulator called the Stretch RE1. With offices in the San Francisco Bay Area and in Atlanta, Ga., Hello Robot is led by Aaron Edsinger and Charlie Kemp, and by combining decades of experience in industry and academia they’ve managed to come up with a robot that’s small, lightweight, capable, and affordable, all at the same time. For now, it’s a research platform, but eventually, its creators hope that it will be able to come into our homes and take care of us when we need it to.

A fresh look at mobile manipulators
To understand the concept behind Stretch, it’s worth taking a brief look back at what Edsinger and Kemp have been up to for the past 10 years. Edsinger co-founded Meka Robotics in 2007, which built expensive, high performance humanoid arms, torsos, and heads for the research market. Meka was notable for being the first robotics company (as far as we know) to sell robot arms that used series elastic actuators, and the company worked extensively with Georgia Tech researchers. In 2011, Edsinger was one of the co-founders of Redwood Robotics (along with folks from SRI and Willow Garage), which was going to develop some kind of secret and amazing new robot arm before Google swallowed it in late 2013. At the same time, Google also acquired Meka and a bunch of other robotics companies, and Edsinger ended up at Google as one of the directors of its robotics program, until he left to co-found Hello Robot in 2017.

Meanwhile, since 2007 Kemp has been a robotics professor at Georgia Tech, where he runs the Healthcare Robotics Lab. Kemp’s lab was one of the 11 PR2 beta sites, giving him early experience with a ginormous mobile manipulator. Much of the research that Kemp has spent the last decade on involves robots providing assistance to untrained users, often through direct physical contact, and frequently either in their own homes or in a home environment. We should mention that the Georgia Tech PR2 is still going, most recently doing some clever material classification work in a paper for IROS later this year.

Photo: Hello Robot

Hello Robot co-founder and CEO Aaron Edsinger says that, although Stretch is currently a research platform, he hopes to see the robot deployed in home environments, adding that the “impact we want to have is through robots that are helpful to people in society.”

So with all that in mind, where’d Hello Robot come from? As it turns out, both Edsinger and Kemp were in Rodney Brooks’ group at MIT, so it’s perhaps not surprising that they share some of the same philosophies about what robots should be and what they should be used for. After collaborating on a variety of projects over the years, in 2017 Edsinger was thinking about his next step after Google when Kemp stopped by to show off some video of a new robot prototype that he’d been working on—the prototype for Stretch. “As soon as I saw it, I knew that was exactly the kind of thing I wanted to be working on,” Edsinger told us. “I’d become frustrated with the complexity of the robots being built to do manipulation in home environments and around people, and it solved a lot of problems in an elegant way.”

For Kemp, Stretch is an attempt to get everything he’s been teaching his robots out of his lab at Georgia Tech and into the world where it can actually be helpful to people. “Right from the beginning, we were trying to take our robots out to real homes and interact with real people,” says Kemp. Georgia Tech’s PR2, for example, worked extensively with Henry and Jane Evans, helping Henry (a quadriplegic) regain some of the bodily autonomy he had lost. With the assistance of the PR2, Henry was able to keep himself comfortable for hours without needing a human caregiver to be constantly with him. “I felt like I was making a commitment in some ways to some of the people I was working with,” Kemp told us. “But 10 years later, I was like, where are these things? I found that incredibly frustrating. Stretch is an effort to try to push things forward.”

A robot you can put in the backseat of a car
One way to put Stretch in context is to think of it almost as a reaction to the kitchen sink philosophy of the PR2. Where the PR2 was designed to be all the robot anyone could ever need (plus plenty of robot that nobody really needed) embodied in a piece of hardware that weighs 225 kilograms and cost nearly half a million dollars, Stretch is completely focused on being just the robot that is actually necessary in a form factor that’s both much smaller and affordable. The entire robot weighs a mere 23 kg in a footprint that’s just a 34 cm square. As you can see from the video, it’s small enough (and safe enough) that it can be moved by a child. The cost? At $17,950 apiece—or a bit less if you buy a bunch at once—Stretch costs a fraction of what other mobile manipulators sell for.

It might not seem like size or weight should be that big of an issue, but it very much is, explains Maya Cakmak, a robotics professor at the University of Washington, in Seattle. Cakmak worked with PR2 and Henry Evans when she was at Willow Garage, and currently has access to both a PR2 and a Fetch research robot. “When I think about my long term research vision, I want to deploy service robots in real homes,” Cakmak told us. Unfortunately, it’s the robots themselves that have been preventing her from doing this—both the Fetch and the PR2 are large enough that moving them anywhere requires a truck and a lift, which also limits the home that they can be used in. “For me, I felt immediately that Stretch is very different, and it makes a lot of sense,” she says. “It’s safe and lightweight, you can probably put it in the backseat of a car.” For Cakmak, Stretch’s size is the difference between being able to easily take a robot to the places she wants to do research in, and not. And cost is a factor as well, since a cheaper robot means more access for her students. “I got my refurbished PR2 for $180,000,” Cakmak says. “For that, with Stretch I could have 10!”

“I felt immediately that Stretch is very different. It’s safe and lightweight, you can probably put it in the backseat of a car. I got my refurbished PR2 for $180,000. For that, with Stretch I could have 10!”
—Maya Cakmak, University of Washington

Of course, a portable robot doesn’t do you any good if the robot itself isn’t sophisticated enough to do what you need it to do. Stretch is certainly a compromise in functionality in the interest of small size and low cost, but it’s a compromise that’s been carefully thought out, based on the experience that Edsinger has building robots and the experience that Kemp has operating robots in homes. For example, most mobile manipulators are essentially multi-degrees-of-freedom arms on mobile bases. Stretch instead leverages its wheeled base to move its arm in the horizontal plane, which (most of the time) works just as well as an extra DoF or two on the arm while saving substantially on weight and cost. Similarly, Stretch relies almost entirely on one sensor, an Intel RealSense D435i on a pan-tilt head that gives it a huge range of motion. The RealSense serves as a navigation camera, manipulation camera, a 3D mapping system, and more. It’s not going to be quite as good for a task that might involve fine manipulation, but most of the time it’s totally workable and you’re saving on cost and complexity.
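To make the base-as-a-degree-of-freedom idea a bit more concrete, here is a rough planar kinematics sketch: the gripper’s horizontal position comes from where the base drives and turns plus how far the telescoping arm extends, rather than from extra arm joints. The frame convention, offsets, and function name are illustrative assumptions, not Hello Robot’s actual interface.

```python
import math

def gripper_position(base_x, base_y, base_theta, lift_z, arm_extension, wrist_offset=0.0):
    """Approximate world-frame gripper position for a Stretch-like robot.

    The telescoping arm is assumed to extend out to the robot's side, perpendicular
    to its driving direction, so translating and rotating the base supplies the
    horizontal degrees of freedom a conventional multi-DoF arm would otherwise need.
    """
    reach = arm_extension + wrist_offset
    # Rotate the side-mounted reach vector into the world frame.
    gx = base_x + reach * math.cos(base_theta - math.pi / 2)
    gy = base_y + reach * math.sin(base_theta - math.pi / 2)
    gz = lift_z                               # vertical position comes from the lift
    return gx, gy, gz

# Driving and turning the base (base_x, base_y, base_theta) is how the gripper moves
# in the horizontal plane; the arm itself only extends and retracts.
print(gripper_position(base_x=1.0, base_y=0.0, base_theta=0.0,
                       lift_z=0.8, arm_extension=0.4))
```

The design trade-off described above falls out of this directly: the heavy actuators stay in the base and mast, and the arm only has to be strong enough to telescope and lift.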

Stretch has been relentlessly optimized to be the absolutely minimum robot to do mobile manipulation in a home or workplace environment. In practice, this meant figuring out exactly what it was absolutely necessary for Stretch to be able to do. With an emphasis on manipulation, that meant defining the workspace of the robot, or what areas it’s able to usefully reach. “That was one thing we really had to push hard on,” says Edsinger. “Reachability.” He explains that reachability and a small mobile base tend not to go together, because robot arms (which tend to weigh a lot) can cause a small base to tip, especially if they’re moving while holding a payload. At the same time, Stretch needed to be able to access both countertops and the floor, while being able to reach out far enough to hand people things without having to be right next to them. To come up with something that could meet all those requirements, Edsinger and Kemp set out to reinvent the robot arm.

Stretch’s key innovation: a stretchable arm
The design they came up with is rather ingenious in its simplicity and how well it works. Edsinger explains that the arm consists of five telescoping links: one fixed and four moving. They are constructed of custom carbon fiber, and are driven by a single motor, which is attached to the robot’s vertical pole. The strong, lightweight structure allows the arm to extend over half a meter and hold up to 1.5 kg. Although the company has a patent pending for the design, Edsinger declined to say whether the links are driven by a belt, cables, or gears. “We don’t want to disclose too much of the secret sauce [with regard to] the drive mechanism.” He added that the arm was “one of the most significant engineering challenges on the robot in terms of getting the desired reach, compactness, precision, smoothness, force sensitivity, and low cost to all happily coexist.”

Photo: Hello Robot

Stretch’s arm consists of five telescoping links constructed of custom carbon fiber and driven by a single motor, which is attached to the robot’s vertical pole, minimizing weight and inertia. The arm has a reach of over half a meter and can hold up to 1.5 kg.

Another interesting feature of Stretch is its interface with the world—its gripper. There are countless different gripper designs out there, each and every one of which is the best at gripping some particular subset of things. But making a generalized gripper for all of the stuff that you’d find in a home is exceptionally difficult. Ideally, you’d want some sort of massive experimental test program where thousands and thousands of people test out different gripper designs in their homes for long periods of time and then tell you which ones work best. Obviously, that’s impractical for a robotics startup, but Kemp realized that someone else was already running the study for him: Amazon.

“I had this idea that there are these assistive grabbers that people with disabilities use to grasp objects in the real world,” he told us. Kemp went on Amazon’s website and looked at the top 10 grabbers and the reviews from thousands of users. He then bought a bunch of different ones and started testing them. “This one [Stretch’s gripper], I almost didn’t order it, it was such a weird looking thing,” he says. “But it had great reviews on Amazon, and oh my gosh, it just blew away the other grabbers. And I was like, that’s it. It just works.”

Stretch’s teleoperated and autonomous capabilities
As with any robot intended to be useful outside of a structured environment, hardware is only part of the story, and arguably not even the most important part. In order for Stretch to be able to operate out from under the supervision of a skilled roboticist, it has to be either easy to control, or autonomous. Ideally, it’s both, and that’s what Hello Robot is working towards, although things didn’t start out that way, Kemp explains. “From a minimalist standpoint, we began with the notion that this would be a teleoperated robot. But in the end, you just don’t get the real power of the robot that way, because you’re tied to a person doing stuff. As much as we fought it, autonomy really is a big part of the future for this kind of system.”

Here’s a look at some of Stretch’s teleoperated capabilities. We’re told that Stretch is very easy to get going right out of the box, although this teleoperation video from Hello Robot looks like it’s got a skilled and experienced user in the loop:

For such a low-cost platform, the autonomy (even at this early stage) is particularly impressive:

Since it’s not entirely clear from the video exactly what’s autonomous, here’s a brief summary of a couple of the more complex behaviors that Kemp sent us:

Object grasping: Stretch uses its 3D camera to find the nearest flat surface using a virtual overhead view. It then segments significant blobs on top of the surface. It selects the largest blob in this virtual overhead view and fits an ellipse to it. It then generates a grasp plan that makes use of the center of the ellipse and the major and minor axes. Once it has a plan, Stretch orients its gripper, moves to the pre-grasp pose, moves to the grasp pose, closes its gripper based on the estimated object width, lifts up, and retracts.
Mapping, navigating, and reaching to a 3D point: These demonstrations all use FUNMAP (Fast Unified Navigation, Manipulation and Planning). It’s all novel custom Python code. Even a single head scan performed by panning the 3D camera around can result in a very nice 3D representation of Stretch’s surroundings that includes the nearby floor. This is surprisingly unusual for robots, which often have their cameras too low to see many interesting things in a human environment. While mapping, Stretch selects where to scan next in a non-trivial way that considers factors such as the quality of previous observations, expected new observations, and navigation distance. The plan that Stretch uses to reach the target 3D point has been optimized for navigation and manipulation. For example, it finds a final robot pose that provides a large manipulation workspace for Stretch, which must consider nearby obstacles, including obstacles on the ground.
Object handover: This is a simple demonstration of object handovers. Stretch performs Cartesian motions to move its gripper to a body-relative position using a good motion heuristic, which is to extend the arm as the last step. These simple motions work well due to the design of Stretch. It still surprises me how well it moves the object to comfortable places near my body, and how unobtrusive it is. The goal point is specified relative to a 3D frame attached to the person’s mouth estimated using deep learning models (shown in the RViz visualization video). Specifically, Stretch targets handoff at a 3D point that is 20 cm below the estimated position of the mouth and 25 cm away along the direction of reaching.
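To make the grasping pipeline described above a bit more concrete, here is a minimal sketch of the ellipse-fitting step: segment blobs above the nearest flat surface in a virtual overhead view, keep the largest, fit an ellipse, and derive a grasp center, orientation, and aperture from its center and axes. This is not Hello Robot’s FUNMAP code; it assumes OpenCV and a precomputed binary overhead mask of points above the surface, and the function name and scale parameter are hypothetical.

```python
import cv2
import numpy as np

def plan_overhead_grasp(above_surface_mask, meters_per_pixel):
    """Sketch of an overhead grasp plan from a binary mask of points above a flat surface.

    Returns the grasp center (pixels), a gripper yaw (degrees), and an aperture
    estimate (meters) taken from the blob's minor axis. The mask and scale are
    assumed to come from an earlier 3D-camera / plane-fitting step.
    """
    contours, _ = cv2.findContours(above_surface_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)      # keep the largest blob
    if len(largest) < 5:                              # fitEllipse needs at least 5 points
        return None
    (cx, cy), (ax1, ax2), angle_deg = cv2.fitEllipse(largest)
    minor_px = min(ax1, ax2)                          # close the gripper across the minor axis
    return {"center_px": (cx, cy),
            "yaw_deg": angle_deg,                     # align the gripper with the ellipse axes
            "aperture_m": minor_px * meters_per_pixel}

# Illustrative use with a synthetic elliptical blob at 2 mm per pixel.
mask = np.zeros((200, 200), dtype=np.uint8)
cv2.ellipse(mask, (100, 100), (40, 20), 30, 0, 360, 255, -1)
print(plan_overhead_grasp(mask, meters_per_pixel=0.002))
```

The handover behavior in the last item is even simpler geometry: offset a target point a fixed distance below and in front of the estimated mouth position and reach to it.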

Many of these autonomous capabilities come directly from Kemp’s lab, and the demo code is available for anyone to use. (Hello Robot says all of Stretch’s software is open source.)

Photo: Hello Robot

Hello Robot co-founder and CEO Aaron Edsinger says Stretch is designed to work with people in homes and workplaces and can be teleoperated to do a variety of tasks, including picking up toys, removing laundry from a dryer, and playing games with kids.

As of right now, Stretch is very much a research platform. You’re going to see it in research labs doing research things, and hopefully in homes and commercial spaces as well, but still under the supervision of professional roboticists. As you may have guessed, though, Hello Robot’s vision is a bit broader than that. “The impact we want to have is through robots that are helpful to people in society,” Edsinger says. “We think primarily in the home context, but it could be in healthcare, or in other places. But we really want to have our robots be impactful, and useful. To us, useful is exciting.” Adds Kemp: “I have a personal bias, but we’d really like this technology to benefit older adults and caregivers. Rather than creating a specialized assistive device, we want to eventually create an inexpensive consumer device for everyone that does lots of things.”

Neither Edsinger nor Kemp would say much more on this for now, and they were very explicit about why—they’re being deliberately cautious about raising expectations, having seen what’s happened to some other robotics companies over the past few years. Without VC funding (Hello Robot is currently bootstrapping itself into existence), Stretch is being sold entirely on its own merits. So far, it seems to be working. Stretch robots are already in a half dozen research labs, and we expect that with today’s announcement, we’ll start seeing them much more frequently.

This article appears in the October 2020 print issue as “A Robot That Keeps It Simple.”


#437769 Q&A: Facebook’s CTO Is at War With ...

Photo: Patricia de Melo Moreira/AFP/Getty Images

Facebook chief technology officer Mike Schroepfer leads the company’s AI and integrity efforts.

Facebook’s challenge is huge. Billions of pieces of content—short and long posts, images, and combinations of the two—are uploaded to the site daily from around the world. And any tiny piece of that—any phrase, image, or video—could contain so-called bad content.

In its early days, Facebook relied on simple computer filters to identify potentially problematic posts by their words, such as those containing profanity. These automatically filtered posts, as well as posts flagged by users as offensive, went to humans for adjudication.

In 2015, Facebook started using artificial intelligence to cull images that contained nudity, illegal goods, and other prohibited content; those images identified as possibly problematic were sent to humans for further review.

By 2016, more offensive photos were reported by Facebook’s AI systems than by Facebook users (and that is still the case).

In 2018, Facebook CEO Mark Zuckerberg made a bold proclamation: He predicted that within five or ten years, Facebook’s AI would not only look for profanity, nudity, and other obvious violations of Facebook’s policies. The tools would also be able to spot bullying, hate speech, and other misuse of the platform, and put an immediate end to them.

Today, automated systems using algorithms developed with AI scan every piece of content between the time when a user completes a post and when it is visible to others on the site—just fractions of a second. In most cases, a violation of Facebook’s standards is clear, and the AI system automatically blocks the post. In other cases, the post goes to human reviewers for a final decision, a workforce that includes 15,000 content reviewers and another 20,000 employees focused on safety and security, operating out of more than 20 facilities around the world.

In the first quarter of this year, Facebook removed or took other action (like appending a warning label) on more than 9.6 million posts involving hate speech, 8.6 million involving child nudity or exploitation, almost 8 million posts involving the sale of drugs, 2.3 million posts involving bullying and harassment, and tens of millions of posts violating other Facebook rules.

Right now, Facebook has more than 1,000 engineers working on further developing and implementing what the company calls “integrity” tools. Using these systems to screen every post that goes up on Facebook, and doing so in milliseconds, is sucking up computing resources. Facebook chief technology officer Mike Schroepfer, who is heading up Facebook’s AI and integrity efforts, spoke with IEEE Spectrum about the team’s progress on building an AI system that detects bad content.

Since that discussion, Facebook’s policies around hate speech have come under increasing scrutiny, with particular attention on divisive posts by political figures. A group of major advertisers in June announced that they would stop advertising on the platform while reviewing the situation, and civil rights groups are putting pressure on others to follow suit until Facebook makes policy changes related to hate speech and groups that promote hate, misinformation, and conspiracies.

Facebook CEO Mark Zuckerberg responded with news that Facebook will widen the category of what it considers hateful content in ads. Now the company prohibits claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity, or immigration status are a threat to the physical safety, health, or survival of others. The policy change also aims to better protect immigrants, migrants, refugees, and asylum seekers from ads suggesting these groups are inferior or expressing contempt. Finally, Zuckerberg announced that the company will label some problematic posts by politicians and government officials as content that violates Facebook’s policies.

However, civil rights groups say that’s not enough. And an independent audit released in July also said that Facebook needs to go much further in addressing civil rights concerns and disinformation.

Schroepfer indicated that Facebook’s AI systems are designed to quickly adapt to changes in policy. “I don’t expect considerable technical changes are needed to adjust,” he told Spectrum.

This interview has been edited and condensed for clarity.

IEEE Spectrum: What are the stakes of content moderation? Is this an existential threat to Facebook? And is it critical that you deal well with the issue of election interference this year?

Schroepfer: It’s probably existential; it’s certainly massive. We are devoting a tremendous amount of our attention to it.

The idea that anyone could meddle in an election is deeply disturbing and offensive to all of us here, just as people and citizens of democracies. We don’t want to see that happen anywhere, and certainly not on our watch. So whether it’s important to the company or not, it’s important to us as people. And I feel a similar way on the content-moderation side.

There are not a lot of easy choices here. The only way to prevent people, with certainty, from posting bad things is to not let them post anything. We can take away all voice and just say, “Sorry, the Internet’s too dangerous. No one can use it.” That will certainly get rid of all hate speech online. But I don’t want to end up in that world. And there are variants of that world that various governments are trying to implement, where they get to decide what’s true or not, and you as a person don’t. I don’t want to get there either.

My hope is that we can build a set of tools that make it practical for us to do a good enough job, so that everyone is still excited about the idea that anyone can share what they want, and so that Facebook is a safe and reasonable place for people to operate in.

Spectrum: You joined Facebook in 2008, before AI was part of the company’s toolbox. When did that change? When did you begin to think that AI tools would be useful to Facebook?

Schroepfer: Ten years ago, AI wasn’t commercially practical; the technology just didn’t work very well. In 2012, there was one of those moments that a lot of people point to as the beginning of the current revolution in deep learning and AI. A computer-vision model—a neural network—was trained using what we call supervised training, and it turned out to be better than all the existing models.

Spectrum: How is that training done, and how did computer-vision models come to Facebook?

Image: Facebook

Just Broccoli? Facebook’s image analysis algorithms can tell the difference between marijuana [left] and tempura broccoli [right] better than some humans.

Schroepfer: Say I take a bunch of photos and I have people look at them. If they see a photo of a cat, they put a text label that says cat; if it’s one of a dog, the text label says dog. If you build a big enough data set and feed that to the neural net, it learns how to tell the difference between cats and dogs.

Prior to 2012, it didn’t work very well. And then in 2012, there was this moment where it seemed like, “Oh wow, this technique might work.” And a few years later we were deploying that form of technology to help us detect problematic imagery.
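A minimal sketch of the supervised-training loop Schroepfer describes follows: show a network labeled examples and nudge its weights to reduce the error. The tiny model, random stand-in data, and hyperparameters are all placeholders, not anything Facebook actually uses.

```python
import torch
from torch import nn

# Tiny stand-in for a labeled photo set: random images with labels 0 = cat, 1 = dog.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 2, (64,))

model = nn.Sequential(                       # a deliberately small convolutional net
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),                         # two outputs: cat vs. dog
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Supervised training: compare predictions against the human-provided labels,
# backpropagate the error, and repeat.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

The same recipe, scaled up to far larger models and data sets, is what made the 2012 result and the later problematic-imagery detectors possible.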

Spectrum: Do your AI systems work equally well on all types of prohibited content?

Schroepfer: Nudity was technically easiest. I don’t need to understand language or culture to understand that this is either a naked human or not. Violence is a much more nuanced problem, so it was harder technically to get it right. And with hate speech, not only do you have to understand the language, it may be very contextual, even tied to recent events. A week before the Christchurch shooting [New Zealand, 2019], saying “I wish you were in the mosque” probably doesn’t mean anything. A week after, that might be a terrible thing to say.

Spectrum: How much progress have you made on hate speech?

Schroepfer: AI, in the first quarter of 2020, proactively detected 88.8 percent of the hate-speech content we removed, up from 80.2 percent in the previous quarter. In the first quarter of 2020, we took action on 9.6 million pieces of content for violating our hate-speech policies.

Image: Facebook. Off Label: Sometimes image analysis isn’t enough to determine whether a posted picture violates the company’s policies. In considering these candy-colored vials of marijuana, for example, the algorithms can look at any accompanying text and, if necessary, comments on the post.

Spectrum: It sounds like you’ve expanded beyond tools that analyze images and are also using AI tools that analyze text.

Schroepfer: AI started off as very siloed. People worked on language, people worked on computer vision, people worked on video. We’ve put these things together—in production, not just as research—into multimodal classifiers.

[Schroepfer shows a photo of a pan of Rice Krispies treats, with text referring to it as a “potent batch”] This is a case in which you have an image, and then you have the text on the post. This looks like Rice Krispies. On its own, this image is fine. You put the text together with it in a bigger model; that can then understand what’s going on. That didn’t work five years ago.
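
A rough idea of what such a multimodal classifier can look like is sketched below. The tiny image and text encoders and the two-class output are illustrative assumptions, not Facebook’s production architecture; the point is simply that the image and the text are fused before a single decision is made.

```python
# Hypothetical sketch of a multimodal classifier in the spirit of the
# "potent batch" example: an image encoder and a text encoder produce
# embeddings that are fused before one policy decision is made.
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, num_classes=2):
        super().__init__()
        # Image branch: tiny CNN standing in for a real vision backbone.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Text branch: embedding plus mean pooling standing in for a transformer.
        self.text_embedding = nn.Embedding(vocab_size, embed_dim)
        # Fusion head: decides on the combined signal.
        self.head = nn.Linear(2 * embed_dim, num_classes)

    def forward(self, image, token_ids):
        img_vec = self.image_encoder(image)
        txt_vec = self.text_embedding(token_ids).mean(dim=1)
        fused = torch.cat([img_vec, txt_vec], dim=-1)
        return self.head(fused)  # e.g., [benign, violating]

model = MultimodalClassifier()
image = torch.randn(1, 3, 64, 64)             # stand-in for the photo of the pan
token_ids = torch.randint(0, 10000, (1, 12))  # stand-in tokens for the caption text
scores = model(image, token_ids)
```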

Spectrum: Today, every post that goes up on Facebook is immediately checked by automated systems. Can you explain that process?

Image: Facebook. Bigger Picture: Identifying hate speech is often a matter of context. Neither the text nor the photo in this post is hateful standing alone, but putting them together tells a different story.

Schroepfer: You upload an image and you write some text underneath it, and the systems look at both the image and the text to try to see which, if any, policies it violates. Those decisions are based on our Community Standards. It will also look at other signals on the posts, like the comments people make.

It happens relatively instantly, though there may be times things happen after the fact. Maybe you uploaded a post that had misinformation in it, and at the time you uploaded it, we didn’t know it was misinformation. The next day we fact-check something and scan again; we may find your post and take it down. As we learn new things, we’re going to go back through and look for violations of what we now know to be a problem. Or, as people comment on your post, we might update our understanding of it. If people are saying, “That’s terrible,” or “That’s mean,” or “That looks fake,” those comments may be an interesting signal.
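
The check-and-recheck flow he describes can be illustrated with a toy sketch like the one below. The in-memory store, the check function, and the comment keyword are hypothetical stand-ins for the real policies and signals, which are far more involved.

```python
# Rough sketch of checking a post at upload time and re-scanning later,
# once fact-checkers identify a claim as misinformation.
known_misinformation = set()  # claims later identified by fact-checkers
stored_posts = []             # posts already on the platform

def check(post: dict) -> str:
    if post["text"] in known_misinformation:
        return "remove"
    if any("that looks fake" in c.lower() for c in post.get("comments", [])):
        return "send_to_review"  # comments act as an extra signal
    return "allow"

def on_upload(post: dict) -> str:
    stored_posts.append(post)
    return check(post)  # scanned relatively instantly

def on_new_fact_check(claim: str) -> list:
    known_misinformation.add(claim)
    # Re-scan existing posts now that we know more than we did at upload time.
    return [p for p in stored_posts if check(p) == "remove"]

on_upload({"text": "vote on wednesday", "comments": []})
newly_removed = on_new_fact_check("vote on wednesday")
```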

Spectrum: How is Facebook applying its AI tools to the problem of election interference?

Schroepfer: I would split election interference into two categories. There are times when you’re going after the content, and there are times you’re going after the behavior or the authenticity of the person.

On content, if you’re sharing misinformation, saying, “It’s super Wednesday, not super Tuesday, come vote on Wednesday,” that’s a problem whether you’re an American sitting in California or a foreign actor.

Other times, people create a series of Facebook pages pretending they’re Americans, but they’re really a foreign entity. That is a problem on its own, even if all the content they’re sharing completely meets our Community Standards. The problem there is that you have a foreign government running an information operation.

There, you need different tools. What you’re trying to do is put pieces together, to say, “Wait a second. All of these pages—Martians for Justice, Moonlings for Justice, and Venusians for Justice—are all run by an administrator with an IP address that’s outside the United States.” So they’re all connected, even though they’re pretending to not be connected. That’s a very different problem than me sitting in my office in Menlo Park [Calif.] sharing misinformation.

I’m not going to go into lots of technical detail, because this is an adversarial area. The fundamental problem you’re trying to solve is that there’s one entity coordinating the activity of a bunch of things that look like they’re not all one thing. So this is a series of Instagram accounts, or a series of Facebook pages, or a series of WhatsApp accounts, and they’re pretending to be totally different things. We’re looking for signals that these things are related in some way. And we’re looking through the graph [what Facebook calls its map of relationships between users] to understand the properties of this network.
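
Stripped of the adversarial detail Schroepfer won’t discuss, the clustering idea can be pictured as simple grouping on a shared signal. The page names come from his example; the IP addresses and the single-signal grouping are illustrative assumptions, not how Facebook’s system actually works.

```python
# Illustrative sketch: pages that present as unrelated but share an
# operational signal, such as an administrator IP, get clustered together.
from collections import defaultdict

pages = [
    {"name": "Martians for Justice",  "admin_ip": "203.0.113.7"},
    {"name": "Moonlings for Justice", "admin_ip": "203.0.113.7"},
    {"name": "Venusians for Justice", "admin_ip": "203.0.113.7"},
    {"name": "Menlo Park Bake Sale",  "admin_ip": "198.51.100.4"},
]

clusters = defaultdict(list)
for page in pages:
    clusters[page["admin_ip"]].append(page["name"])

# Any cluster with several "independent" pages behind one shared signal is a
# candidate for review as a coordinated network.
suspicious = {ip: names for ip, names in clusters.items() if len(names) > 1}
print(suspicious)
```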

Spectrum: What cutting-edge AI tools and methods have you been working on lately?

Schroepfer: Supervised learning, with humans setting up the instruction process for the AI systems, is amazingly effective. But it has a very obvious flaw: the speed at which you can develop these things is limited by how fast you can curate the data sets. If you’re dealing in a problem domain where things change rapidly, you have to rebuild a new data set and retrain the whole thing.

Self-supervision is inspired by the way people learn, by the way kids explore the world around them. To get computers to do it themselves, we take a bunch of raw data and build a way for the computer to construct its own tests. For language, you scan a bunch of Web pages, and the computer builds a test where it takes a sentence, eliminates one of the words, and figures out how to predict what word belongs there. And because it created the test, it actually knows the answer. I can use as much raw text as I can find and store because it’s processing everything itself and doesn’t require us to sit down and build the information set. In the last two years there has been a revolution in language understanding as a result of AI self-supervised learning.
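
As a toy illustration of that self-built test, the sketch below masks one word in a raw sentence and trains a small CBOW-style model in PyTorch to predict it; the label comes from the text itself, with no human curation. The two sentences and the tiny architecture are placeholders, not anything Facebook uses.

```python
# Minimal sketch of a self-supervised test: hide one word in a raw sentence
# and train a model to predict the hidden word from its context.
import random
import torch
import torch.nn as nn

sentences = [
    "the cat sat on the mat".split(),
    "people share photos and text online".split(),
]
vocab = sorted({w for s in sentences for w in s})
word_to_id = {w: i for i, w in enumerate(vocab)}

embed_dim = 32
embeddings = nn.Embedding(len(vocab), embed_dim)
predictor = nn.Linear(embed_dim, len(vocab))  # scores every vocabulary word
optimizer = torch.optim.Adam(
    list(embeddings.parameters()) + list(predictor.parameters()), lr=1e-2
)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    sentence = random.choice(sentences)
    masked_pos = random.randrange(len(sentence))
    target = torch.tensor([word_to_id[sentence[masked_pos]]])

    # The "test" is built from the raw sentence itself: the remaining context
    # words predict the word that was removed.
    context_ids = torch.tensor(
        [word_to_id[w] for i, w in enumerate(sentence) if i != masked_pos]
    )
    context_vec = embeddings(context_ids).mean(dim=0, keepdim=True)
    logits = predictor(context_vec)

    loss = loss_fn(logits, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```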

Spectrum: What else are you excited about?

Schroepfer: What we’ve been working on over the last few years is multilingual understanding. Usually, when I’m trying to figure out, say, whether something is hate speech or not, I have to go through the whole process of training the model in every language. I have to do that one time for every language. When you make a post, the first thing we have to figure out is what language your post is in. “Ah, that’s Spanish. So send it to the Spanish hate-speech model.”

We’ve started to build a multilingual model—one box where you can feed in text in 40 different languages and it determines whether it’s hate speech or not. This is way more effective and easier to deploy.

To geek out for a second, just the idea that you can build a model that understands a concept in multiple languages at once is crazy cool. And it not only works for hate speech, it works for a variety of things.

When we started working on this multilingual model years ago, it performed worse than every single individual model. Now, it not only works as well as the English model, but when you get to the languages where you don’t have enough data, it’s so much better. This rapid progress is very exciting.
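
One way to picture a “one box for every language” model is the byte-level sketch below: because UTF-8 bytes cover every script, the same model accepts English, Spanish, or Hindi without a language-identification step in front of it. The byte embedding and tiny head are illustrative assumptions, not Facebook’s actual multilingual architecture.

```python
# Hypothetical sketch of a single multilingual classifier: bytes in,
# a violation score out, with no per-language routing.
import torch
import torch.nn as nn

class MultilingualHateSpeechModel(nn.Module):
    """One box for every language."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.byte_embedding = nn.Embedding(256, embed_dim)  # UTF-8 bytes cover all scripts
        self.head = nn.Linear(embed_dim, 2)                 # [benign, hate speech]

    def forward(self, text: str) -> torch.Tensor:
        byte_ids = torch.tensor(list(text.encode("utf-8")))
        pooled = self.byte_embedding(byte_ids).mean(dim=0)
        return self.head(pooled)

model = MultilingualHateSpeechModel()
# The same (untrained) model accepts posts in different languages directly.
for post in ["an english post", "una publicación en español", "एक हिंदी पोस्ट"]:
    scores = model(post)
```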

Spectrum: How do you move new AI tools from your research labs into operational use?

Schroepfer: Engineers trying to make the next breakthrough will often say, “Cool, I’ve got a new thing and it achieved state-of-the-art results on machine translation.” And we say, “Great. How long does it take to run in production?” They say, “Well, it takes 10 seconds for every sentence to run on a CPU.” And we say, “It’ll eat our whole data center if we deploy that.” So we take that state-of-the-art model and we make it 10 or a hundred or a thousand times more efficient, maybe at the cost of a little bit of accuracy. So it’s not as good as the state-of-the-art version, but it’s something we can actually put into our data centers and run in production.
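
Schroepfer doesn’t name the compression techniques involved; knowledge distillation is one common way to trade a little accuracy for a much cheaper model, and the sketch below shows the basic idea with toy teacher and student networks.

```python
# Sketch of knowledge distillation: a small "student" network learns to
# reproduce the outputs of a large, accurate but expensive "teacher".
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 2))  # accurate but slow
student = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 2))      # cheap enough for production

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
inputs = torch.randn(256, 128)  # stand-in for post embeddings

for step in range(100):
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(inputs), dim=-1)
    student_log_probs = F.log_softmax(student(inputs), dim=-1)
    # The student learns to match the teacher's judgments at a fraction
    # of the compute cost, typically giving up a little accuracy.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```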

Spectrum: What’s the role of the humans in the loop? Is it true that Facebook currently employs 35,000 moderators?

Schroepfer: Yes. Right now our goal is not to reduce that. Our goal is to do a better job catching bad content. People often think that the end state will be a fully automated system. I don’t see that world coming anytime soon.

As automated systems get more sophisticated, they take more and more of the grunt work away, freeing up the humans to work on the really gnarly stuff where you have to spend an hour researching.

We also use AI to give our human moderators power tools. Say I spot this new meme that is telling everyone to vote on Wednesday rather than Tuesday. I have a tool in front of me that says, “Find variants of that throughout the system. Find every photo with the same text, find every video that mentions this thing and kill it in one shot.” That’s far better than my finding this one picture while a bunch of other people upload that misinformation in different forms.
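
That “find every variant” power tool can be pictured with a toy fingerprinting sketch like the one below. Production systems rely on perceptual hashes and learned embeddings across photos and video; this version only normalizes the overlaid text and compares exact hashes, and the example strings are hypothetical.

```python
# Toy sketch: once a moderator flags one instance of a bad meme, match its
# normalized text fingerprint against other uploads.
import hashlib
import re

def fingerprint(text: str) -> str:
    normalized = re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()
    return hashlib.sha256(normalized.encode()).hexdigest()

flagged = fingerprint("Vote on WEDNESDAY, not Tuesday!")

uploads = [
    "vote on wednesday not tuesday",           # same meme, different casing
    "Vote on Wednesday, not Tuesday!!!",       # same meme, extra punctuation
    "Polls are open Tuesday from 7am to 8pm",  # unrelated post
]

matches = [u for u in uploads if fingerprint(u) == flagged]
```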

Another important aspect of AI is that anything I can do to prevent a person from having to look at terrible things is time well spent. Whether it’s a person employed by us as a moderator or a user of our services, looking at these things is a terrible experience. If I can build systems that take the worst of the worst, the really graphic violence, and deal with that in an automated fashion, that’s worth a lot to me.
