#437859 We Can Do Better Than Human-Like Hands ...

One strategy for designing robots that are capable in anthropomorphic environments is to make the robots themselves as anthropomorphic as possible. It makes sense—for example, there are stairs all over the place because humans have legs, and legs are good at stairs, so if we give robots legs like humans, they’ll be good at stairs too, right? We also see this tendency when it comes to robotic grippers, because robots need to grip things that have been optimized for human hands.

Despite some amazing robotic hands inspired by the biology of our own human hands, there are also opportunities for creativity in gripper designs that do things human hands are not physically capable of. At ICRA 2020, researchers from Stanford University presented a paper on the design of a robotic hand that has fingers made of actuated rollers, allowing it to manipulate objects in ways that would tie your fingers into knots.

While it’s got a couple of fingers, this prototype “roller grasper” hand tosses anthropomorphic design out the window in favor of unique methods of in-hand manipulation. The roller grasper does share some features with other grippers designed for in-hand manipulation using active surfaces (like conveyor belts embedded in fingers), but what’s new and exciting here is that those articulated active roller fingertips (or whatever non-anthropomorphic name you want to give them) provide active surfaces that are steerable. This means that the hand can grasp objects and rotate them without having to resort to complex sequences of finger repositioning, which is how humans do it.

Photo: Stanford University

Picking something flat off of a table, always tricky for robotic hands (and sometimes for human hands as well), is a breeze thanks to the fingertip rollers.

Each of the hand’s fingers has three actuated degrees of freedom, which open up several different ways in which objects can be grasped and manipulated. The motion of an object in this gripper isn’t quite holonomic, meaning that it can’t arbitrarily reorient things without sometimes going through intermediate steps. And it’s also not compliant in the way that many other grippers are, which limits some types of grasps. This particular design probably won’t replace every gripper out there, but it’s skilled at some specific kinds of manipulation in a way that makes it unique.

We should be clear that it’s not the intent of this paper (or of this article!) to belittle five-fingered robotic hands—the point is that there are lots of things that you can do with totally different hand designs, and just because humans use one kind of hand doesn’t mean that robots need to do the same if they want to match (or exceed) some specific human capabilities. If we could make robotic hands with five fingers that had all of the actuation and sensing and control that our own hands do, that would be amazing, but it’s probably decades away. In the meantime, there are plenty of different designs to explore.

And speaking of exploring different designs, these same folks are already at work on version two of their hand, which replaces the fingertip rollers with fingertip balls:

For more on this new version of the hand (among other things), we spoke with lead author Shenli Yuan via email. And the ICRA page is here if you have questions of your own.

IEEE Spectrum: Human hands are often seen as the standard for manipulation. When adding degrees of freedom that human hands don’t have (as in your work) can make robotic hands more capable than ours in many ways, do you think we should still think of human hands as something to try and emulate?

Shenli Yuan: Yes, definitely. Not only because human hands have great manipulation capability, but because we’re constantly surrounded by objects that were designed and built specifically to be manipulated by the human hand. Anthropomorphic robot hands are still worth investigating, and still have a long way to go before they truly match the dexterity of a human hand. The design we came up with is an exploration of what unique capabilities may be achieved if we are not bound by the constraints of anthropomorphism, and what a biologically impossible mechanism may achieve in robotic manipulation. In addition, for lots of tasks, it isn’t necessarily optimal to try and emulate the human hand. Perhaps in 20 to 50 years when robot manipulators are much better, they won’t look like the human hand that much. The design constraints for robotics and biology have points in common (like mechanical wear and finite tendon stiffness) but also major differences (like continuous rotation for robots, and fewer heat-dissipation problems for humans).

“For lots of tasks, it isn’t necessarily optimal to try and emulate the human hand. Perhaps in 20 to 50 years when robot manipulators are much better, they won’t look like the human hand that much.”
—Shenli Yuan, Stanford University

What are some manipulation capabilities of human hands that are the most difficult to replicate with your system?

There are a few things that come to mind. It cannot perform a power grasp (using the whole hand for grasping, as opposed to a pinch grasp that uses only the fingertips), which is something that can be easily done by human hands. It cannot move or rotate objects instantaneously in arbitrary directions or about arbitrary axes, though the human hand is somewhat limited in this respect as well. It also cannot perform finger gaiting. That being said, these limitations exist largely because this grasper only has 9 degrees of freedom, as opposed to the human hand, which has more than 20. We don’t think of this grasper as a replacement for anthropomorphic hands, but rather as a way to provide unique capabilities without all of the complexity associated with a highly actuated, humanlike hand.

What’s the most surprising or impressive thing that your hand is able to do?

The most impressive feature is that it can rotate objects continuously, which is typically difficult or inefficient for humanlike robot hands. Something really surprising was that we put most of our energy into the design and analysis of the grasper, and the control strategy we implemented for demonstrations is very simple. This simple control strategy works surprisingly well with very little tuning or trial-and-error.

With this many degrees of freedom, how complicated is it to get the hand to do what you want it to do?

The number of degrees of freedom is actually not what makes controlling it difficult. Most of the difficulties we encountered were actually due to the rolling contact between the rollers and the object during manipulation. The rolling behavior can be viewed as constantly breaking and re-establishing contacts between the rollers and the object; this very dynamic behavior introduces uncertainty into controlling our grasper. Specifically, it was difficult to estimate the velocity of each contact point with the object, which changes based on object and finger position, object shape (especially curvature), and whether the contact is slipping.
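Yuan's point about contact velocities can be made concrete with a toy model. In a planar, no-slip simplification (our assumption, not the paper's formulation), the surface velocity a roller imparts at its contact point is just its angular speed times its radius, pointed along a direction set by the roller's steering angle:

```python
import math

def roller_contact_velocity(omega, radius, pivot_angle):
    """Surface velocity a spinning roller imparts at its contact point,
    resolved into 2D by the roller's pivot (steering) angle.
    Planar, no-slip simplification; the names here are ours, not the paper's."""
    speed = omega * radius  # linear speed of the roller surface
    return (speed * math.cos(pivot_angle), speed * math.sin(pivot_angle))

# A roller spinning at 2 rad/s with a 0.5-unit radius, steered straight
# ahead, drives the contact at 1 unit/s along x:
print(roller_contact_velocity(2.0, 0.5, 0.0))  # (1.0, 0.0)
```

In the real grasper this velocity also depends on the object's local curvature and on whether the contact is actually slipping, which is exactly what made the estimation hard.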

What more can you tell us about Roller Grasper V2?

Roller Grasper V2 has spherical rollers, while V1 has cylindrical rollers. We realized that cylindrical rollers are very good at manipulating objects when the rollers and the object form line contacts, but the grasp can be unstable when the grasp geometry doesn’t allow for a line contact between each roller and the grasped object. Spherical rollers solve that problem by allowing predictable points of contact regardless of how a surface is oriented.

The parallelogram mechanism of Roller Grasper V1 offsets the pivot axis a bit from the center of the roller, which made our control and analysis more challenging. The kinematics of Roller Grasper V2 are simpler: the base joint intersects with the finger, which intersects with the pivot joint, and the pivot joint intersects with the roller joint. Its symmetrical design and simpler kinematics make our control and analysis a lot more straightforward. Roller Grasper V2 also has a larger pivot range of 180 degrees, while V1 is limited to 90 degrees.
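The intersecting-axis arrangement Yuan describes is what keeps V2's kinematics simple: each joint is a pure rotation about an axis through a common point, so the roller's orientation is just a product of rotation matrices. Here's a minimal sketch; the specific axis assignments (base rotation about z, pivot about x, roller spin axis starting along y) are our illustrative assumptions, not taken from the paper:

```python
import math

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def roller_axis(base_angle, pivot_angle):
    """Direction of the roller's spin axis after the base and pivot
    rotations, for intersecting-axis kinematics. Axis choices (base
    about z, pivot about x, spin axis initially along y) are assumed."""
    r = matmul(rot_z(base_angle), rot_x(pivot_angle))
    return [row[1] for row in r]  # image of the y unit vector
```

With both joints at zero the spin axis stays along y; pivoting 90 degrees points it along z. Because all axes pass through one point, there is no offset term to track, which is the simplification Yuan credits for easier control and analysis.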

In terms of control, we implemented more sophisticated control strategies (including a hand-crafted control strategy and an imitation learning based strategy) for the grasper to perform autonomous in-hand manipulation.

“Design of a Roller-Based Dexterous Hand for Object Grasping and Within-Hand Manipulation,” by Shenli Yuan, Austin D. Epps, Jerome B. Nowak, and J. Kenneth Salisbury from Stanford University is being presented at ICRA 2020.


Posted in Human Robots

#437851 Boston Dynamics’ Spot Robot Dog ...

Boston Dynamics has been fielding questions about when its robots are going to go on sale and how much they’ll cost for at least a dozen years now. I can say this with confidence, because that’s how long I’ve been a robotics journalist, and I’ve been pestering them about it the entire time. But it’s only relatively recently that the company started to make a concerted push away from developing robots exclusively for the likes of DARPA into platforms with more commercial potential, starting with a compact legged robot called Spot, first introduced in 2016.

Since then, we’ve been following closely as Spot has gone from a research platform to a product, and today, Boston Dynamics is announcing the final step in that process: commercial availability. You can now order a Spot Explorer Kit from the Boston Dynamics online store for US $74,500 (plus tax), shipping included, with delivery in 6 to 8 weeks. FINALLY!

Over the past 10 months or so, Boston Dynamics has leased Spot robots to carefully selected companies, research groups, and even a few individuals as part of their early adopter program—that’s where all of the clips in the video below came from. While there are over 100 Spots out in the world right now, getting one of them has required convincing Boston Dynamics up front that you knew more or less exactly what you wanted to do and how you wanted to do it. If you’re a big construction company or the Jet Propulsion Laboratory or Adam Savage, that’s all well and good, but for other folks who think that a Spot could be useful for them somehow and want to give it a shot, this new availability provides an opportunity to experiment with the robot with fewer strings attached.

There’s a lot of cool stuff going on in that video, but we were told that the one thing that really stood out to the folks at Boston Dynamics was a 2-second clip that you can see on the left-hand side of the screen from 0:19 to 0:21. In it, Spot is somehow managing to walk across a spider web of rebar without getting tripped up, at faster than human speed. This isn’t something that Spot was specifically programmed to do, and in fact the Spot User Guide specifically identifies “rebar mesh” as an unsafe operating environment. But the robot just handles it, and that’s a big part of what makes Spot so useful—its ability to deal with (almost) whatever you can throw at it.

Before you get too excited, Boston Dynamics is fairly explicit that the current license for the robot is intended for commercial use, and the company specifically doesn’t want people to be just using it at home for fun. We know this because we asked (of course we asked), and they told us “we specifically don’t want people to just be using it at home for fun.” Drat. You can still buy one as an individual, but you have to promise that you’ll follow the terms of use and user guidelines, and it sounds like using a robot in your house might be the second-fastest way to invalidate your warranty:

SPOT IS AN AMAZING ROBOT, BUT IS NOT CERTIFIED SAFE FOR IN-HOME USE OR INTENDED FOR USE NEAR CHILDREN OR OTHERS WHO MAY NOT APPRECIATE THE HAZARDS ASSOCIATED WITH ITS OPERATION.

Not being able to get Spot to play with your kids may be disappointing, but for those of you with the sort of kids who are also students, the good news is that Boston Dynamics has carved out a niche for academic institutions, which can buy Spot at a discounted price. And if you want to buy a whole pack of Spots, there’s a bulk discount for Enterprise users as well.

What do you get for $74,500? All this!

Spot robot
Spot battery (2x)
Spot charger
Tablet controller and charger
Robot case for storage and transportation
FREE SHIPPING!

Photo: Boston Dynamics

The basic package includes the robot, two batteries, charger, a tablet controller, and a storage case.

You can view detailed specs here.

So is $75k a lot of money for a robot like Spot, or not all that much? We don’t have many useful points of comparison, partially because it’s not clear to what extent other pre-commercial quadrupedal robots (like ANYmal or Aliengo) share capabilities and features with Spot. For more perspective on Spot’s price tag, we spoke to Michael Perry, vice president of business development at Boston Dynamics.

IEEE Spectrum: Why is Spot so affordable?

Michael Perry: The main goal of selling the robot at this stage is to try to get it into the hands of as many application developers as possible, so that we can learn from the community what the biggest driver of value is for Spot. As a platform, unlocking the value of an ecosystem is our core focus right now.

Spectrum: Why is Spot so expensive?

Perry: Expensive is relative, but compared to the initial prototypes of Spot, we’ve been able to drop down the cost pretty significantly. One key thing has been designing it for robustness—we’ve put hundreds and hundreds of hours on the robot to make sure that it’s able to be successful when it falls, or when it has an electrostatic discharge. We’ve made sure that it’s able to perceive a wide variety of environments that are difficult for traditional vision-based sensors to handle. A lot of that engineering is baked into the core product so that you don’t have to worry about the mobility or robotic side of the equation, you can just focus on application development.

Photos: Boston Dynamics

Accessories for Spot include [clockwise from top left]: Spot GXP with additional ports for payload integration; Spot CAM with panorama camera and advanced comms; Spot CAM+ with pan-tilt-zoom camera for inspections; Spot EAP with lidar to enhance autonomy on large sites; Spot EAP+ with Spot CAM camera plus lidar; and Spot CORE for additional processing power.

The $75k that you’ll pay for the Spot Explorer Kit, it’s important to note, is just the base price for the robot. As with other things that fall into this price range (like a luxury car), there are all kinds of fun ways to drive that cost up with accessories, although for Spot, some of those accessories will be necessary for many (if not most) applications. For example, a couple of expansion ports to make it easier to install your own payloads on Spot will run you $1,275. An additional battery is $4,620. And if you want to really get some work done, the Enhanced Autonomy Package (with 360 cameras, lights, better comms, and a Velodyne VLP-16) will set you back an additional $34,570. If you were hoping for an arm, you’ll have to wait until the end of the year.
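To put numbers on "driving that cost up," here's the rough tab for a fully kitted Spot using the figures quoted above (USD, before tax; the item names are our shorthand, not Boston Dynamics SKUs, and the CARE plan from the next paragraph is included for good measure):

```python
# Rough total for a kitted-out Spot, from the prices quoted in the article.
base_kit = 74_500          # Spot Explorer Kit
expansion_ports = 1_275    # extra payload ports
extra_battery = 4_620      # one additional battery
enhanced_autonomy = 34_570 # Enhanced Autonomy Package (cameras, lidar, comms)
care_plan = 12_000         # one year of the Spot CARE premium service plan

total = base_kit + expansion_ports + extra_battery + enhanced_autonomy + care_plan
print(total)  # 126965
```

So a working setup can easily run past $125,000 before you've bolted on a single payload of your own.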

Each Spot also includes a year’s worth of software updates and a warranty, although the standard warranty just covers “defects related to materials and workmanship,” not “I drove my robot off a cliff” or “I tried to take my robot swimming.” For that sort of thing (user error) to be covered, you’ll need to upgrade to the $12,000 Spot CARE premium service plan, which covers your robot for a year as long as you don’t subject it to willful abuse, a category that both of my examples probably fall into.

While we’re on the subject of robot abuse, Boston Dynamics has very sensibly devoted a substantial amount of the Spot User Guide to help new users understand how they should not be using their robot, in order to “lessen the risk of serious injury, death, or robot and other property damage.” According to the guide, some things that could cause Spot to fall include holes, cliffs, slippery surfaces (like ice and wet grass), and cords. Spot’s sensors also get confused by “transparent, mirrored, or very bright obstacles,” and the guide specifically says Spot “may crash into glass doors and windows.” Also this: “Spot cannot predict trajectories of moving objects. Do not operate Spot around moving objects such as vehicles, children, or pets.”

We should emphasize that this is all totally reasonable, and while there are certainly a lot of things to be aware of, it’s frankly astonishing that these are the only things that Boston Dynamics explicitly warns users against. Obviously, not every potentially unsafe situation or thing is described above, but the point is that Boston Dynamics is willing to say to new users, “here’s your robot, go do stuff with it” without feeling the need to hold their hand the entire time.

There’s one more thing to be aware of before you decide to buy a Spot, which is the following:

“All orders will be subject to Boston Dynamics’ Terms and Conditions of Sale which require the beneficial use of its robots.”

Specifically, this appears to mean that you aren’t allowed to (or supposed to) use the robot in a way that could hurt living things, or “as a weapon, or to enable any weapon.” The conditions of sale also prohibit using the robot for “any illegal or ultra-hazardous purpose,” and there’s some stuff in there about it not being cool to use Spot for “nuclear, chemical, or biological weapons proliferation, or development of missile technology,” which seems weirdly specific.

“Once you make a technology more broadly available, the story of it starts slipping out of your hands. Our hope is that ahead of time we’re able to clearly articulate the beneficial uses of the robot in environments where we think the robot has a high potential to reduce the risk to people, rather than potentially causing harm.”
—Michael Perry, Boston Dynamics

I’m very glad that Boston Dynamics is being so upfront about requiring that Spot is used beneficially. However, it does put the company in a somewhat challenging position now that these robots are being sold. Boston Dynamics can (and will) perform some amount of due-diligence before shipping a Spot, but ultimately, once the robots are in someone else’s hands, there’s only so much that BD can do.

Spectrum: Why is beneficial use important to Boston Dynamics?

Perry: One of the key things that we’ve highlighted many times in our license and terms of use is that we don’t want to see the robot being used in any way that inflicts physical harm on people or animals. There are philosophical reasons for that—I think all of us don’t want to see our technology used in a way that would hurt people. But also from a business perspective, robots are really terrible at conveying intention. In order for the robot to be helpful long-term, it has to be trusted as a piece of technology. So rather than looking at a robot and wondering, “is this something that could potentially hurt me,” we want people to think “this is a robot that’s here to help me.” To the extent that people associate Boston Dynamics with cutting edge robots, we think that this is an important stance for the rollout of our first commercial product. If we find out that somebody’s violated our terms of use, their warranty is invalidated, we won’t repair their product, and we have a licensing timeout that would prevent them from accessing their robot after that timeout has expired. It’s a remediation path, but we do think that it’s important to at least provide that as something that helps enforce our position on use of our technology.

It’s very important to keep all of this in context: Spot is a tool. It’s got some autonomy and the appearance of agency, but it’s still just doing what people tell it to do, even if those things might be unsafe. If you read through the user guide, it’s clear how much of an effort Boston Dynamics is making to try to convey the importance of safety to Spot users—and ultimately, barring some unforeseen and catastrophic software or hardware issues, safety is about the users, rather than Boston Dynamics or Spot itself. I bring this up because as we start seeing more and more Spots doing things without Boston Dynamics watching over them quite so closely, accidents are likely inevitable. Spot might step on someone’s foot. It might knock someone over. If Spot was perfectly safe, it wouldn’t be useful, and we have to acknowledge that its impressive capabilities come with some risks, too.

Photo: Boston Dynamics

Each Spot includes a year’s worth of software updates and a warranty, although the standard warranty just covers “defects related to materials and workmanship” not “I drove my robot off a cliff.”

Now that Spot is on the market for real, we’re excited to see who steps up and orders one. Depending on who the potential customer is, Spot could either seem like an impossibly sophisticated piece of technology that they’d never be able to use, or a magical way of solving all of their problems overnight. In reality, it’s of course neither of those things. For the former (folks with an idea but without a lot of robotics knowledge or experience), Spot does a lot out of the box, but BD is happy to talk with people and facilitate connections with partners who might be able to integrate specific software and hardware to get Spot to do a unique task. And for the latter (who may also be folks with an idea but without a lot of robotics knowledge or experience), BD’s Perry offers a reminder that Spot is not Rosie the Robot, and would be equally happy to talk about what the technology is actually capable of doing.

Looking forward a bit, we asked Perry whether Spot’s capabilities mean that customers are starting to think beyond using robots to simply replace humans, and are instead looking at them as a way of enabling a completely different way of getting things done.

Spectrum: Do customers interested in Spot tend to think of it as a way of replacing humans at a specific task, or as a system that can do things that humans aren’t able to do?

Perry: There are what I imagine as three levels of people understanding the robot applications. Right now, we’re at level one, where you take a person out of this dangerous, dull job, and put a robot in. That’s the entry point. The second level is, using the robot, can we increase the production of that task? For example, take site documentation on a construction site—right now, people do 360 image capture of a site maybe once a week, and they might do a laser scan of the site once per project. At the second level, the question is, what if you were able to get that data collection every day, or multiple times a day? What kinds of benefits would that add to your process? To continue the construction example, the third level would be, how could we completely redesign this space now that we know that this type of automation is available? To take one example, there are some things that we cannot physically build because it’s too unsafe for people to be a part of that process, but if you were to apply robotics to that process, then you could potentially open up a huge envelope of design that has been inaccessible to people.

To order a Spot of your very own, visit shop.bostondynamics.com.

A version of this post appears in the August 2020 print issue as “$74,500 Will Fetch You a Spot.”


#437769 Q&A: Facebook’s CTO Is at War With ...

Photo: Patricia de Melo Moreira/AFP/Getty Images

Facebook chief technology officer Mike Schroepfer leads the company’s AI and integrity efforts.

Facebook’s challenge is huge. Billions of pieces of content—short and long posts, images, and combinations of the two—are uploaded to the site daily from around the world. And any tiny piece of that—any phrase, image, or video—could contain so-called bad content.

In its early days, Facebook relied on simple computer filters to identify potentially problematic posts by their words, such as those containing profanity. These automatically filtered posts, as well as posts flagged by users as offensive, went to humans for adjudication.

In 2015, Facebook started using artificial intelligence to cull images that contained nudity, illegal goods, and other prohibited content; those images identified as possibly problematic were sent to humans for further review.

By 2016, more offensive photos were reported by Facebook’s AI systems than by Facebook users (and that is still the case).

In 2018, Facebook CEO Mark Zuckerberg made a bold proclamation: He predicted that within five or ten years, Facebook’s AI would not only look for profanity, nudity, and other obvious violations of Facebook’s policies. The tools would also be able to spot bullying, hate speech, and other misuse of the platform, and put an immediate end to them.

Today, automated systems using algorithms developed with AI scan every piece of content between the time when a user completes a post and when it is visible to others on the site—just fractions of a second. In most cases, a violation of Facebook’s standards is clear, and the AI system automatically blocks the post. In other cases, the post goes to human reviewers for a final decision, a workforce that includes 15,000 content reviewers and another 20,000 employees focused on safety and security, operating out of more than 20 facilities around the world.

In the first quarter of this year, Facebook removed or took other action (like appending a warning label) on more than 9.6 million posts involving hate speech, 8.6 million involving child nudity or exploitation, almost 8 million posts involving the sale of drugs, 2.3 million posts involving bullying and harassment, and tens of millions of posts violating other Facebook rules.

Right now, Facebook has more than 1,000 engineers working on further developing and implementing what the company calls “integrity” tools. Using these systems to screen every post that goes up on Facebook, and doing so in milliseconds, is sucking up computing resources. Facebook chief technology officer Mike Schroepfer, who is heading up Facebook’s AI and integrity efforts, spoke with IEEE Spectrum about the team’s progress on building an AI system that detects bad content.

Since that discussion, Facebook’s policies around hate speech have come under increasing scrutiny, with particular attention on divisive posts by political figures. A group of major advertisers in June announced that they would stop advertising on the platform while reviewing the situation, and civil rights groups are putting pressure on others to follow suit until Facebook makes policy changes related to hate speech and groups that promote hate, misinformation, and conspiracies.

Facebook CEO Mark Zuckerberg responded with news that Facebook will widen the category of what it considers hateful content in ads. Now the company prohibits claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity, or immigration status are a threat to the physical safety, health, or survival of others. The policy change also aims to better protect immigrants, migrants, refugees, and asylum seekers from ads suggesting these groups are inferior or expressing contempt. Finally, Zuckerberg announced that the company will label some problematic posts by politicians and government officials as content that violates Facebook’s policies.

However, civil rights groups say that’s not enough. And an independent audit released in July also said that Facebook needs to go much further in addressing civil rights concerns and disinformation.

Schroepfer indicated that Facebook’s AI systems are designed to quickly adapt to changes in policy. “I don’t expect considerable technical changes are needed to adjust,” he told Spectrum.

This interview has been edited and condensed for clarity.

IEEE Spectrum: What are the stakes of content moderation? Is this an existential threat to Facebook? And is it critical that you deal well with the issue of election interference this year?

Schroepfer: It’s probably existential; it’s certainly massive. We are devoting a tremendous amount of our attention to it.

The idea that anyone could meddle in an election is deeply disturbing and offensive to all of us here, just as people and citizens of democracies. We don’t want to see that happen anywhere, and certainly not on our watch. So whether it’s important to the company or not, it’s important to us as people. And I feel a similar way on the content-moderation side.

There are not a lot of easy choices here. The only way to prevent people, with certainty, from posting bad things is to not let them post anything. We can take away all voice and just say, “Sorry, the Internet’s too dangerous. No one can use it.” That will certainly get rid of all hate speech online. But I don’t want to end up in that world. And there are variants of that world that various governments are trying to implement, where they get to decide what’s true or not, and you as a person don’t. I don’t want to get there either.

My hope is that we can build a set of tools that make it practical for us to do a good enough job, so that everyone is still excited about the idea that anyone can share what they want, and so that Facebook is a safe and reasonable place for people to operate in.

Spectrum: You joined Facebook in 2008, before AI was part of the company’s toolbox. When did that change? When did you begin to think that AI tools would be useful to Facebook?

Schroepfer: Ten years ago, AI wasn’t commercially practical; the technology just didn’t work very well. In 2012, there was one of those moments that a lot of people point to as the beginning of the current revolution in deep learning and AI. A computer-vision model—a neural network—was trained using what we call supervised training, and it turned out to be better than all the existing models.

Spectrum: How is that training done, and how did computer-vision models come to Facebook?

Image: Facebook

Just Broccoli? Facebook’s image analysis algorithms can tell the difference between marijuana [left] and tempura broccoli [right] better than some humans.

Schroepfer: Say I take a bunch of photos and I have people look at them. If they see a photo of a cat, they put a text label that says cat; if it’s one of a dog, the text label says dog. If you build a big enough data set and feed that to the neural net, it learns how to tell the difference between cats and dogs.

Prior to 2012, it didn’t work very well. And then in 2012, there was this moment where it seemed like, “Oh wow, this technique might work.” And a few years later we were deploying that form of technology to help us detect problematic imagery.
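The supervised-training loop Schroepfer describes can be sketched in a few lines. This toy version uses made-up 2D feature vectors in place of real images, and a bare perceptron in place of a deep network, so it illustrates the idea (labeled examples in, a separating model out) rather than anything Facebook actually runs:

```python
import random

# Invented 2D "features" standing in for cat and dog images.
random.seed(0)
cats = [(random.gauss(1.0, 0.2), random.gauss(1.0, 0.2)) for _ in range(50)]
dogs = [(random.gauss(-1.0, 0.2), random.gauss(-1.0, 0.2)) for _ in range(50)]
data = [(x, +1) for x in cats] + [(x, -1) for x in dogs]  # label each example

# Perceptron training: nudge the weights whenever a labeled example
# lands on the wrong side of the current decision boundary.
w, b = [0.0, 0.0], 0.0
for _ in range(10):  # a few passes over the labeled data
    for (x1, x2), y in data:
        if y * (w[0] * x1 + w[1] * x2 + b) <= 0:  # misclassified
            w[0] += y * x1
            w[1] += y * x2
            b += y

correct = sum(1 for (x1, x2), y in data
              if y * (w[0] * x1 + w[1] * x2 + b) > 0)
print(correct, "of", len(data), "training examples classified correctly")
```

The "big enough data set" part is what changed after 2012: the same recipe, scaled up to millions of labeled images and a deep network instead of a single linear unit, is what made image classification commercially practical.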

Spectrum: Do your AI systems work equally well on all types of prohibited content?

Schroepfer: Nudity was technically easiest. I don’t need to understand language or culture to understand that this is either a naked human or not. Violence is a much more nuanced problem, so it was harder technically to get it right. And with hate speech, not only do you have to understand the language, it may be very contextual, even tied to recent events. A week before the Christchurch shooting [New Zealand, 2019], saying “I wish you were in the mosque” probably doesn’t mean anything. A week after, that might be a terrible thing to say.

Spectrum: How much progress have you made on hate speech?

Schroepfer: AI, in the first quarter of 2020, proactively detected 88.8 percent of the hate-speech content we removed, up from 80.2 percent in the previous quarter. In the first quarter of 2020, we took action on 9.6 million pieces of content for violating our hate-speech policies.

Image: Facebook

Off Label: Sometimes image analysis isn’t enough to determine whether a picture posted violates the company’s policies. In considering these candy-colored vials of marijuana, for example, the algorithms can look at any accompanying text and, if necessary, comments on the post.

Spectrum: It sounds like you’ve expanded beyond tools that analyze images and are also using AI tools that analyze text.

Schroepfer: AI started off as very siloed. People worked on language, people worked on computer vision, people worked on video. We’ve put these things together—in production, not just as research—into multimodal classifiers.

[Schroepfer shows a photo of a pan of Rice Krispies treats, with text referring to it as a “potent batch”] This is a case in which you have an image, and then you have the text on the post. This looks like Rice Krispies. On its own, this image is fine. You put the text together with it in a bigger model; that can then understand what’s going on. That didn’t work five years ago.
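The value of the multimodal approach can be shown with a toy example. The scores and threshold below are invented; a real multimodal classifier learns a joint representation of image and text rather than adding two hand-set numbers.

```python
# Toy illustration of why multimodal classification matters: neither the
# image signal nor the text signal alone crosses the threshold, but a
# model that sees both together does. All numbers are invented.

def violates(image_score, text_score, threshold=1.0):
    # Stand-in for a learned joint model: combine per-modality suspicion.
    return image_score + text_score >= threshold

rice_krispies_image = 0.3   # looks like ordinary food on its own
potent_batch_text = 0.8     # "potent batch" is suspicious next to food

print(violates(rice_krispies_image, 0.0))                # image alone: False
print(violates(0.0, potent_batch_text))                  # text alone: False
print(violates(rice_krispies_image, potent_batch_text))  # together: True
```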

Spectrum: Today, every post that goes up on Facebook is immediately checked by automated systems. Can you explain that process?

Image: Facebook

Bigger Picture: Identifying hate speech is often a matter of context. Either the text or the photo in this post isn’t hateful standing alone, but putting them together tells a different story.

Schroepfer: You upload an image and you write some text underneath it, and the systems look at both the image and the text to try to see which, if any, policies it violates. Those decisions are based on our Community Standards. It will also look at other signals on the posts, like the comments people make.

It happens relatively instantly, though there may be times things happen after the fact. Maybe you uploaded a post that had misinformation in it, and at the time you uploaded it, we didn’t know it was misinformation. The next day we fact-check something and scan again; we may find your post and take it down. As we learn new things, we’re going to go back through and look for violations of what we now know to be a problem. Or, as people comment on your post, we might update our understanding of it. If people are saying, “That’s terrible,” or “That’s mean,” or “That looks fake,” those comments may be an interesting signal.

Spectrum: How is Facebook applying its AI tools to the problem of election interference?

Schroepfer: I would split election interference into two categories. There are times when you’re going after the content, and there are times you’re going after the behavior or the authenticity of the person.

On content, if you’re sharing misinformation, saying, “It’s Super Wednesday, not Super Tuesday, come vote on Wednesday,” that’s a problem whether you’re an American sitting in California or a foreign actor.

Other times, people create a series of Facebook pages pretending they’re Americans, but they’re really a foreign entity. That is a problem on its own, even if all the content they’re sharing completely meets our Community Standards. The problem there is that you have a foreign government running an information operation.

There, you need different tools. What you’re trying to do is put pieces together, to say, “Wait a second. All of these pages—Martians for Justice, Moonlings for Justice, and Venusians for Justice—are run by an administrator with an IP address that’s outside the United States.” So they’re all connected, even though they’re pretending to not be connected. That’s a very different problem than me sitting in my office in Menlo Park [Calif.] sharing misinformation.

I’m not going to go into lots of technical detail, because this is an area of adversarial nature. The fundamental problem you’re trying to solve is that there’s one entity coordinating the activity of a bunch of things that look like they’re not all one thing. So this is a series of Instagram accounts, or a series of Facebook pages, or a series of WhatsApp accounts, and they’re pretending to be totally different things. We’re looking for signals that these things are related in some way. And we’re looking through the graph [what Facebook calls its map of relationships between users] to understand the properties of this network.
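The "put pieces together" idea—many pages pretending to be unrelated but sharing one operator—maps naturally onto finding connected components in a graph. The sketch below groups pages by shared admin IP with a small union-find; the page names and IPs are invented, and real detection uses far richer signals than this.

```python
# Sketch of coordinated-behavior detection: pages that share an operational
# fingerprint (here, an admin IP) collapse into one connected component.
# Data is invented; this only illustrates the graph idea.

from collections import defaultdict

def coordinated_groups(pages):
    """Group pages linked by any shared admin IP (connected components)."""
    by_ip = defaultdict(list)
    for name, ips in pages.items():
        for ip in ips:
            by_ip[ip].append(name)

    parent = {name: name for name in pages}   # union-find over page names
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path compression
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)

    for linked in by_ip.values():             # pages sharing an IP: same group
        for other in linked[1:]:
            union(linked[0], other)

    groups = defaultdict(set)
    for name in pages:
        groups[find(name)].add(name)
    return sorted(sorted(g) for g in groups.values())

pages = {
    "Martians for Justice": ["203.0.113.7"],
    "Moonlings for Justice": ["203.0.113.7"],
    "Venusians for Justice": ["203.0.113.7", "198.51.100.2"],
    "Menlo Park Bake Sale": ["192.0.2.44"],
}
print(coordinated_groups(pages))
```

The three "Justice" pages land in one component despite pretending to be unrelated, while the local page stands alone.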

Spectrum: What cutting-edge AI tools and methods have you been working on lately?

Schroepfer: Supervised learning, with humans setting up the instruction process for the AI systems, is amazingly effective. But it has a very obvious flaw: the speed at which you can develop these things is limited by how fast you can curate the data sets. If you’re dealing in a problem domain where things change rapidly, you have to rebuild a new data set and retrain the whole thing.

Self-supervision is inspired by the way people learn, by the way kids explore the world around them. To get computers to do it themselves, we take a bunch of raw data and build a way for the computer to construct its own tests. For language, you scan a bunch of Web pages, and the computer builds a test where it takes a sentence, eliminates one of the words, and figures out how to predict what word belongs there. And because it created the test, it actually knows the answer. I can use as much raw text as I can find and store because it’s processing everything itself and doesn’t require us to sit down and build the information set. In the last two years there has been a revolution in language understanding as a result of AI self-supervised learning.
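The self-supervision trick Schroepfer describes—hide a word, predict it, and the text itself supplies the answer—can be shown in miniature. This sketch only builds the (context, answer) training pairs; training a model on them is the part that takes the data centers.

```python
# From raw text, the computer constructs its own test: mask one word at a
# time and keep the hidden word as the answer. No human labeling needed.

def masked_examples(sentence, mask="[MASK]"):
    words = sentence.split()
    examples = []
    for i, answer in enumerate(words):
        context = " ".join(words[:i] + [mask] + words[i + 1:])
        examples.append((context, answer))
    return examples

for context, answer in masked_examples("robots can climb stairs"):
    print(context, "->", answer)
# "[MASK] can climb stairs -> robots", "robots [MASK] climb stairs -> can", ...
```

Because the pairs come for free from raw text, the training set can be as large as the text you can find and store—exactly the scaling advantage over supervised curation.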

Spectrum: What else are you excited about?

Schroepfer: What we’ve been working on over the last few years is multilingual understanding. Usually, when I’m trying to figure out, say, whether something is hate speech or not, I have to go through the whole process of training the model in every language. I have to do that one time for every language. When you make a post, the first thing we have to figure out is what language your post is in. “Ah, that’s Spanish. So send it to the Spanish hate-speech model.”

We’ve started to build a multilingual model—one box where you can feed in text in 40 different languages and it determines whether it’s hate speech or not. This is way more effective and easier to deploy.

To geek out for a second, just the idea that you can build a model that understands a concept in multiple languages at once is crazy cool. And it not only works for hate speech, it works for a variety of things.

When we started working on this multilingual model years ago, it performed worse than every single individual model. Now, it not only works as well as the English model, but when you get to the languages where you don’t have enough data, it’s so much better. This rapid progress is very exciting.

Spectrum: How do you move new AI tools from your research labs into operational use?

Schroepfer: Engineers trying to make the next breakthrough will often say, “Cool, I’ve got a new thing and it achieved state-of-the-art results on machine translation.” And we say, “Great. How long does it take to run in production?” They say, “Well, it takes 10 seconds for every sentence to run on a CPU.” And we say, “It’ll eat our whole data center if we deploy that.” So we take that state-of-the-art model and we make it 10 or a hundred or a thousand times more efficient, maybe at the cost of a little bit of accuracy. So it’s not as good as the state-of-the-art version, but it’s something we can actually put into our data centers and run in production.

Spectrum: What’s the role of the humans in the loop? Is it true that Facebook currently employs 35,000 moderators?

Schroepfer: Yes. Right now our goal is not to reduce that. Our goal is to do a better job catching bad content. People often think that the end state will be a fully automated system. I don’t see that world coming anytime soon.

As automated systems get more sophisticated, they take more and more of the grunt work away, freeing up the humans to work on the really gnarly stuff where you have to spend an hour researching.

We also use AI to give our human moderators power tools. Say I spot this new meme that is telling everyone to vote on Wednesday rather than Tuesday. I have a tool in front of me that says, “Find variants of that throughout the system. Find every photo with the same text, find every video that mentions this thing and kill it in one shot.” Rather than, I found this one picture, but then a bunch of other people upload that misinformation in different forms.
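One way a "find every variant" tool can work is fingerprinting: normalize the text and hash it, so re-uploads with different capitalization or punctuation map to the same key. This is a hedged sketch of the idea only—production systems also use perceptual hashes of the image and video content itself.

```python
# Sketch of variant matching: normalize overlaid text and hash it, so the
# same misinformation re-posted in a slightly different form collides with
# the original fingerprint and can be actioned "in one shot".

import hashlib
import string

def text_fingerprint(text):
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    normalized = " ".join(cleaned.split())
    return hashlib.sha256(normalized.encode()).hexdigest()

original = "Vote on WEDNESDAY, not Tuesday!"
variant = "vote on wednesday not tuesday"
unrelated = "Vote on Tuesday"

print(text_fingerprint(original) == text_fingerprint(variant))    # True
print(text_fingerprint(original) == text_fingerprint(unrelated))  # False
```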

Another important aspect of AI is that anything I can do to prevent a person from having to look at terrible things is time well spent. Whether it’s a person employed by us as a moderator or a user of our services, looking at these things is a terrible experience. If I can build systems that take the worst of the worst, the really graphic violence, and deal with that in an automated fashion, that’s worth a lot to me.

Posted in Human Robots

#437753 iRobot’s New Education Robot Makes ...

iRobot has been on a major push into education robots recently. They acquired Root Robotics in 2019, and earlier this year, launched an online simulator and associated curriculum designed to work in tandem with physical Root robots. The original Root was intended to be a classroom robot, with one of its key features being the ability to stick to (and operate on) magnetic vertical surfaces, like whiteboards. And as a classroom robot, at $200, it’s relatively affordable, if you can buy one or two and have groups of kids share them.

For kids who are more focused on learning at home, though, $200 is a lot for a robot that doesn't even keep your floors clean. And as nice as it is to have a free simulator, any kid will tell you that it’s way cooler to have a real robot to mess around with. Today, iRobot is announcing a new version of Root that’s been redesigned for home use, with a $129 price that makes it significantly more accessible to folks outside of the classroom.

The Root rt0 is a second version of the Root robot—the more expensive, education-grade Root rt1 is still available. To bring the cost down, the rt0 is missing some features that you can still find in the rt1. Specifically, you don’t get the internal magnets to stick the robot to vertical surfaces, there are no cliff sensors, and you don’t get a color scanner or an eraser. But for home use, the internal magnets are probably not necessary anyway, and the rest of that stuff seems like a fair compromise for a cost reduction of about 35 percent.

Photo: iRobot

One of the new accessories for the iRobot Root rt0 is a “Brick Top” that snaps onto the upper face of the robot via magnets. The accessory can be used with LEGOs and other LEGO-compatible bricks, opening up an enormous amount of customization.

It’s not all just taking away, though. There’s also a new $20 accessory, a LEGO-ish “Brick Top” that snaps onto the upper face of Root (either version) via magnets. The plate can be used with LEGO bricks and other LEGO-compatible things. This opens up an enormous amount of customization, and it’s for more than just decoration, since Root rt0 has the ability to interact with whatever’s on top of it via its actuated marker. Root can move the marker up and down, the idea being that you can programmatically turn lines on and off. By replacing the marker with a plastic thingy that sticks up through the body of the robot, the marker up/down command can be used to actuate something on the brick top. In the video, that’s what triggers the catapult.

Photo: iRobot

By attaching a marker, you can program Root to draw. The robot has a motor that can move the marker up and down.

This less expensive version of Root still has access to the online simulator, as well as the multi-level coding interface that allows kids to seamlessly transition through multiple levels of coding complexity, from graphical to text. There’s a new Android app coming out today, and you can access everything through web-based apps on Chrome OS, Windows, and macOS, as well as on iOS. iRobot tells us that they’ve also recently expanded their online learning library full of Root-based educational activities. In particular, they’ve added a new category on “Social Emotional Learning,” the goal of which is to help kids develop things like social awareness, self-management, decision making, and relationship skills. We’re not quite sure how you teach those things with a little hexagonal robot, but we like that iRobot is giving it a try.

The Root coding robot is designed for kids age 6 and up, ships for free, and is available now.

[ iRobot Root ]

Posted in Human Robots

#437741 CaseCrawler Adds Tiny Robotic Legs to ...

Most of us have a fairly rational expectation that if we put our cellphone down somewhere, it will stay in that place until we pick it up again. Normally, this is exactly what you’d want, but there are exceptions, like when you put your phone down in not quite the right spot on a wireless charging pad without noticing, or when you’re lying on the couch and your phone is juuust out of reach no matter how much you stretch.

Roboticists from the Biorobotics Laboratory at Seoul National University in South Korea have solved both of these problems, and many more besides, by developing a cellphone case with little robotic legs, endowing your phone with the ability to skitter around autonomously. And unlike most of the phone-robot hybrids we’ve seen in the past, this one actually does look like a legit case for your phone.

CaseCrawler is much chunkier than a form-fitting case, but it’s not offensively bigger than one of those chunky battery cases. It’s only 24 millimeters thick (excluding the motor housing), and the total weight is just under 82 grams. Keep in mind that this case is in fact an entire robot, and also not at all optimized for being an actual phone case, so it’s easy to imagine how it could get a lot more svelte—for example, it currently includes a small battery that would be unnecessary if it instead tapped into the phone for power.

The technology inside is pretty amazing, since it involves legs that can retract all the way flat while also supporting a significant amount of weight. The legs work sort of like your legs do, in that there’s a knee joint that can only bend one way. To move the robot forward, a linkage (attached to a motor through a gearbox) pushes the leg back against the ground, as the knee joint keeps the leg straight. On the return stroke, the joint allows the leg to fold, making it compliant so that it doesn’t exert force on the ground. The transmission that sends power from the gearbox to the legs is just 1.5 millimeters thick, but this incredibly thin and lightweight mechanical structure is quite powerful. A non-phone case version of the robot, weighing about 23 g, is able to crawl at 21 centimeters per second while carrying a payload of just over 300 g. That’s more than 13 times its body weight.
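The payload claim checks out with simple arithmetic, using the figures quoted above:

```python
# Quick check of the payload figure: a 23 g robot carrying just over 300 g
# is hauling roughly 13 times its own body weight.

robot_mass_g = 23
payload_g = 300
ratio = payload_g / robot_mass_g
print(f"{ratio:.1f}x body weight")  # about 13.0x
```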

The researchers plan on exploring how robots like these could make other objects movable that would otherwise not be. They’d also like to add some autonomy, which (at least for the phone case version) could be as straightforward as leveraging the existing sensors on the phone. And as to when you might be able to buy one of these—we’ll keep you updated, but the good news is that it seems to be fundamentally inexpensive enough that it may actually crawl out of the lab one day.

“CaseCrawler: A Lightweight and Low-Profile Crawling Phone Case Robot,” by Jongeun Lee, Gwang-Pil Jung, Sang-Min Baek, Soo-Hwan Chae, Sojung Yim, Woongbae Kim, and Kyu-Jin Cho from Seoul National University, appears in the October issue of IEEE Robotics and Automation Letters.


Posted in Human Robots