
#437882 Video Friday: MIT Mini-Cheetah Robots ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICCR 2020 – December 26-29, 2020 – [Online Conference]
HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
Let us know if you have suggestions for next week, and enjoy today's videos.

What a lovely Christmas video from Norlab.

[ Norlab ]

Thanks Francois!

MIT Mini-Cheetahs are looking for a new home. Our new cheetah cubs, born at NAVER LABS, are for the MIT Mini-Cheetah workshop. MIT professor Sangbae Kim and his research team are supporting joint research by distributing Mini-Cheetahs to researchers all around the world.

[ NAVER Labs ]

For several years, NVIDIA’s research teams have been working to leverage GPU technology to accelerate reinforcement learning (RL). As a result of this promising research, NVIDIA is pleased to announce a preview release of Isaac Gym – NVIDIA’s physics simulation environment for reinforcement learning research. RL-based training is now more accessible as tasks that once required thousands of CPU cores can now instead be trained using a single GPU.
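
The core trick behind this speedup is vectorization: instead of one simulator per CPU core, thousands of environments live in GPU tensors and are stepped by a single batched operation. Here's a toy PyTorch sketch of that idea (an illustration of the concept, not Isaac Gym's actual API):

```python
# A minimal sketch of the batched-simulation idea behind GPU-accelerated RL.
# Thousands of simple environments are stepped as one tensor operation.
import torch

class BatchedPointEnvs:
    """N toy point-mass environments advanced in parallel on one device."""

    def __init__(self, num_envs: int = 4096, device: str = "cuda"):
        self.device = torch.device(device if torch.cuda.is_available() else "cpu")
        self.pos = torch.zeros(num_envs, 2, device=self.device)
        self.goal = torch.rand(num_envs, 2, device=self.device)

    def step(self, actions: torch.Tensor):
        # One fused tensor op replaces num_envs separate CPU simulations.
        self.pos = self.pos + 0.05 * actions.clamp(-1.0, 1.0)
        reward = -(self.pos - self.goal).norm(dim=1)  # closer to goal = higher reward
        done = reward > -0.05
        return self.pos.clone(), reward, done

envs = BatchedPointEnvs()
obs, reward, done = envs.step(torch.randn(4096, 2, device=envs.device))
print(reward.mean().item())
```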

[ NVIDIA ]

At SINTEF in Norway, they're working on ways of using robots to keep tabs on giant floating cages of tasty fish:

One of the tricky things about operating robots in an environment like this is localization, so SINTEF is working on a solution that uses beacons:

While that video shows a lot of simulation (because otherwise there are tons of fish in the way), we're told that the autonomous navigation has been successfully demonstrated with an ROV in “a full scale fish farm with up to 200.000 salmon swimming around the robot.”

[ SINTEF ]

Thanks Eleni!

We’ve been getting ready for the snow in the most BG way possible. Wishing all of you a happy and healthy holiday season.

[ Berkshire Grey ]

ANYbotics doesn’t care what time of the year it is, so Happy Easter!

And here's a little bit about why ANYmal C looks the way it does.

[ ANYbotics ]

Robert “Buz” Chmielewski is using two modular prosthetic limbs developed by APL to feed himself dessert. Smart software puts his utensils in roughly the right spot, and then Buz uses his brain signals to cut the food with knife and fork. Once he is done cutting, the software then brings the food near his mouth, where he again uses brain signals to bring the food the last several inches to his mouth so that he can eat it.

[ JHUAPL ]

Introducing VESPER: a new military-grade small drone that is designed, sourced, and built in the United States. Vesper offers a 50-minute flight time, with speeds up to 45 mph (72 kph) and a total flight range of 25 miles (40 km). The magnetic snap-together architecture enables extremely fast transitions: the battery, props, and rotor set can each be swapped in <5 seconds.

[ Vantage Robotics ]

In this video, a multi-material robot simulator is used to design a shape-changing robot, which is then transferred to physical hardware. The simulated and real robots can use shape change to switch between rolling gaits and inchworm gaits, to locomote in multiple environments.

[ Yale Faboratory ]

Get a preview of the cave environments that are being used to inspire the Final Event competition course of the DARPA Subterranean Challenge. In the Final Event, teams will deploy their robots to rapidly map, navigate, and search in competition courses that combine elements of man-made tunnel systems, urban underground, and natural cave networks!

The reason to pay attention to this particular video is that it gives us some idea of what DARPA means when they say "cave."

[ SubT ]

MQ-25 takes another step toward unmanned aerial refueling for the U.S. Navy. The MQ-25 test asset has flown for the first time with an aerial refueling pod containing the hose and basket that will make it an aerial refueler.

[ Boeing ]

We present a unified model-based and data-driven approach for quadrupedal planning and control to achieve dynamic locomotion over uneven terrain. We utilize on-board proprioceptive and exteroceptive feedback to map sensory information and desired base velocity commands into footstep plans using a reinforcement learning (RL) policy trained in simulation over a wide range of procedurally generated terrains.
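
As a rough sketch of the kind of policy described above, here's how proprioception, sampled terrain heights, and a base velocity command might map to per-leg footstep targets; all dimensions and layer sizes are illustrative assumptions, not values from the paper:

```python
# Hedged sketch: sensory inputs + velocity command -> footstep targets per leg.
import torch
import torch.nn as nn

class FootstepPolicy(nn.Module):
    def __init__(self, proprio_dim=48, height_samples=52, cmd_dim=3, num_legs=4):
        super().__init__()
        self.num_legs = num_legs
        self.net = nn.Sequential(
            nn.Linear(proprio_dim + height_samples + cmd_dim, 256),
            nn.ELU(),
            nn.Linear(256, 128),
            nn.ELU(),
            nn.Linear(128, num_legs * 3),  # an (x, y, z) footstep target per leg
        )

    def forward(self, proprio, heights, base_vel_cmd):
        x = torch.cat([proprio, heights, base_vel_cmd], dim=-1)
        return self.net(x).view(-1, self.num_legs, 3)

policy = FootstepPolicy()  # in the paper, trained with RL over procedural terrains
targets = policy(torch.randn(1, 48), torch.randn(1, 52), torch.tensor([[0.5, 0.0, 0.0]]))
print(targets.shape)  # torch.Size([1, 4, 3])
```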

[ DRS ]

The video shows the results of the German research project RoPHa. Within the project, the partners developed technologies for two application scenarios with the service robot Care-O-bot 4 in order to support people in need of help when eating.

[ RoPHa Project ]

Thanks Jenny!

This looks like it would be fun, if you are a crazy person.

[ Team BlackSheep ]

Robot accuracy is the limiting factor in many industrial applications. Manufacturers often specify only the pose repeatability values of their robotic systems. Fraunhofer IPA has set up a testing environment for automated measurement of the accuracy performance criteria of industrial robots. Following the procedures defined in the ISO 9283 standard allows generating reliable and repeatable results, which can serve as the basis for targeted measures to increase a robotic system's accuracy.
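
For context, the position repeatability that ISO 9283 defines is the mean distance of the attained positions from their barycenter plus three times the standard deviation of those distances. A minimal sketch of that calculation, with made-up measurements:

```python
# ISO 9283 position repeatability: RP = mean(l) + 3 * stddev(l), where l_j is
# the distance of each attained position from the barycenter of all attempts.
import numpy as np

def iso9283_repeatability(positions: np.ndarray) -> float:
    """positions: (n, 3) array of attained TCP positions for one commanded pose."""
    barycenter = positions.mean(axis=0)
    l = np.linalg.norm(positions - barycenter, axis=1)  # per-cycle deviation
    l_bar = l.mean()
    s_l = np.sqrt(((l - l_bar) ** 2).sum() / (len(l) - 1))  # sample std deviation
    return l_bar + 3.0 * s_l

# 30 repeated approaches to the same commanded pose, in millimetres (made up).
measured = np.array([10.0, 20.0, 30.0]) + 0.02 * np.random.randn(30, 3)
print(f"RP = {iso9283_repeatability(measured):.4f} mm")
```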

[ Fraunhofer ]

Thanks Jenny!

The IEEE Women in Engineering – Robotics and Automation Society (WIE-RAS) hosted an online panel on best practices for teaching robotics. The diverse panel boasts experts in robotics education from a variety of disciplines, institutions, and areas of expertise.

[ IEEE RAS ]

Northwestern researchers have developed a first-of-its-kind soft, aquatic robot that is powered by light and rotating magnetic fields. These life-like robotic materials could someday be used as "smart" microscopic systems for production of fuels and drugs, environmental cleanup or transformative medical procedures.

[ Northwestern ]

Tech United Eindhoven's soccer robots now have eight wheels instead of four, making them twelve times better, if my math is right.

[ TU Eindhoven ]


#437800 Malleable Structure Makes Robot Arm More ...

The majority of robot arms are built out of some combination of long straight tubes and actuated joints. This isn’t surprising, since our limbs are built the same way, which was a clever and efficient bit of design. By adding more tubes and joints (or degrees of freedom), you can increase the versatility of your robot arm, but the tradeoff is that complexity, weight, and cost will increase, too.

At ICRA, researchers from Imperial College London’s REDS Lab, headed by Nicolas Rojas, introduced a design for a robot that’s built around a malleable structure rather than a rigid one, allowing you to improve the arm’s versatility without having to add extra degrees of freedom. The idea is that you’re no longer constrained to static tubes and joints but can instead reconfigure your robot to set it up exactly the way you want, and easily change it whenever you feel like it.

Inside that bendable section of the arm are layers and layers of mylar sheets, cut into flaps and stacked on top of one another so that each flap is overlapping or overlapped by at least 11 other flaps. The mylar is slippery enough that under most circumstances, the flaps can move smoothly against each other, letting you adjust the shape of the arm. The flaps are sealed up between latex membranes, and when air is pumped out from between the membranes, they press down on each other and turn the whole structure rigid, locking itself in whatever shape you’ve put it in.

Image: Imperial College London

The malleable part of the robot consists of layers of mylar sheets, cut into flaps that can move smoothly against each other, letting you adjust the shape of the arm. The flaps are sealed up between latex membranes, and when air is pumped out from between the membranes, they press down on each other and turn the whole structure rigid, locking itself in whatever shape you’ve put it in.

The nice thing about this system is that it’s a sort of combination of a soft robot and a rigid robot—you get the flexibility (both physical and metaphorical) of a soft system, without necessarily having to deal with all of the control problems. It’s more mechanically complex than either (as hybrid systems tend to be), but you save on cost, size, and weight, and reduce the number of actuators you need, which tend to be points of failure. You do need to deal with creating and maintaining a vacuum, and the fact that the malleable arm is not totally rigid, but depending on your application, those tradeoffs could easily be worth it.

For more details, we spoke with first author Angus B. Clark via email.

IEEE Spectrum: Where did this idea come from?

Angus Clark: The idea of malleable robots came from the realization that the majority of serial robot arms have 6 or more degrees of freedom (DoF)—usually rotary joints—yet are typically performing tasks that only require 2 or 3 DoF. The idea of a robot arm that achieves flexibility and adaptation to tasks but maintains the simplicity of a low DoF system, along with the rapid development of variable stiffness continuum robots for medical applications, inspired us to develop the malleable robot concept.

What are some ways in which a malleable robot arm could provide unique advantages, and what are some potential applications that could leverage these advantages?

Malleable robots have the ability to complete multiple traditional tasks, such as pick-and-place or bin-picking operations, without the added bulk of extra joints that are not directly used within each task, as the flexibility of the robot arm is provided by a malleable link instead. This results in an overall smaller form factor, including the weight and footprint of the robot, as well as a lower power requirement and cost, as fewer joints are needed, without sacrificing adaptability. This makes the robot ideal for scenarios where any of these factors are critical, such as in space robotics—where every kilogram saved is vital—or in rehabilitation robotics, where cost reduction may facilitate adoption, to name two examples. Moreover, the soft-robot-esque nature of malleable robots also lends itself to collaborative robots in factories, working safely alongside and with humans.

“The idea of malleable robots came from the realization that the majority of serial robot arms have 6 or more degrees of freedom (DoF), yet are typically performing tasks that only require 2 or 3 DoF”
—Angus B. Clark, Imperial College London

Compared to a conventional rigid link between joints, what are the disadvantages of using a malleable link?

Currently the maximum stiffness of a malleable link is considerably lower than that of an equivalent solid steel rigid link, and this is one of the key areas we are focusing our research on improving, as motion precision and accuracy are impacted. We have created the largest existing variable-stiffness link, at roughly 800 mm in length and 50 mm in diameter, which suits malleable robots to small and medium-sized workspaces. Our current results evaluating this accuracy are good; however, achieving a uniform stiffness across the entire malleable link can be problematic due to wrinkles forming in the encapsulating membrane under bending. As demonstrated by our SCARA topology results, this can produce slight structural variations, resulting in reduced accuracy.

Does the robot have any way of knowing its own shape? Potentially, could this system reconfigure itself somehow?

Currently we compute the robot topology using motion tracking, with markers placed on the joints of the robot. Using distance geometry, we are then able to obtain the forward and inverse kinematics of the robot, which we can use to control the end effector (the gripper) of the robot. Ideally, in the future we would love to develop a system that no longer requires the use of motion tracking cameras.

As for the robot reconfiguring itself, which we call an “intrinsic malleable link,” there are many methods that have been demonstrated for controlling a continuum structure, such as using positive pressure or tendon wires; however, the ability to determine the curvature of the link in real time, not just the joint positions, is a significant hurdle to solve. We hope to see future development on malleable robots work toward solving this problem.
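
To make the marker-based idea concrete, here's a toy sketch (our illustration, not the REDS Lab method) of how tracked markers at the base, the joint, and the gripper yield effective link lengths that can seed a kinematic model after each reshape:

```python
# Toy illustration: recover a serial chain's geometry from tracked joint markers.
# With markers at the base, at each joint, and at the gripper, link lengths and
# the end-effector position follow from simple vector arithmetic.
import numpy as np

# Assumed marker positions from a motion-capture system (metres, made up).
base = np.array([0.00, 0.00, 0.00])
joint = np.array([0.30, 0.10, 0.45])    # end of the malleable link
gripper = np.array([0.55, 0.15, 0.40])  # end effector

link1 = np.linalg.norm(joint - base)     # effective length of the malleable link
link2 = np.linalg.norm(gripper - joint)  # rigid distal link

# Unit direction of each link; re-measuring after every reshape updates the model.
u1 = (joint - base) / link1
u2 = (gripper - joint) / link2
end_effector = base + link1 * u1 + link2 * u2  # reconstructs the gripper position
print(link1, link2, end_effector)
```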

What are you working on next?

For us, refining the kinematics of the robot to enable a robust and complete system for allowing a user to collaboratively reshape the robot, while still achieving the accuracy expected from robotic systems, is our current main goal. Malleable robots are a brand new field we have introduced, and as such provide many opportunities for development and optimization. Over the coming years, we hope to see other researchers work alongside us to solve these problems.

“Design and Workspace Characterization of Malleable Robots,” by Angus B. Clark and Nicolas Rojas from Imperial College London, was presented at ICRA 2020.



#437791 Is the Pandemic Spurring a Robot ...

“Are robots really destined to take over restaurant kitchens?” This was the headline of an article published by Eater four years ago. One of the experts interviewed was Siddhartha Srinivasa, at the time a professor at the Robotics Institute at Carnegie Mellon University and currently director of Robotics and AI for Amazon. He said, “I’d love to make robots unsexy. It’s weird to say this, but when something becomes unsexy, it means that it works so well that you don’t have to think about it. You don’t stare at your dishwasher as it washes your dishes in fascination, because you know it’s gonna work every time… I want to get robots to that stage of reliability.”

Have we managed to get there over the last four years? Are robots unsexy yet? And how has the pandemic changed the trajectory of automation across industries?

The Covid Effect
The pandemic has had a massive economic impact all over the world, and one of the problems faced by many companies has been keeping their businesses running without putting employees at risk of infection. Many organizations are seeking to remain operational in the short term by automating tasks that would otherwise be carried out by humans. According to Digital Trends, since the start of the pandemic we have seen a significant increase in automation efforts in manufacturing, meatpacking, grocery stores, and more. In a June survey, 44 percent of corporate financial officers said they were considering more automation in response to the coronavirus.

MIT economist David Autor described the economic crisis and the Covid-19 pandemic as “an event that forces automation.” But he added that Covid-19 created a kind of disruption that has forced automation in sectors and activities with a shortage of workers, while at the same time there has been no reduction in demand. This hasn’t taken place in hospitality, where demand has practically disappeared, but it is still present in agriculture and distribution. The latter is being altered by the rapid growth of e-commerce, with more efficient and automated warehouses that can provide better service.

China Leads the Way
China is currently in a unique position to lead the world’s automation economy. Although the country boasts a huge workforce, labor costs have increased tenfold over the past 20 years. As the world’s factory, China has a strong incentive to automate its manufacturing sector, which already holds a solid lead in high-quality production. China is currently the largest and fastest-growing market in the world for industrial robotics, with sales growing 21 percent to $5.4 billion in 2019. This represents one third of global sales. As a result, Chinese companies are developing a significant advantage in learning to work with their metallic colleagues.

The reasons behind this Asian dominance are evident: the population has a greater capacity and need for tech adoption. A large percentage of the population will soon be of retirement age, without an equivalent younger demographic to replace it, leading to a pressing need to adopt automation in the short term.

China is well ahead of other countries in restaurant automation. As reported in Bloomberg, in early 2020 UBS Group AG conducted a survey of over 13,000 consumers in different countries and found that 64 percent of Chinese participants had ordered meals through their phones at least once a week, compared to a mere 17 percent in the US. As digital ordering gains ground, robot waiters and chefs are likely not far behind. The West harbors a mistrust towards non-humans that the East does not.

The Robot Evolution
The pandemic was a perfect excuse for robots to replace us. But despite the hype around this idea, robots have mostly disappointed during the pandemic.

At least 66 different kinds of “social” robots have been piloted in hospitals, health centers, airports, office buildings, and other public and private spaces in response to the pandemic, according to a study from researchers at Pompeu Fabra University (Barcelona, Spain). Their survey looked at 195 robot deployments across 35 countries, including China, the US, Thailand, and Hong Kong.

But if the “robot revolution” is a movement in which automation, robotics, and artificial intelligence proliferate through the value chain of various industries, bringing a paradigm shift in how we produce, consume, and distribute products—it hasn’t happened yet.

There’s a more nuanced answer, though: rather than a revolution, we’re seeing an incremental robot evolution. It’s a trend that will likely accelerate over the next five years, particularly when 5G takes center stage and robotics as a field leaves behind imitation and evolves independently.

Automation Anxiety
Why hasn’t the long-promised robotic takeover arrived? Despite progress in AI and increased adoption of industrial robots, consumer-facing robotic products are not nearly as ubiquitous as popular culture predicted decades ago. As Amara’s Law says: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” It seems we are living through the Gartner hype cycle.

People have a complicated relationship with robots, torn between admiring them, fearing them, rejecting them, and even boycotting them, as has happened in the automobile industry.

Retail robot in a Walmart store. Credit: Bossa Nova Robotics
Walmart terminated its contract with Bossa Nova and withdrew its 1,000 inventory robots from its stores because the company was concerned about how shoppers were reacting to seeing the six-foot robots in the aisles.

With roadblocks like this, will the World Economic Forum’s prediction that almost half of tasks will be carried out by machines by 2025 come to pass?

At the rate we’re going, it seems unlikely, even with the boost in automation caused by the pandemic. Robotics will continue to advance its capabilities, and will take over more human jobs as it does so, but it’s unlikely we’ll hit a dramatic inflection point that could be described as a “revolution.” Instead, the robot evolution will happen the way most societal change does: incrementally, with time for people to adapt both practically and psychologically.

For now though, robots are still pretty sexy.

Image Credit: charles taylor / Shutterstock.com


#437789 Video Friday: Robotic Glove Features ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

RSS 2020 – July 12-16, 2020 – [Virtual Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
IROS 2020 – October 25-29, 2020 – Las Vegas, Nevada
ICSR 2020 – November 14-16, 2020 – Golden, Colorado
Let us know if you have suggestions for next week, and enjoy today’s videos.

Evidently, the folks at Unitree were paying attention to last week’s Video Friday.

[ Unitree ]

RoboSoft 2020 was a virtual conference this year (along with everything else), but they still held a soft robots contest, and here are four short vids—you can watch the rest of them here.

[ RoboSoft 2020 ]

If you were wondering why SoftBank bought Aldebaran Robotics and Boston Dynamics, here’s the answer.

I am now a Hawks fan. GO HAWKS!

[ Softbank Hawks ] via [ RobotStart ]

Scientists at the University of Liverpool have developed a fully autonomous mobile robot to assist them in their research. Using a type of AI, the robot has been designed to work uninterrupted for weeks at a time, allowing it to analyse data and make decisions on what to do next. Using a flexible arm with a customised gripper, it can be calibrated to interact with most standard lab equipment and machinery, as well as navigate safely around human co-workers and obstacles.

[ Nature ]

Oregon State’s Cassie has been on break for a couple of months, but it’s back in the lab and moving alarmingly quickly.

[ DRL ]

The current situation linked to COVID-19 sadly led to the postponement of this year’s RoboCup 2020 in Bordeaux. As an official sponsor of The RoboCup, SoftBank Robotics wanted to take this opportunity to thank all RoboCupers and The RoboCup Federation for their support these past 13 years. We invite you to take a look at NAO’s adventure at The RoboCup as the official robot of the Standard Platform League. See you in Bordeaux in 2021!

[ RoboCup 2021 ]

Miniature SAW robot crawling inside the intestines of a pig. You’re welcome.

[ Zarrouk Lab ]

The video demonstrates fast autonomous flight experiments in cluttered unknown environments, with the support of a robust and perception-aware replanning framework called RAPTOR. The associated paper is submitted to TRO.

[ HKUST ]

Since we haven’t gotten autonomy quite right yet, there’s a lot of telepresence going on for robots that operate in public spaces. Usually, you’ve got one remote human managing multiple robots, so it would be nice to make that interface a little more friendly, right?

[ HCI Lab ]

Arguable whether or not this is a robot, but it’s cool enough to spend a minute watching.

[ Ishikawa Lab ]

Communication is critical to collaboration; however, too much of it can degrade performance. Motivated by the need for effective use of a robot’s communication modalities, in this work, we present a computational framework that decides if, when, and what to communicate during human-robot collaboration.

[ Interactive Robotics ]

Robotiq has released the next generation of its grippers for collaborative robots: the 2F-85 and 2F-140. Both models gain greater robustness, safety, and customizability while retaining the same key benefits that have inspired thousands of manufacturers to choose them since their launch six years ago.

[ Robotiq ]

ANYmal C, the autonomous legged robot designed for challenging industrial environments, provides the mobility, autonomy, and inspection intelligence to enable safe and efficient inspection operations. In this virtual showcase, discover how ANYmal climbs stairs, recovers from a fall, performs an autonomous mission, avoids obstacles, docks to charge by itself, digitizes analogue sensors, and monitors the environment.

[ ANYbotics ]

At Waymo, we are committed to addressing inequality, and we believe listening is a critical first step toward driving positive change. Earlier this year, five Waymonauts sat down to share their thoughts on equity at work, challenging the status quo, and more. This is what they had to say.

[ Waymo ]

Nice of ABB to take in old robots and upgrade them to turn them into new robots again. Robots forever!

[ ABB ]

It’s nice seeing the progress being made by GITAI, one of the teams competing in the ANA Avatar XPRIZE Challenge, and also meeting the humans behind the robots.

[ GITAI ] via [ XPRIZE ]

One more talk from the ICRA Legged Robotics Workshop: Jingyu Liu from DeepRobotics and Qiuguo Zhu from Zhejiang University.

[ Deep Robotics ]


#437769 Q&A: Facebook’s CTO Is at War With ...

Photo: Patricia de Melo Moreira/AFP/Getty Images

Facebook chief technology officer Mike Schroepfer leads the company’s AI and integrity efforts.

Facebook’s challenge is huge. Billions of pieces of content—short and long posts, images, and combinations of the two—are uploaded to the site daily from around the world. And any tiny piece of that—any phrase, image, or video—could contain so-called bad content.

In its early days, Facebook relied on simple computer filters to identify potentially problematic posts by their words, such as those containing profanity. These automatically filtered posts, as well as posts flagged by users as offensive, went to humans for adjudication.

In 2015, Facebook started using artificial intelligence to cull images that contained nudity, illegal goods, and other prohibited content; those images identified as possibly problematic were sent to humans for further review.

By 2016, more offensive photos were reported by Facebook’s AI systems than by Facebook users (and that is still the case).

In 2018, Facebook CEO Mark Zuckerberg made a bold proclamation: He predicted that within five or ten years, Facebook’s AI would not only look for profanity, nudity, and other obvious violations of Facebook’s policies. The tools would also be able to spot bullying, hate speech, and other misuse of the platform, and put an immediate end to them.

Today, automated systems using algorithms developed with AI scan every piece of content between the time when a user completes a post and when it is visible to others on the site—just fractions of a second. In most cases, a violation of Facebook’s standards is clear, and the AI system automatically blocks the post. In other cases, the post goes to human reviewers for a final decision, a workforce that includes 15,000 content reviewers and another 20,000 employees focused on safety and security, operating out of more than 20 facilities around the world.
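
Conceptually, that routing is a thresholded classifier. Here's a minimal sketch of the decision logic, with made-up threshold values rather than anything Facebook has disclosed:

```python
# Sketch of the routing described above: score a post, auto-block clear
# violations, queue borderline cases for human review. Thresholds are assumed.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "allow", "block", or "human_review"
    score: float

AUTO_BLOCK = 0.98    # assumed: clear violation
NEEDS_REVIEW = 0.60  # assumed: uncertain region

def route(post_text: str, classifier) -> Decision:
    score = classifier(post_text)  # probability the post violates policy
    if score >= AUTO_BLOCK:
        return Decision("block", score)
    if score >= NEEDS_REVIEW:
        return Decision("human_review", score)  # sent to a human reviewer
    return Decision("allow", score)

# Toy stand-in classifier for demonstration.
print(route("hello world", lambda text: 0.01))
```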

In the first quarter of this year, Facebook removed or took other action (like appending a warning label) on more than 9.6 million posts involving hate speech, 8.6 million involving child nudity or exploitation, almost 8 million posts involving the sale of drugs, 2.3 million posts involving bullying and harassment, and tens of millions of posts violating other Facebook rules.

Right now, Facebook has more than 1,000 engineers working on further developing and implementing what the company calls “integrity” tools. Using these systems to screen every post that goes up on Facebook, and doing so in milliseconds, is sucking up computing resources. Facebook chief technology officer Mike Schroepfer, who is heading up Facebook’s AI and integrity efforts, spoke with IEEE Spectrum about the team’s progress on building an AI system that detects bad content.

Since that discussion, Facebook’s policies around hate speech have come under increasing scrutiny, with particular attention on divisive posts by political figures. A group of major advertisers in June announced that they would stop advertising on the platform while reviewing the situation, and civil rights groups are putting pressure on others to follow suit until Facebook makes policy changes related to hate speech and groups that promote hate, misinformation, and conspiracies.

Facebook CEO Mark Zuckerberg responded with news that Facebook will widen the category of what it considers hateful content in ads. Now the company prohibits claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity, or immigration status are a threat to the physical safety, health, or survival of others. The policy change also aims to better protect immigrants, migrants, refugees, and asylum seekers from ads suggesting these groups are inferior or expressing contempt. Finally, Zuckerberg announced that the company will label some problematic posts by politicians and government officials as content that violates Facebook’s policies.

However, civil rights groups say that’s not enough. And an independent audit released in July also said that Facebook needs to go much further in addressing civil rights concerns and disinformation.

Schroepfer indicated that Facebook’s AI systems are designed to quickly adapt to changes in policy. “I don’t expect considerable technical changes are needed to adjust,” he told Spectrum.

This interview has been edited and condensed for clarity.

IEEE Spectrum: What are the stakes of content moderation? Is this an existential threat to Facebook? And is it critical that you deal well with the issue of election interference this year?

Schroepfer: It’s probably existential; it’s certainly massive. We are devoting a tremendous amount of our attention to it.

The idea that anyone could meddle in an election is deeply disturbing and offensive to all of us here, just as people and citizens of democracies. We don’t want to see that happen anywhere, and certainly not on our watch. So whether it’s important to the company or not, it’s important to us as people. And I feel a similar way on the content-moderation side.

There are not a lot of easy choices here. The only way to prevent people, with certainty, from posting bad things is to not let them post anything. We can take away all voice and just say, “Sorry, the Internet’s too dangerous. No one can use it.” That will certainly get rid of all hate speech online. But I don’t want to end up in that world. And there are variants of that world that various governments are trying to implement, where they get to decide what’s true or not, and you as a person don’t. I don’t want to get there either.

My hope is that we can build a set of tools that make it practical for us to do a good enough job, so that everyone is still excited about the idea that anyone can share what they want, and so that Facebook is a safe and reasonable place for people to operate in.

Spectrum: You joined Facebook in 2008, before AI was part of the company’s toolbox. When did that change? When did you begin to think that AI tools would be useful to Facebook?

Schroepfer: Ten years ago, AI wasn’t commercially practical; the technology just didn’t work very well. In 2012, there was one of those moments that a lot of people point to as the beginning of the current revolution in deep learning and AI. A computer-vision model—a neural network—was trained using what we call supervised training, and it turned out to be better than all the existing models.

Spectrum: How is that training done, and how did computer-vision models come to Facebook?

Image: Facebook

Just Broccoli? Facebook’s image analysis algorithms can tell the difference between marijuana [left] and tempura broccoli [right] better than some humans.

Schroepfer: Say I take a bunch of photos and I have people look at them. If they see a photo of a cat, they put a text label that says cat; if it’s one of a dog, the text label says dog. If you build a big enough data set and feed that to the neural net, it learns how to tell the difference between cats and dogs.

Prior to 2012, it didn’t work very well. And then in 2012, there was this moment where it seemed like, “Oh wow, this technique might work.” And a few years later we were deploying that form of technology to help us detect problematic imagery.
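
The recipe he describes is standard supervised learning. Here's a compact sketch with a toy model and random stand-in data (nothing like Facebook's 2012-era setup):

```python
# Supervised training in miniature: human-labeled images in, a network learns
# to separate the classes by gradient descent.
import torch
import torch.nn as nn

model = nn.Sequential(  # tiny stand-in for a convolutional classifier
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 2),  # logits for {cat, dog}
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(64, 3, 32, 32)  # stand-in photos
labels = torch.randint(0, 2, (64,))  # 0 = cat, 1 = dog, from human labelers

for step in range(100):  # fit the model to the labeled set
    loss = loss_fn(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(loss.item())
```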

Spectrum: Do your AI systems work equally well on all types of prohibited content?

Schroepfer: Nudity was technically easiest. I don’t need to understand language or culture to understand that this is either a naked human or not. Violence is a much more nuanced problem, so it was harder technically to get it right. And with hate speech, not only do you have to understand the language, it may be very contextual, even tied to recent events. A week before the Christchurch shooting [New Zealand, 2019], saying “I wish you were in the mosque” probably doesn’t mean anything. A week after, that might be a terrible thing to say.

Spectrum: How much progress have you made on hate speech?

Schroepfer: AI, in the first quarter of 2020, proactively detected 88.8 percent of the hate-speech content we removed, up from 80.2 percent in the previous quarter. In the first quarter of 2020, we took action on 9.6 million pieces of content for violating our hate-speech policies.

Image: Facebook

Off Label: Sometimes image analysis isn’t enough to determine whether a picture posted violates the company’s policies. In considering these candy-colored vials of marijuana, for example, the algorithms can look at any accompanying text and, if necessary, comments on the post.

Spectrum: It sounds like you’ve expanded beyond tools that analyze images and are also using AI tools that analyze text.

Schroepfer: AI started off as very siloed. People worked on language, people worked on computer vision, people worked on video. We’ve put these things together—in production, not just as research—into multimodal classifiers.

[Schroepfer shows a photo of a pan of Rice Krispies treats, with text referring to it as a “potent batch”] This is a case in which you have an image, and then you have the text on the post. This looks like Rice Krispies. On its own, this image is fine. You put the text together with it in a bigger model; that can then understand what’s going on. That didn’t work five years ago.
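
A multimodal classifier in the spirit he describes can be sketched as two encoders whose features are fused before a shared policy head; the architecture below is an assumption for illustration, not Facebook's production model:

```python
# Sketch of a multimodal classifier: image and text features fused into one model,
# so benign-looking pixels plus "potent batch" text can still trip a violation.
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    def __init__(self, img_dim=512, txt_dim=256, num_policies=5):
        super().__init__()
        self.img_enc = nn.Linear(img_dim, 128)  # stand-in for a vision backbone
        self.txt_enc = nn.Linear(txt_dim, 128)  # stand-in for a language model
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Linear(256, num_policies),  # one score per policy category
        )

    def forward(self, img_feat, txt_feat):
        fused = torch.cat([self.img_enc(img_feat), self.txt_enc(txt_feat)], dim=-1)
        return self.head(fused)

clf = MultimodalClassifier()
print(clf(torch.randn(1, 512), torch.randn(1, 256)).shape)  # (1, 5) policy scores
```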

Spectrum: Today, every post that goes up on Facebook is immediately checked by automated systems. Can you explain that process?

Image: Facebook

Bigger Picture: Identifying hate speech is often a matter of context. Either the text or the photo in this post isn’t hateful standing alone, but putting them together tells a different story.

Schroepfer: You upload an image and you write some text underneath it, and the systems look at both the image and the text to try to see which, if any, policies it violates. Those decisions are based on our Community Standards. It will also look at other signals on the posts, like the comments people make.

It happens relatively instantly, though there may be times things happen after the fact. Maybe you uploaded a post that had misinformation in it, and at the time you uploaded it, we didn’t know it was misinformation. The next day we fact-check something and scan again; we may find your post and take it down. As we learn new things, we’re going to go back through and look for violations of what we now know to be a problem. Or, as people comment on your post, we might update our understanding of it. If people are saying, “That’s terrible,” or “That’s mean,” or “That looks fake,” those comments may be an interesting signal.

Spectrum: How is Facebook applying its AI tools to the problem of election interference?

Schroepfer: I would split election interference into two categories. There are times when you’re going after the content, and there are times you’re going after the behavior or the authenticity of the person.

On content, if you’re sharing misinformation, saying, “It’s super Wednesday, not super Tuesday, come vote on Wednesday,” that’s a problem whether you’re an American sitting in California or a foreign actor.

Other times, people create a series of Facebook pages pretending they’re Americans, but they’re really a foreign entity. That is a problem on its own, even if all the content they’re sharing completely meets our Community Standards. The problem there is that you have a foreign government running an information operation.

There, you need different tools. What you’re trying to do is put pieces together, to say, “Wait a second. All of these pages—Martians for Justice, Moonlings for Justice, and Venusians for Justice—are all run by an administrator with an IP address that’s outside the United States.” So they’re all connected, even though they’re pretending to not be connected. That’s a very different problem than me sitting in my office in Menlo Park [Calif.] sharing misinformation.

I’m not going to go into lots of technical detail, because this is an area of adversarial nature. The fundamental problem you’re trying to solve is that there’s one entity coordinating the activity of a bunch of things that look like they’re not all one thing. So this is a series of Instagram accounts, or a series of Facebook pages, or a series of WhatsApp accounts, and they’re pretending to be totally different things. We’re looking for signals that these things are related in some way. And we’re looking through the graph [what Facebook calls its map of relationships between users] to understand the properties of this network.
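
The clustering idea can be illustrated with a toy example: group accounts by a shared infrastructure signal (the admin-IP field below is assumed for illustration) and flag clusters that present themselves as unrelated. A cartoon of the concept, not Facebook's tooling:

```python
# Toy sketch: link pages by a shared signal and surface suspicious clusters.
from collections import defaultdict

pages = [
    {"name": "Martians for Justice", "admin_ip": "203.0.113.7"},
    {"name": "Moonlings for Justice", "admin_ip": "203.0.113.7"},
    {"name": "Venusians for Justice", "admin_ip": "203.0.113.7"},
    {"name": "Local Bake Sale", "admin_ip": "198.51.100.2"},
]

clusters = defaultdict(list)  # a shared admin IP is one example linking signal
for page in pages:
    clusters[page["admin_ip"]].append(page["name"])

for ip, names in clusters.items():
    if len(names) > 1:
        print(f"possible coordinated network via {ip}: {names}")
```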

Spectrum: What cutting-edge AI tools and methods have you been working on lately?

Schroepfer: Supervised learning, with humans setting up the instruction process for the AI systems, is amazingly effective. But it has a very obvious flaw: the speed at which you can develop these things is limited by how fast you can curate the data sets. If you’re dealing in a problem domain where things change rapidly, you have to rebuild a new data set and retrain the whole thing.

Self-supervision is inspired by the way people learn, by the way kids explore the world around them. To get computers to do it themselves, we take a bunch of raw data and build a way for the computer to construct its own tests. For language, you scan a bunch of Web pages, and the computer builds a test where it takes a sentence, eliminates one of the words, and figures out how to predict what word belongs there. And because it created the test, it actually knows the answer. I can use as much raw text as I can find and store because it’s processing everything itself and doesn’t require us to sit down and build the information set. In the last two years there has been a revolution in language understanding as a result of AI self-supervised learning.
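
The self-labeling test he describes is easy to sketch: hide a word in raw text and the training pair creates itself:

```python
# Masked-word self-supervision in miniature: the data generates its own labels.
import random

def make_cloze_example(sentence: str):
    """Turn a raw sentence into a (masked input, answer) training pair."""
    words = sentence.split()
    i = random.randrange(len(words))
    answer = words[i]
    words[i] = "[MASK]"
    return " ".join(words), answer

raw_text = "the computer builds a test where it takes a sentence"
masked, target = make_cloze_example(raw_text)
print(masked)  # e.g. "the computer builds a [MASK] where it takes a sentence"
print(target)  # the model is trained to predict this word
```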

Spectrum: What else are you excited about?

Schroepfer: What we’ve been working on over the last few years is multilingual understanding. Usually, when I’m trying to figure out, say, whether something is hate speech or not I have to go through the whole process of training the model in every language. I have to do that one time for every language. When you make a post, the first thing we have to figure out is what language your post is in. “Ah, that’s Spanish. So send it to the Spanish hate-speech model.”

We’ve started to build a multilingual model—one box where you can feed in text in 40 different languages and it determines whether it’s hate speech or not. This is way more effective and easier to deploy.
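
A cartoon of the "one box" idea: a single shared encoder feeds one hate-speech head, so there's no per-language routing. The encoder below is a stand-in, since the real architecture isn't described in detail:

```python
# One shared model for all languages instead of a classifier per language.
import torch
import torch.nn as nn

class MultilingualClassifier(nn.Module):
    def __init__(self, vocab_size=50000, embed_dim=128):
        super().__init__()
        # One shared subword embedding covers text from every language.
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)
        self.head = nn.Linear(embed_dim, 2)  # violates / does not violate

    def forward(self, token_ids):
        return self.head(self.embed(token_ids))

clf = MultilingualClassifier()
tokens = torch.randint(0, 50000, (1, 12))  # same box for Spanish, English, ...
print(clf(tokens).shape)
```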

To geek out for a second, just the idea that you can build a model that understands a concept in multiple languages at once is crazy cool. And it not only works for hate speech, it works for a variety of things.

When we started working on this multilingual model years ago, it performed worse than every single individual model. Now, it not only works as well as the English model, but when you get to the languages where you don’t have enough data, it’s so much better. This rapid progress is very exciting.

Spectrum: How do you move new AI tools from your research labs into operational use?

Schroepfer: Engineers trying to make the next breakthrough will often say, “Cool, I’ve got a new thing and it achieved state-of-the-art results on machine translation.” And we say, “Great. How long does it take to run in production?” They say, “Well, it takes 10 seconds for every sentence to run on a CPU.” And we say, “It’ll eat our whole data center if we deploy that.” So we take that state-of-the-art model and we make it 10 or a hundred or a thousand times more efficient, maybe at the cost of a little bit of accuracy. So it’s not as good as the state-of-the-art version, but it’s something we can actually put into our data centers and run in production.
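
Schroepfer doesn't say which optimizations Facebook uses; post-training dynamic quantization is one widely used way to trade a sliver of accuracy for much cheaper inference:

```python
# One common efficiency technique (not necessarily Facebook's): shrink a model's
# linear layers to int8 weights so it runs cheaply in production.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 2))
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8  # int8 weights for linear layers
)
x = torch.randn(1, 512)
print(model(x), quantized(x))  # similar outputs; smaller, faster model
```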

Spectrum: What’s the role of the humans in the loop? Is it true that Facebook currently employs 35,000 moderators?

Schroepfer: Yes. Right now our goal is not to reduce that. Our goal is to do a better job catching bad content. People often think that the end state will be a fully automated system. I don’t see that world coming anytime soon.

As automated systems get more sophisticated, they take more and more of the grunt work away, freeing up the humans to work on the really gnarly stuff where you have to spend an hour researching.

We also use AI to give our human moderators power tools. Say I spot this new meme that is telling everyone to vote on Wednesday rather than Tuesday. I have a tool in front of me that says, “Find variants of that throughout the system. Find every photo with the same text, find every video that mentions this thing, and kill it in one shot.” That’s rather than me finding this one picture while a bunch of other people upload that misinformation in different forms.
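
One way such a "find variants" tool can work is to fingerprint content so that trivially modified copies collide. Here's a toy sketch using normalized text hashing; production systems lean on more robust perceptual hashes and embeddings, and this exact scheme is our assumption:

```python
# Toy variant finder: normalize text, hash it, and match copies of a known
# bad post even when capitalization and punctuation differ.
import hashlib
import re

def fingerprint(text: str) -> str:
    normalized = re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()
    return hashlib.sha256(normalized.encode()).hexdigest()

bad = fingerprint("Vote on Wednesday, not Tuesday!")
candidates = [
    "VOTE ON WEDNESDAY... not Tuesday!",
    "Polls are open all day Tuesday",
]
for post in candidates:
    if fingerprint(post) == bad:
        print("variant found:", post)
```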

Another important aspect of AI is that anything I can do to prevent a person from having to look at terrible things is time well spent. Whether it’s a person employed by us as a moderator or a user of our services, looking at these things is a terrible experience. If I can build systems that take the worst of the worst, the really graphic violence, and deal with that in an automated fashion, that’s worth a lot to me.
