Tag Archives: practical
#438731 Video Friday: Perseverance Lands on Mars
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
Let us know if you have suggestions for next week, and enjoy today's videos.
Hmm, did anything interesting happen in robotics yesterday?
Obviously, we're going to have tons more on the Mars Rover and Mars Helicopter over the next days, weeks, months, years, and (if JPL's track record has anything to say about it) decades. Meantime, here's what's going to happen over the next day or two:
[ Mars 2020 ]
PLEN hopes you had a happy Valentine's Day!
[ PLEN ]
Unitree dressed up a whole bunch of Laikago quadrupeds to take part in the 2021 Spring Festival Gala in China.
[ Unitree ]
Thanks Xingxing!
Marine iguanas compete for the best nesting sites on the Galapagos Islands. Meanwhile, RoboSpy Iguana gets involved in a snot-sneezing competition after the marine iguanas return from the sea.
[ Spy in the Wild ]
Tails, it turns out, are useful for almost everything.
[ DART Lab ]
In partnership with MD-TEC, this video demonstrates the use of teleoperated robotic arms and a virtual reality interface to perform closed suction for self-ventilating tracheostomy patients during the COVID-19 outbreak. Closed suction is recommended to minimise the aerosol generated during this procedure. This robotic method avoids staff exposure to the virus, further protecting the NHS.
[ Extend Robotics ]
Fotokite is a safe, practical way to do local surveillance with a drone.
I just wish they still had a consumer version 🙁
[ Fotokite ]
How to confuse fish.
[ Harvard ]
Army researchers recently expanded their research area for robotics to a site just north of Baltimore. Earlier this year, Army researchers performed the first fully-autonomous tests onsite using an unmanned ground vehicle test bed platform, which serves as the standard baseline configuration for multiple programmatic efforts within the laboratory. As a means to transition from simulation-based testing, the primary purpose of this test event was to capture relevant data in a live, operationally-relevant environment.
[ Army ]
Flexiv's new RIZON 10 robot hopes you had a happy Valentine's Day!
[ Flexiv ]
Thanks Yunfan!
An inchworm-inspired crawling robot (iCrawl) is a 5-DOF robot with two legs, each with an electromagnetic foot for crawling on metal pipe surfaces. The robot uses a passive foot-cap underneath each electromagnetic foot, enabling it to be a versatile pipe crawler. The robot can crawl on metal pipes of various curvatures in both horizontal and vertical directions, and can serve as a new robotic solution for close inspection of the outside of pipelines, minimizing downtime in the oil and gas industry.
[ Paper ]
Thanks Poramate!
A short film about Robot Wars from Blender Magazine in 1995.
[ YouTube ]
While modern cameras provide machines with a very well-developed sense of vision, robots still lack such a comprehensive solution for their sense of touch. The talk will present examples of why the sense of touch can prove crucial for a wide range of robotic applications, and a tech demo will introduce a novel sensing technology targeting the next generation of soft robotic skins. The prototype of the tactile sensor developed at ETH Zurich exploits the advances in camera technology to reconstruct the forces applied to a soft membrane. This technology has the potential to revolutionize robotic manipulation, human-robot interaction, and prosthetics.
[ ETHZ ]
Thanks Markus!
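If you're curious how camera-based tactile sensing can work in principle, here's a rough Python sketch (our illustration, not the ETH Zurich prototype): markers printed on a soft membrane are tracked by a camera, and a calibration matrix fitted by least squares maps their displacements to the applied force. All the dimensions and data below are made up.

```python
import numpy as np

# Illustrative only: marker displacements on a soft membrane -> contact force,
# via a linear calibration fitted offline on a test rig. Not the ETHZ sensor.
rng = np.random.default_rng(0)

N_MARKERS = 25
# Hypothetical calibration data: (displacement, measured force) pairs from a rig.
true_C = rng.normal(size=(3, 2 * N_MARKERS)) * 0.1          # stand-in for the real mapping
disp_samples = rng.normal(size=(200, 2 * N_MARKERS))        # tracked marker displacements (x, y per marker)
force_samples = disp_samples @ true_C.T + rng.normal(scale=0.01, size=(200, 3))

# Fit the calibration matrix by least squares: force ≈ C @ displacements.
C, *_ = np.linalg.lstsq(disp_samples, force_samples, rcond=None)
C = C.T                                                      # shape (3, 2 * N_MARKERS)

# At run time: track the markers, take their displacements, read out the force.
new_disp = rng.normal(size=2 * N_MARKERS)
force_estimate = C @ new_disp                                # [Fx, Fy, Fz]
print(force_estimate)
```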
Quadrupedal robotics has reached a level of performance and maturity that enables some of the most advanced real-world applications with autonomous mobile robots. Driven by excellent research in academia and industry all around the world, a growing number of platforms with different skills target different applications and markets. We have invited a selection of experts with long-standing experience in this vibrant research area.
[ IFRR ]
Thanks Fan!
Since January 2020, more than 300 different robots in over 40 countries have been used to cope with some aspect of the impact of the coronavirus pandemic on society. The majority of these robots have been used to support clinical care and public safety, allowing responders to work safely and to handle the surge in infections. This panel will discuss how robots have been successfully used and what is needed, both in terms of fundamental research and policy, for robotics to be prepared for future emergencies.
[ IFRR ]
At Skydio, we ship autonomous robots that are flown at scale in complex, unknown environments every day. We’ve invested six years of R&D into handling extreme visual scenarios not typically considered by academia or encountered by cars, ground robots, or AR applications. Drones are commonly in scenes with few or no semantic priors on the environment and must deftly navigate thin objects, extreme lighting, camera artifacts, motion blur, textureless surfaces, vibrations, dirt, smudges, and fog. These challenges are daunting for classical vision, because photometric signals are simply inconsistent. And yet, there is no ground truth for direct supervision of deep networks. We’ll take a detailed look at these issues and how we’ve tackled them to push the state of the art in visual inertial navigation, obstacle avoidance, and rapid trajectory planning. We will also cover the new capabilities on top of our core navigation engine to autonomously map complex scenes and capture all surfaces, by performing real-time 3D reconstruction across multiple flights.
[ UPenn ]
#437940 How Boston Dynamics Taught Its Robots to ...
A week ago, Boston Dynamics posted a video of Atlas, Spot, and Handle dancing to “Do You Love Me.” It was, according to the video description, a way “to celebrate the start of what we hope will be a happier year.” As of today the video has been viewed nearly 24 million times, and the popularity is no surprise, considering the compelling mix of technical prowess and creativity on display.
Strictly speaking, the stuff going on in the video isn’t groundbreaking, in the sense that we’re not seeing any of the robots demonstrate fundamentally new capabilities, but that shouldn’t take away from how impressive it is—you’re seeing the state of the art in humanoid robotics, quadrupedal robotics, and whatever-the-heck-Handle-is robotics.
What is unique about this video from Boston Dynamics is the artistic component. We know that Atlas can do some practical tasks, and we know it can do some gymnastics and some parkour, but dancing is certainly something new. To learn more about what it took to make these dancing robots happen (and it’s much more complicated than it might seem), we spoke with Aaron Saunders, Boston Dynamics’ VP of Engineering.
Saunders started at Boston Dynamics in 2003, meaning that he’s been a fundamental part of a huge number of Boston Dynamics’ robots, even the ones you may have forgotten about. Remember LittleDog, for example? A team of two designed and built that adorable little quadruped, and Saunders was one of them.
While he’s been part of the Atlas project since the beginning (and had a hand in just about everything else that Boston Dynamics works on), Saunders has spent the last few years leading the Atlas team specifically, and he was kind enough to answer our questions about their dancing robots.
IEEE Spectrum: What’s your sense of how the Internet has been reacting to the video?
Aaron Saunders: We have different expectations for the videos that we make; this one was definitely anchored in fun for us. The response on YouTube was record-setting for us: We received hundreds of emails and calls with people expressing their enthusiasm, and also sharing their ideas for what we should do next, what about this song, what about this dance move, so that was really fun. My favorite reaction was one that I got from my 94-year-old grandma, who watched the video on YouTube and then sent a message through the family asking if I’d taught the robot those sweet moves. I think this video connected with a broader audience, because it mixed the old-school music with new technology.
We haven’t seen Atlas move like this before—can you talk about how you made it happen?
We started by working with dancers and a choreographer to create an initial concept for the dance by composing and assembling a routine. One of the challenges, and probably the core challenge for Atlas in particular, was adjusting human dance moves so that they could be performed on the robot. To do that, we used simulation to rapidly iterate through movement concepts while soliciting feedback from the choreographer to reach behaviors that Atlas had the strength and speed to execute. It was very iterative—they would literally dance out what they wanted us to do, and the engineers would look at the screen and go “that would be easy” or “that would be hard” or “that scares me.” And then we’d have a discussion, try different things in simulation, and make adjustments to find a compatible set of moves that we could execute on Atlas.
Throughout the project, the time frame for creating those new dance moves got shorter and shorter as we built tools, and as an example, eventually we were able to use that toolchain to create one of Atlas’ ballet moves in just one day, the day before we filmed, and it worked. So it’s not hand-scripted or hand-coded, it’s about having a pipeline that lets you take a diverse set of motions, that you can describe through a variety of different inputs, and push them through and onto the robot.
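To make the iterate-in-simulation idea concrete, here's a minimal Python sketch of the kind of feasibility check such a pipeline needs to run on a proposed move; the timestep, joint limits, and keyframes below are assumptions for illustration, not Boston Dynamics' tooling.

```python
import numpy as np

# Minimal sketch: resample choreographer keyframes for one joint into a dense
# trajectory and flag the move if it exceeds assumed angle or velocity limits.
DT = 0.01                                   # control timestep (s), assumed
POS_LIMIT = np.deg2rad(120.0)               # illustrative joint-angle limit
VEL_LIMIT = np.deg2rad(300.0)               # illustrative joint-velocity limit

def retarget(keyframe_times, keyframe_angles):
    """Interpolate keyframes to the control rate and report whether the move is feasible."""
    t = np.arange(keyframe_times[0], keyframe_times[-1], DT)
    q = np.interp(t, keyframe_times, keyframe_angles)        # dense joint trajectory
    qd = np.gradient(q, DT)                                   # joint velocities
    ok = np.all(np.abs(q) <= POS_LIMIT) and np.all(np.abs(qd) <= VEL_LIMIT)
    return q, ok

# A hypothetical "dance step" for one hip joint: fast swing out and back.
times = np.array([0.0, 0.3, 0.6, 1.0])
angles = np.deg2rad(np.array([0.0, 90.0, -60.0, 0.0]))
_, feasible = retarget(times, angles)
print("that would be easy" if feasible else "that scares me")
```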
Image: Boston Dynamics
Were there some things that were particularly difficult to translate from human dancers to Atlas? Or, things that Atlas could do better than humans?
Some of the spinning turns in the ballet parts took more iterations to get to work, because they were the furthest from leaping and running and some of the other things that we have more experience with, so they challenged both the machine and the software in new ways. We definitely learned not to underestimate how flexible and strong dancers are—when you take elite athletes and you try to do what they do but with a robot, it’s a hard problem. It’s humbling. Fundamentally, I don’t think that Atlas has the range of motion or power that these athletes do, although we continue developing our robots toward that, because we believe that in order to broadly deploy these kinds of robots commercially, and eventually in a home, they need to have this level of performance.
One thing that robots are really good at is doing something over and over again the exact same way. So once we dialed in what we wanted to do, the robots could just do it again and again as we played with different camera angles.
I can understand how you could use human dancers to help you put together a routine with Atlas, but how did that work with Spot, and particularly with Handle?
I think the people we worked with actually had a lot of talent for thinking about motion, and thinking about how to express themselves through motion. And our robots do motion really well—they’re dynamic, they’re exciting, they balance. So I think what we found was that the dancers connected with the way the robots moved, and then shaped that into a story, and it didn’t matter whether there were two legs or four legs. When you don’t necessarily have a template of animal motion or human behavior, you just have to think a little harder about how to go about doing something, and that’s true for more pragmatic commercial behaviors as well.
“We used simulation to rapidly iterate through movement concepts while soliciting feedback from the choreographer to reach behaviors that Atlas had the strength and speed to execute. It was very iterative—they would literally dance out what they wanted us to do, and the engineers would look at the screen and go ‘that would be easy’ or ‘that would be hard’ or ‘that scares me.’”
—Aaron Saunders, Boston Dynamics
How does the experience that you get teaching robots to dance, or to do gymnastics or parkour, inform your approach to robotics for commercial applications?
We think that the skills inherent in dance and parkour, like agility, balance, and perception, are fundamental to a wide variety of robot applications. Maybe more importantly, finding that intersection between building a new robot capability and having fun has been Boston Dynamics’ recipe for robotics—it’s a great way to advance.
One good example is how when you push limits by asking your robots to do these dynamic motions over a period of several days, you learn a lot about the robustness of your hardware. Spot, through its productization, has become incredibly robust, and required almost no maintenance—it could just dance all day long once you taught it to. And the reason it’s so robust today is because of all those lessons we learned from previous things that may have just seemed weird and fun. You’ve got to go into uncharted territory to even know what you don’t know.
Image: Boston Dynamics
It’s often hard to tell from watching videos like these how much time it took to make things work the way you wanted them to, and how representative they are of the actual capabilities of the robots. Can you talk about that?
Let me try to answer in the context of this video, but I think the same is true for all of the videos that we post. We work hard to make something, and once it works, it works. For Atlas, most of the robot control existed from our previous work, like the work that we’ve done on parkour, which sent us down a path of using model predictive controllers that account for dynamics and balance. We used those to run on the robot a set of dance steps that we’d designed offline with the dancers and choreographer. So, a lot of time, months, we spent thinking about the dance and composing the motions and iterating in simulation.
Dancing required a lot of strength and speed, so we even upgraded some of Atlas’ hardware to give it more power. Dance might be the highest power thing we’ve done to date—even though you might think parkour looks way more explosive, the amount of motion and speed that you have in dance is incredible. That also took a lot of time over the course of months; creating the capability in the machine to go along with the capability in the algorithms.
Once we had the final sequence that you see in the video, we only filmed for two days. Much of that time was spent figuring out how to move the camera through a scene with a bunch of robots in it to capture one continuous two-minute shot, and while we ran and filmed the dance routine multiple times, we could repeat it quite reliably. There was no cutting or splicing in that opening two-minute shot.
There were definitely some failures in the hardware that required maintenance, and our robots stumbled and fell down sometimes. These behaviors are not meant to be productized and to be 100 percent reliable, but they’re definitely repeatable. We try to be honest with showing things that we can do, not a snippet of something that we did once. I think there’s an honesty required in saying that you’ve achieved something, and that’s definitely important for us.
You mentioned that Spot is now robust enough to dance all day. How about Atlas? If you kept on replacing its batteries, could it dance all day, too?
Atlas, as a machine, is still, you know… there are only a handful of them in the world, they’re complicated, and reliability was not a main focus. We would definitely break the robot from time to time. But the robustness of the hardware, in the context of what we were trying to do, was really great. And without that robustness, we wouldn’t have been able to make the video at all. I think Atlas is a little more like a helicopter, where there’s a higher ratio between the time you spend doing maintenance and the time you spend operating. Whereas with Spot, the expectation is that it’s more like a car, where you can run it for a long time before you have to touch it.
When you’re teaching Atlas to do new things, is it using any kind of machine learning? And if not, why not?
As a company, we’ve explored a lot of things, but Atlas is not using a learning controller right now. I expect that a day will come when we will. Atlas’ current dance performance uses a mixture of what we like to call reflexive control, which is a combination of reacting to forces, online and offline trajectory optimization, and model predictive control. We leverage these techniques because they’re a reliable way of unlocking really high performance stuff, and we understand how to wield these tools really well. We haven’t found the end of the road in terms of what we can do with them.
We plan on using learning to extend and build on the foundation of software and hardware that we’ve developed, but I think that we, along with the community, are still trying to figure out where the right places to apply these tools are. I think you’ll see that as part of our natural progression.
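For readers unfamiliar with model predictive control, here's a deliberately tiny receding-horizon tracking sketch in Python for a 1-D double integrator. It only illustrates the plan-ahead-then-replan idea that MPC is built on; it bears no relation to Atlas' actual whole-body controllers, and all the numbers are invented.

```python
import numpy as np

# Minimal receding-horizon (MPC-style) tracking sketch for a 1-D double integrator.
dt, H = 0.05, 20                      # timestep, horizon length
A = np.array([[1, dt], [0, 1]])       # state: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])   # control: acceleration

def mpc_step(x0, ref, w_u=1e-2):
    """Choose the first control of the horizon that best tracks `ref` (positions)."""
    # Prediction matrices: predicted position at step k depends on x0 and past controls.
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1)[0:1] for k in range(H)])
    G = np.zeros((H, H))
    for k in range(H):
        for j in range(k + 1):
            G[k, j] = (np.linalg.matrix_power(A, k - j) @ B)[0, 0]
    # Least-squares over the control sequence, with a small effort penalty.
    rhs = ref - (Phi @ x0)
    U = np.linalg.solve(G.T @ G + w_u * np.eye(H), G.T @ rhs)
    return U[0]                        # apply only the first control, then re-plan

# Track a short "dance step": move from 0 to 0.3 m and back.
ref_full = np.concatenate([np.linspace(0, 0.3, 40), np.linspace(0.3, 0, 40)])
x = np.array([0.0, 0.0])
for t in range(len(ref_full) - H):
    u = mpc_step(x, ref_full[t:t + H])
    x = A @ x + (B * u).ravel()
print("final state:", x)
```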
Image: Boston Dynamics
Much of Atlas’ dynamic motion comes from its lower body at the moment, but parkour makes use of upper body strength and agility as well, and we’ve seen some recent concept images showing Atlas doing vaults and pullups. Can you tell us more?
Humans and animals do amazing things using their legs, but they do even more amazing things when they use their whole bodies. I think parkour provides a fantastic framework that allows us to progress towards whole body mobility. Walking and running was just the start of that journey. We’re progressing through more complex dynamic behaviors like jumping and spinning, that’s what we’ve been working on for the last couple of years. And the next step is to explore how using arms to push and pull on the world could extend that agility.
One of the missions that I’ve given to the Atlas team is to start working on leveraging the arms as much as we leverage the legs to enhance and extend our mobility, and I’m really excited about what we’re going to be working on over the next couple of years, because it’s going to open up a lot more opportunities for us to do exciting stuff with Atlas.
What’s your perspective on hydraulic versus electric actuators for highly dynamic robots?
Across my career at Boston Dynamics, I’ve felt passionately connected to so many different types of technology, but I’ve settled into a place where I really don’t think this is an either-or conversation anymore. I think the selection of actuator technology really depends on the size of the robot that you’re building, what you want that robot to do, where you want it to go, and many other factors. Ultimately, it’s good to have both kinds of actuators in your toolbox, and I love having access to both—and we’ve used both with great success to make really impressive dynamic machines.
I think the only delineation between hydraulic and electric actuators that appears to be distinct for me is probably in scale. It’s really challenging to make tiny hydraulic things because the industry just doesn’t do a lot of that, and the reciprocal is that the industry also doesn’t tend to make massive electrical things. So, you may find that to be a natural division between these two technologies.
Besides what you’re working on at Boston Dynamics, what recent robotics research are you most excited about?
For us as a company, we really love to follow advances in sensing, computer vision, terrain perception, these are all things where the better they get, the more we can do. For me personally, one of the things I like to follow is manipulation research, and in particular manipulation research that advances our understanding of complex, friction-based interactions like sliding and pushing, or moving compliant things like ropes.
We’re seeing a shift from just pinching things, lifting them, moving them, and dropping them, to much more meaningful interactions with the environment. Research in that type of manipulation I think is going to unlock the potential for mobile manipulators, and I think it’s really going to open up the ability for robots to interact with the world in a rich way.
Is there anything else you’d like people to take away from this video?
For me personally, and I think it’s because I spend so much of my time immersed in robotics and have a deep appreciation for what a robot is and what its capabilities and limitations are, one of my strong desires is for more people to spend more time with robots. We see a lot of opinions and ideas from people looking at our videos on YouTube, and it seems to me that if more people had opportunities to think about and learn about and spend time with robots, that new level of understanding could help them imagine new ways in which robots could be useful in our daily lives. I think the possibilities are really exciting, and I just want more people to be able to take that journey.
#437824 Video Friday: These Giant Robots Are ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
ACRA 2020 – December 8-10, 2020 – [Online]
Let us know if you have suggestions for next week, and enjoy today's videos.
“Who doesn’t love giant robots?”
Luma is a towering 8-metre snail which transforms spaces with its otherworldly presence. Another piece, Triffid, stands at 6 metres, and its flexible end sweeps high over audiences’ heads like an enchanted plant. The movement of the creatures is inspired by the flexible, wiggling and contorting motions of the animal kingdom and is designed to provoke instinctive reactions and emotions from the people that meet them. Air Giants is a new creative robotic studio founded in 2020. They are based in Bristol, UK, and comprise a small team of artists, roboticists and software engineers. The studio is passionate about creating emotionally effective motion at a scale which is thought-provoking and transporting, as well as expanding the notion of what large robots can be used for.
Here’s a behind-the-scenes look, and more on how the creatures work.
[ Air Giants ]
Thanks Emma!
If the idea of a very expensive sensor payload being submerged in a lake makes you as uncomfortable as it makes me, this is not the video for you.
[ ANYbotics ]
As the pandemic continues and measures to address the health crisis grow increasingly stringent, with many companies continuing to promote and encourage working from home, Pepper will allow you to keep in touch with your relatives or even your colleagues.
[ Softbank ]
Fairly impressive footwork from Tencent Robotics.
Although LittleDog was doing that like a decade ago:
[ Tencent ]
It's been long enough since I've been able to go out for boba tea that a robotic boba tea kiosk seems like a reasonable thing to get for my living room.
[ Bobacino ] via [ Gizmodo ]
Road construction and maintenance is challenging and dangerous work. Pioneer Industrial Systems has spent over twenty years designing custom robotic systems for industrial manufacturers around the world. These robotic systems greatly improve safety and increase efficiency. Now they’re taking that expertise on the road, with the Robotic Maintenance Vehicle. This base unit can be mounted on a truck or trailer, and utilizes various modules to perform a variety of road maintenance tasks.
[ Pioneer ]
The Extend Robotics arm uses cloud-based teleoperation software, featuring human-like dexterity and intelligence, with multiple applications in healthcare, utilities, and energy.
[ Extend Robotics ]
ARC, short for “AI, Robot, Cloud,” includes the latest algorithms and high precision data required for human-robot coexistence. Now with ultra-low latency networks, many robots can simultaneously become smarter, just by connecting to ARC. “ARC Eye” serves as the eyes for all robots, accurately determining the current location and route even indoors where there is no GPS access. “ARC Brain” is the computing system shared simultaneously by all robots, which plans and processes movement, localization, and task performance for the robot.
[ Naver Labs ]
How can we re-imagine urban infrastructures with cutting-edge technologies? Listen to this webinar from Ger Baron, Amsterdam’s CTO, and Senseable City Lab’s researchers, on how MIT and Amsterdam Institute for Advanced Metropolitan Solutions (AMS Institute) are reimagining Amsterdam’s canals with the first fleet of autonomous boats.
[ MIT ]
Join Guy Burroughes in this webinar recording to hear about Spot, the robot dog created by Boston Dynamics, and how RACE plan to use it in nuclear decommissioning and beyond.
[ UKAEA ]
This GRASP on Robotics seminar comes from Marco Pavone at Stanford University, “On Safe and Efficient Human-robot interactions via Multimodal Intent Modeling and Reachability-based Safety Assurance.”
In this talk I will present a decision-making and control stack for human-robot interactions by using autonomous driving as a motivating example. Specifically, I will first discuss a data-driven approach for learning multimodal interaction dynamics between robot-driven and human-driven vehicles based on recent advances in deep generative modeling. Then, I will discuss how to incorporate such a learned interaction model into a real-time, interaction-aware decision-making framework. The framework is designed to be minimally interventional; in particular, by leveraging backward reachability analysis, it ensures safety even when other cars defy the robot's expectations without unduly sacrificing performance. I will present recent results from experiments on a full-scale steer-by-wire platform, validating the framework and providing practical insights. I will conclude the talk by providing an overview of related efforts from my group on infusing safety assurances in robot autonomy stacks equipped with learning-based components, with an emphasis on adding structure within robot learning via control-theoretical and formal methods.
[ UPenn ]
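As a toy illustration of the "minimally interventional" safety idea from the talk (and emphatically not Pavone's actual framework), here's a Python sketch of a 1-D car-following scenario: the planner's nominal action is kept unless propagating a worst-case model of the other driver says it could lead to a collision, in which case the filter overrides with braking. All limits and parameters are assumptions.

```python
import numpy as np  # not strictly needed; kept for consistency with the other sketches

# Robot car following a human-driven car, 1-D. Invented numbers throughout.
DT, HORIZON = 0.1, 30
A_MIN, A_MAX = -6.0, 2.0      # robot acceleration limits (m/s^2), assumed
H_BRAKE = -8.0                # worst-case human braking assumed for the reachable set

def min_gap_under_worst_case(gap, v_robot, v_human, a_robot):
    """Propagate both cars forward; the human brakes as hard as the model allows,
    the robot applies a_robot for one step and then brakes fully. Return the smallest gap."""
    smallest = gap
    a = a_robot
    for _ in range(HORIZON):
        v_human = max(0.0, v_human + H_BRAKE * DT)
        v_robot = max(0.0, v_robot + a * DT)
        gap += (v_human - v_robot) * DT
        smallest = min(smallest, gap)
        a = A_MIN                       # after the first step, assume full braking is available
    return smallest

def safety_filter(gap, v_robot, v_human, a_nominal, margin=2.0):
    """Keep the nominal action if it stays clear of the unsafe set; otherwise brake."""
    if min_gap_under_worst_case(gap, v_robot, v_human, a_nominal) > margin:
        return a_nominal                # minimally interventional: planner's choice is kept
    return A_MIN                        # intervene only when safety is at stake

print(safety_filter(gap=15.0, v_robot=12.0, v_human=12.0, a_nominal=1.0))
print(safety_filter(gap=4.0,  v_robot=14.0, v_human=10.0, a_nominal=1.0))
```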
Autonomous Systems Failures: Who is Legally and Morally Responsible? Sponsored by Northwestern University’s Law and Technology Initiative and AI@NU, the event was moderated by Dan Linna and included Northwestern Engineering's Todd Murphey, University of Washington Law Professor Ryan Calo, and Google Senior Research Scientist Madeleine Clare Elish.
[ Northwestern ]
#437769 Q&A: Facebook’s CTO Is at War With ...
Photo: Patricia de Melo Moreira/AFP/Getty Images
Facebook chief technology officer Mike Schroepfer leads the company’s AI and integrity efforts.
Facebook’s challenge is huge. Billions of pieces of content—short and long posts, images, and combinations of the two—are uploaded to the site daily from around the world. And any tiny piece of that—any phrase, image, or video—could contain so-called bad content.
In its early days, Facebook relied on simple computer filters to identify potentially problematic posts by their words, such as those containing profanity. These automatically filtered posts, as well as posts flagged by users as offensive, went to humans for adjudication.
In 2015, Facebook started using artificial intelligence to cull images that contained nudity, illegal goods, and other prohibited content; those images identified as possibly problematic were sent to humans for further review.
By 2016, more offensive photos were reported by Facebook’s AI systems than by Facebook users (and that is still the case).
In 2018, Facebook CEO Mark Zuckerberg made a bold proclamation: He predicted that within five or ten years, Facebook’s AI would not only look for profanity, nudity, and other obvious violations of Facebook’s policies. The tools would also be able to spot bullying, hate speech, and other misuse of the platform, and put an immediate end to them.
Today, automated systems using algorithms developed with AI scan every piece of content between the time when a user completes a post and when it is visible to others on the site—just fractions of a second. In most cases, a violation of Facebook’s standards is clear, and the AI system automatically blocks the post. In other cases, the post goes to human reviewers for a final decision, a workforce that includes 15,000 content reviewers and another 20,000 employees focused on safety and security, operating out of more than 20 facilities around the world.
In the first quarter of this year, Facebook removed or took other action (like appending a warning label) on more than 9.6 million posts involving hate speech, 8.6 million involving child nudity or exploitation, almost 8 million posts involving the sale of drugs, 2.3 million posts involving bullying and harassment, and tens of millions of posts violating other Facebook rules.
Right now, Facebook has more than 1,000 engineers working on further developing and implementing what the company calls “integrity” tools. Using these systems to screen every post that goes up on Facebook, and doing so in milliseconds, is sucking up computing resources. Facebook chief technology officer Mike Schroepfer, who is heading up Facebook’s AI and integrity efforts, spoke with IEEE Spectrum about the team’s progress on building an AI system that detects bad content.
Since that discussion, Facebook’s policies around hate speech have come under increasing scrutiny, with particular attention on divisive posts by political figures. A group of major advertisers in June announced that they would stop advertising on the platform while reviewing the situation, and civil rights groups are putting pressure on others to follow suit until Facebook makes policy changes related to hate speech and groups that promote hate, misinformation, and conspiracies.
Facebook CEO Mark Zuckerberg responded with news that Facebook will widen the category of what it considers hateful content in ads. Now the company prohibits claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity, or immigration status are a threat to the physical safety, health, or survival of others. The policy change also aims to better protect immigrants, migrants, refugees, and asylum seekers from ads suggesting these groups are inferior or expressing contempt. Finally, Zuckerberg announced that the company will label some problematic posts by politicians and government officials as content that violates Facebook’s policies.
However, civil rights groups say that’s not enough. And an independent audit released in July also said that Facebook needs to go much further in addressing civil rights concerns and disinformation.
Schroepfer indicated that Facebook’s AI systems are designed to quickly adapt to changes in policy. “I don’t expect considerable technical changes are needed to adjust,” he told Spectrum.
This interview has been edited and condensed for clarity.
IEEE Spectrum: What are the stakes of content moderation? Is this an existential threat to Facebook? And is it critical that you deal well with the issue of election interference this year?
Schroepfer: It’s probably existential; it’s certainly massive. We are devoting a tremendous amount of our attention to it.
The idea that anyone could meddle in an election is deeply disturbing and offensive to all of us here, just as people and citizens of democracies. We don’t want to see that happen anywhere, and certainly not on our watch. So whether it’s important to the company or not, it’s important to us as people. And I feel a similar way on the content-moderation side.
There are not a lot of easy choices here. The only way to prevent people, with certainty, from posting bad things is to not let them post anything. We can take away all voice and just say, “Sorry, the Internet’s too dangerous. No one can use it.” That will certainly get rid of all hate speech online. But I don’t want to end up in that world. And there are variants of that world that various governments are trying to implement, where they get to decide what’s true or not, and you as a person don’t. I don’t want to get there either.
My hope is that we can build a set of tools that make it practical for us to do a good enough job, so that everyone is still excited about the idea that anyone can share what they want, and so that Facebook is a safe and reasonable place for people to operate in.
Spectrum: You joined Facebook in 2008, before AI was part of the company’s toolbox. When did that change? When did you begin to think that AI tools would be useful to Facebook?
Schroepfer: Ten years ago, AI wasn’t commercially practical; the technology just didn’t work very well. In 2012, there was one of those moments that a lot of people point to as the beginning of the current revolution in deep learning and AI. A computer-vision model—a neural network—was trained using what we call supervised training, and it turned out to be better than all the existing models.
Spectrum: How is that training done, and how did computer-vision models come to Facebook?
Image: Facebook
Just Broccoli? Facebook’s image analysis algorithms can tell the difference between marijuana [left] and tempura broccoli [right] better than some humans.
Schroepfer: Say I take a bunch of photos and I have people look at them. If they see a photo of a cat, they put a text label that says cat; if it’s one of a dog, the text label says dog. If you build a big enough data set and feed that to the neural net, it learns how to tell the difference between cats and dogs.
Prior to 2012, it didn’t work very well. And then in 2012, there was this moment where it seemed like, “Oh wow, this technique might work.” And a few years later we were deploying that form of technology to help us detect problematic imagery.
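Here's a toy version of that supervised-training recipe in PyTorch, with random tensors standing in for the human-labelled photos; it's purely illustrative, not Facebook's model.

```python
import torch
import torch.nn as nn

# Images labelled "cat" (0) or "dog" (1) are fed to a small network that learns to
# separate them. Random tensors stand in for a real labelled photo dataset.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),          # two classes: cat, dog
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(64, 3, 64, 64)       # stand-in for photos
labels = torch.randint(0, 2, (64,))       # stand-in for human-provided labels

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```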
Spectrum: Do your AI systems work equally well on all types of prohibited content?
Schroepfer: Nudity was technically easiest. I don’t need to understand language or culture to understand that this is either a naked human or not. Violence is a much more nuanced problem, so it was harder technically to get it right. And with hate speech, not only do you have to understand the language, it may be very contextual, even tied to recent events. A week before the Christchurch shooting [New Zealand, 2019], saying “I wish you were in the mosque” probably doesn’t mean anything. A week after, that might be a terrible thing to say.
Spectrum: How much progress have you made on hate speech?
Schroepfer: AI, in the first quarter of 2020, proactively detected 88.8 percent of the hate-speech content we removed, up from 80.2 percent in the previous quarter. In the first quarter of 2020, we took action on 9.6 million pieces of content for violating our hate-speech policies.
Image: Facebook
Off Label: Sometimes image analysis isn’t enough to determine whether a picture posted violates the company’s policies. In considering these candy-colored vials of marijuana, for example, the algorithms can look at any accompanying text and, if necessary, comments on the post.
Spectrum: It sounds like you’ve expanded beyond tools that analyze images and are also using AI tools that analyze text.
Schroepfer: AI started off as very siloed. People worked on language, people worked on computer vision, people worked on video. We’ve put these things together—in production, not just as research—into multimodal classifiers.
[Schroepfer shows a photo of a pan of Rice Krispies treats, with text referring to it as a “potent batch”] This is a case in which you have an image, and then you have the text on the post. This looks like Rice Krispies. On its own, this image is fine. You put the text together with it in a bigger model; that can then understand what’s going on. That didn’t work five years ago.
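A minimal PyTorch sketch of that multimodal idea might look like the following: an image branch and a text branch produce features that are fused before a single decision, so a benign-looking photo plus suspicious text can be judged together. The architecture, sizes, and class labels are illustrative assumptions, not Facebook's production classifiers.

```python
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, n_classes=2):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )                                                # -> 16-dim image feature
        self.text_branch = nn.EmbeddingBag(vocab_size, 32, mode="mean")  # -> 32-dim text feature
        self.head = nn.Linear(16 + 32, n_classes)        # fused decision: violating / benign

    def forward(self, image, token_ids):
        img_feat = self.image_branch(image)
        txt_feat = self.text_branch(token_ids)
        return self.head(torch.cat([img_feat, txt_feat], dim=1))

model = MultimodalClassifier()
image = torch.randn(1, 3, 64, 64)                        # e.g. the Rice Krispies photo
token_ids = torch.randint(0, 10_000, (1, 6))             # e.g. tokens for "potent batch ..."
print(model(image, token_ids).shape)                     # -> torch.Size([1, 2])
```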
Spectrum: Today, every post that goes up on Facebook is immediately checked by automated systems. Can you explain that process?
Image: Facebook
Bigger Picture: Identifying hate speech is often a matter of context. Either the text or the photo in this post isn’t hateful standing alone, but putting them together tells a different story.
Schroepfer: You upload an image and you write some text underneath it, and the systems look at both the image and the text to try to see which, if any, policies it violates. Those decisions are based on our Community Standards. It will also look at other signals on the posts, like the comments people make.
It happens relatively instantly, though there may be times things happen after the fact. Maybe you uploaded a post that had misinformation in it, and at the time you uploaded it, we didn’t know it was misinformation. The next day we fact-check something and scan again; we may find your post and take it down. As we learn new things, we’re going to go back through and look for violations of what we now know to be a problem. Or, as people comment on your post, we might update our understanding of it. If people are saying, “That’s terrible,” or “That’s mean,” or “That looks fake,” those comments may be an interesting signal.
Spectrum: How is Facebook applying its AI tools to the problem of election interference?
Schroepfer: I would split election interference into two categories. There are times when you’re going after the content, and there are times you’re going after the behavior or the authenticity of the person.
On content, if you’re sharing misinformation, saying, “It’s super Wednesday, not super Tuesday, come vote on Wednesday,” that’s a problem whether you’re an American sitting in California or a foreign actor.
Other times, people create a series of Facebook pages pretending they’re Americans, but they’re really a foreign entity. That is a problem on its own, even if all the content they’re sharing completely meets our Community Standards. The problem there is that you have a foreign government running an information operation.
There, you need different tools. What you’re trying to do is put pieces together, to say, “Wait a second. All of these pages—Martians for Justice, Moonlings for Justice, and Venusians for Justice”—are all run by an administrator with an IP address that’s outside the United States. So they’re all connected, even though they’re pretending to not be connected. That’s a very different problem than me sitting in my office in Menlo Park [Calif.] sharing misinformation.
I’m not going to go into lots of technical detail, because this is an area of adversarial nature. The fundamental problem you’re trying to solve is that there’s one entity coordinating the activity of a bunch of things that look like they’re not all one thing. So this is a series of Instagram accounts, or a series of Facebook pages, or a series of WhatsApp accounts, and they’re pretending to be totally different things. We’re looking for signals that these things are related in some way. And we’re looking through the graph [what Facebook calls its map of relationships between users] to understand the properties of this network.
Spectrum: What cutting-edge AI tools and methods have you been working on lately?
Schroepfer: Supervised learning, with humans setting up the instruction process for the AI systems, is amazingly effective. But it has a very obvious flaw: the speed at which you can develop these things is limited by how fast you can curate the data sets. If you’re dealing in a problem domain where things change rapidly, you have to rebuild a new data set and retrain the whole thing.
Self-supervision is inspired by the way people learn, by the way kids explore the world around them. To get computers to do it themselves, we take a bunch of raw data and build a way for the computer to construct its own tests. For language, you scan a bunch of Web pages, and the computer builds a test where it takes a sentence, eliminates one of the words, and figures out how to predict what word belongs there. And because it created the test, it actually knows the answer. I can use as much raw text as I can find and store because it’s processing everything itself and doesn’t require us to sit down and build the information set. In the last two years there has been a revolution in language understanding as a result of AI self-supervised learning.
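Here's a tiny Python sketch of how raw text writes its own test, as described above. A real self-supervised pipeline (a masked language model, for instance) trains a large network on billions of such pairs; the point of the sketch is only the data-generation step, which needs no human labelling.

```python
import random

# Raw sentences become (context-with-a-hole, answer) training pairs automatically.
corpus = [
    "robots learn to see the world",
    "the cat sat on the mat",
    "people share photos and videos online",
]

def make_examples(sentence):
    words = sentence.split()
    examples = []
    for i in range(len(words)):
        masked = words[:i] + ["[MASK]"] + words[i + 1:]
        examples.append((" ".join(masked), words[i]))   # (input, correct answer)
    return examples

for sentence in corpus:
    ex = random.choice(make_examples(sentence))
    print(f"input: {ex[0]!r:45} target: {ex[1]!r}")
```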
Spectrum: What else are you excited about?
Schroepfer: What we’ve been working on over the last few years is multilingual understanding. Usually, when I’m trying to figure out, say, whether something is hate speech or not I have to go through the whole process of training the model in every language. I have to do that one time for every language. When you make a post, the first thing we have to figure out is what language your post is in. “Ah, that’s Spanish. So send it to the Spanish hate-speech model.”
We’ve started to build a multilingual model—one box where you can feed in text in 40 different languages and it determines whether it’s hate speech or not. This is way more effective and easier to deploy.
To geek out for a second, just the idea that you can build a model that understands a concept in multiple languages at once is crazy cool. And it not only works for hate speech, it works for a variety of things.
When we started working on this multilingual model years ago, it performed worse than every single individual model. Now, it not only works as well as the English model, but when you get to the languages where you don’t have enough data, it’s so much better. This rapid progress is very exciting.
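As a toy stand-in for that "one box" (not Facebook's model), here's a Python sketch of a single classifier trained on character n-gram features shared across languages, instead of routing each post to a per-language model. The miniature dataset and its labels are invented purely for illustration.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Mixed-language examples with toy labels: 0 = benign, 1 = "hateful" stand-in.
texts = [
    "have a wonderful day", "que tengas un buen día", "bonne journée à tous",
    "I hate this group of people", "odio a ese grupo de gente", "je déteste ces gens",
]
labels = [0, 0, 0, 1, 1, 1]

# One shared model over character n-grams; no language detection or routing step.
model = make_pipeline(
    HashingVectorizer(analyzer="char_wb", ngram_range=(2, 4), n_features=2**18),
    LogisticRegression(),
)
model.fit(texts, labels)
print(model.predict(["odio a esa gente", "have a nice day"]))
```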
Spectrum: How do you move new AI tools from your research labs into operational use?
Schroepfer: Engineers trying to make the next breakthrough will often say, “Cool, I’ve got a new thing and it achieved state-of-the-art results on machine translation.” And we say, “Great. How long does it take to run in production?” They say, “Well, it takes 10 seconds for every sentence to run on a CPU.” And we say, “It’ll eat our whole data center if we deploy that.” So we take that state-of-the-art model and we make it 10 or a hundred or a thousand times more efficient, maybe at the cost of a little bit of accuracy. So it’s not as good as the state-of-the-art version, but it’s something we can actually put into our data centers and run in production.
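One common way to trade a little accuracy for a lot of serving efficiency is post-training quantization; here's a minimal PyTorch sketch of dynamic 8-bit quantization. The interview doesn't specify which optimizations Facebook actually uses in production, so treat this only as an example of the kind of step being described.

```python
import torch
import torch.nn as nn

# A stand-in "state-of-the-art" model: a couple of large linear layers.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 2))

# Post-training dynamic quantization: linear-layer weights become 8-bit integers,
# shrinking the model and speeding up inference at a small cost in accuracy.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(model(x))        # full-precision output
print(quantized(x))    # nearly the same answer from a cheaper model
```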
Spectrum: What’s the role of the humans in the loop? Is it true that Facebook currently employs 35,000 moderators?
Schroepfer: Yes. Right now our goal is not to reduce that. Our goal is to do a better job catching bad content. People often think that the end state will be a fully automated system. I don’t see that world coming anytime soon.
As automated systems get more sophisticated, they take more and more of the grunt work away, freeing up the humans to work on the really gnarly stuff where you have to spend an hour researching.
We also use AI to give our human moderators power tools. Say I spot this new meme that is telling everyone to vote on Wednesday rather than Tuesday. I have a tool in front of me that says, “Find variants of that throughout the system. Find every photo with the same text, find every video that mentions this thing and kill it in one shot.” Rather than, I found this one picture, but then a bunch of other people upload that misinformation in different forms.
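A very simple stand-in for that "find variants" tooling is a perceptual hash: lightly edited, resized, or re-encoded copies of an image tend to land within a small Hamming distance of the original. Here's a minimal Python sketch with hypothetical file names; production systems rely on far more sophisticated learned embeddings, text matching, and video fingerprints.

```python
from PIL import Image

def average_hash(path, size=8):
    """64-bit perceptual hash: bit is 1 where a pixel is brighter than the image mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return sum((1 << i) for i, p in enumerate(pixels) if p > mean)

def hamming(a, b):
    return bin(a ^ b).count("1")

# Hypothetical file names, for illustration only.
known_bad = average_hash("known_bad_meme.jpg")
candidate = average_hash("uploaded_photo.jpg")
if hamming(known_bad, candidate) <= 10:      # small distance -> likely a variant
    print("flag as a variant of the known meme")
```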
Another important aspect of AI is that anything I can do to prevent a person from having to look at terrible things is time well spent. Whether it’s a person employed by us as a moderator or a user of our services, looking at these things is a terrible experience. If I can build systems that take the worst of the worst, the really graphic violence, and deal with that in an automated fashion, that’s worth a lot to me.
#437749 Video Friday: NASA Launches Its Most ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
AWS Cloud Robotics Summit – August 18-19, 2020 – [Virtual Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Virtual Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nevada
ICSR 2020 – November 14-16, 2020 – Golden, Colorado
Let us know if you have suggestions for next week, and enjoy today’s videos.
Yesterday was a big day for what was quite possibly the most expensive robot on Earth up until it wasn’t on Earth anymore.
Perseverance and the Ingenuity helicopter are expected to arrive on Mars early next year.
[ JPL ]
ICYMI, our most popular post this week featured Northeastern University roboticist John Peter Whitney literally putting his neck on the line for science! He was testing a remotely operated straight razor shaving robotic system powered by fluidic actuators. The cutting-edge (sorry!) device transmits forces from a primary stage, operated by a barber, to a secondary stage, with the razor attached.
[ John Peter Whitney ]
Together with Boston Dynamics, Ford is introducing a pilot program into our Van Dyke Transmission Plant. Say hello to Fluffy the Robot Dog, who creates fast and accurate 3D scans that help Ford engineers when we’re retooling our plants.
Not shown in the video: “At times, Fluffy sits on its robotic haunches and rides on the back of a small, round Autonomous Mobile Robot, known informally as Scouter. Scouter glides smoothly up and down the aisles of the plant, allowing Fluffy to conserve battery power until it’s time to get to work. Scouter can autonomously navigate facilities while scanning and capturing 3-D point clouds to generate a CAD of the facility. If an area is too tight for Scouter, Fluffy comes to the rescue.”
[ Ford ]
There is a thing that happens at 0:28 in this video that I have questions about.
[ Ghost Robotics ]
Pepper is far more polite about touching than most humans.
[ Paper ]
We don’t usually post pure simulation videos unless they give us something to get really, really excited about. So here’s a pure simulation video.
[ Hybrid Robotics ]
University of Michigan researchers are developing new origami-inspired methods for designing, fabricating, and actuating micro-robots using heat. These improvements will expand the mechanical capabilities of the tiny bots, allowing them to fold into more complex shapes.
[ DRSL ]
HMI is making beastly electric arms work underwater, even if they’re not stapled to a robotic submarine.
[ HMI ]
Here’s some interesting work in progress from MIT’s Biomimetics Robotics Lab. The limb is acting as a “virtual magnet” using a bimodal force and direction sensor.
Thanks Peter!
[ MIT Biomimetics Lab ]
This is adorable but as a former rabbit custodian I can assure you that approximately 3 seconds after this video ended, all of the wires on that robot were chewed to bits.
[ Lingkang Zhang ]
During the ARCHE 2020 integration week, TNO and the ETH Robot System Lab (RSL) collaborated to integrate their research and development process using the Articulated Locomotion and MAnipulation (ALMA) robot. In addition to integrating software, we tested it to confirm proper implementation and development. We also captured visual and auditory data for future software development. This all resulted in the creation of multiple demos to show the capabilities of the teleoperation framework using the ALMA robot.
[ RSL ]
When we talk about practical applications of quadrupedal robots with foot wheels, we don’t usually think about them on this scale, although we should.
[ RSL ]
Juan wrote in to share a DIY quadruped that he’s been working on, named CHAMP.
Juan says that the demo robot can be built in less than US $1000 with easily accessible parts. “I hope that my project can provide a more accessible platform for students, researchers, and enthusiasts who are interested to learn more about quadrupedal robot development and its underlying technology.”
[ CHAMP ]
Thanks Juan!
Here’s a New Zealand TV report about a study on robot abuse from Christoph Bartneck at the University of Canterbury.
[ Paper ]
Our Robotics Studio is a hands-on class exposing students to practical aspects of the design, fabrication, and programming of physical robotic systems. So what happens when the class goes virtual due to COVID-19? Things get physical — all @ home.
[ Columbia ]
A few videos from the Supernumerary Robotic Devices Workshop, held online earlier this month.
“Handheld Robots: Bridging the Gap between Fully External and Wearable Robots,” presented by Walterio Mayol-Cuevas, University of Bristol.
“Playing the Piano with 11 Fingers: The Neurobehavioural Constraints of Human Robot Augmentation,” presented by Aldo Faisal, Imperial College London.
[ Workshop ]