#437769 Q&A: Facebook’s CTO Is at War With ...
Photo: Patricia de Melo Moreira/AFP/Getty Images
Facebook chief technology officer Mike Schroepfer leads the company’s AI and integrity efforts.
Facebook’s challenge is huge. Billions of pieces of content—short and long posts, images, and combinations of the two—are uploaded to the site daily from around the world. And any tiny piece of that—any phrase, image, or video—could contain so-called bad content.
In its early days, Facebook relied on simple computer filters to identify potentially problematic posts by their words, such as those containing profanity. These automatically filtered posts, as well as posts flagged by users as offensive, went to humans for adjudication.
In 2015, Facebook started using artificial intelligence to cull images that contained nudity, illegal goods, and other prohibited content; those images identified as possibly problematic were sent to humans for further review.
By 2016, more offensive photos were reported by Facebook’s AI systems than by Facebook users (and that is still the case).
In 2018, Facebook CEO Mark Zuckerberg made a bold proclamation: He predicted that within five or ten years, Facebook’s AI would not only look for profanity, nudity, and other obvious violations of Facebook’s policies. The tools would also be able to spot bullying, hate speech, and other misuse of the platform, and put an immediate end to them.
Today, automated systems using algorithms developed with AI scan every piece of content between the time when a user completes a post and when it is visible to others on the site—just fractions of a second. In most cases, a violation of Facebook’s standards is clear, and the AI system automatically blocks the post. In other cases, the post goes to human reviewers for a final decision, a workforce that includes 15,000 content reviewers and another 20,000 employees focused on safety and security, operating out of more than 20 facilities around the world.
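To make that routing step concrete, here is a minimal sketch of how a confidence-threshold check might split posts between automatic blocking, human review, and publication. The thresholds, policy labels, and scores are illustrative assumptions, not a description of Facebook's actual system.

```python
# Illustrative sketch (not Facebook's actual system): route a post based on
# a policy classifier's confidence scores. Thresholds and labels are assumed.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # "block", "human_review", or "publish"
    score: float       # classifier confidence that the post violates policy
    policy: str        # which policy the score refers to

def route_post(violation_scores: dict[str, float],
               block_threshold: float = 0.95,
               review_threshold: float = 0.60) -> Decision:
    """Pick the highest-scoring policy and decide what to do with the post."""
    policy, score = max(violation_scores.items(), key=lambda kv: kv[1])
    if score >= block_threshold:          # clear violation: block automatically
        return Decision("block", score, policy)
    if score >= review_threshold:         # uncertain: send to a human reviewer
        return Decision("human_review", score, policy)
    return Decision("publish", score, policy)

# Example: scores produced by some upstream model for one post
print(route_post({"hate_speech": 0.97, "nudity": 0.02}))   # -> block
print(route_post({"hate_speech": 0.70, "nudity": 0.02}))   # -> human_review
```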
In the first quarter of this year, Facebook removed or took other action (like appending a warning label) on more than 9.6 million posts involving hate speech, 8.6 million involving child nudity or exploitation, almost 8 million posts involving the sale of drugs, 2.3 million posts involving bullying and harassment, and tens of millions of posts violating other Facebook rules.
Right now, Facebook has more than 1,000 engineers working on further developing and implementing what the company calls “integrity” tools. Using these systems to screen every post that goes up on Facebook, and doing so in milliseconds, is sucking up computing resources. Facebook chief technology officer Mike Schroepfer, who is heading up Facebook’s AI and integrity efforts, spoke with IEEE Spectrum about the team’s progress on building an AI system that detects bad content.
Since that discussion, Facebook’s policies around hate speech have come under increasing scrutiny, with particular attention on divisive posts by political figures. A group of major advertisers in June announced that they would stop advertising on the platform while reviewing the situation, and civil rights groups are putting pressure on others to follow suit until Facebook makes policy changes related to hate speech and groups that promote hate, misinformation, and conspiracies.
Facebook CEO Mark Zuckerberg responded with news that Facebook will widen the category of what it considers hateful content in ads. Now the company prohibits claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity, or immigration status are a threat to the physical safety, health, or survival of others. The policy change also aims to better protect immigrants, migrants, refugees, and asylum seekers from ads suggesting these groups are inferior or expressing contempt. Finally, Zuckerberg announced that the company will label some problematic posts by politicians and government officials as content that violates Facebook’s policies.
However, civil rights groups say that’s not enough. And an independent audit released in July also said that Facebook needs to go much further in addressing civil rights concerns and disinformation.
Schroepfer indicated that Facebook’s AI systems are designed to quickly adapt to changes in policy. “I don’t expect considerable technical changes are needed to adjust,” he told Spectrum.
This interview has been edited and condensed for clarity.
IEEE Spectrum: What are the stakes of content moderation? Is this an existential threat to Facebook? And is it critical that you deal well with the issue of election interference this year?
Schroepfer: It’s probably existential; it’s certainly massive. We are devoting a tremendous amount of our attention to it.
The idea that anyone could meddle in an election is deeply disturbing and offensive to all of us here, just as people and citizens of democracies. We don’t want to see that happen anywhere, and certainly not on our watch. So whether it’s important to the company or not, it’s important to us as people. And I feel a similar way on the content-moderation side.
There are not a lot of easy choices here. The only way to prevent people, with certainty, from posting bad things is to not let them post anything. We can take away all voice and just say, “Sorry, the Internet’s too dangerous. No one can use it.” That will certainly get rid of all hate speech online. But I don’t want to end up in that world. And there are variants of that world that various governments are trying to implement, where they get to decide what’s true or not, and you as a person don’t. I don’t want to get there either.
My hope is that we can build a set of tools that make it practical for us to do a good enough job, so that everyone is still excited about the idea that anyone can share what they want, and so that Facebook is a safe and reasonable place for people to operate in.
Spectrum: You joined Facebook in 2008, before AI was part of the company’s toolbox. When did that change? When did you begin to think that AI tools would be useful to Facebook?
Schroepfer: Ten years ago, AI wasn’t commercially practical; the technology just didn’t work very well. In 2012, there was one of those moments that a lot of people point to as the beginning of the current revolution in deep learning and AI. A computer-vision model—a neural network—was trained using what we call supervised training, and it turned out to be better than all the existing models.
Spectrum: How is that training done, and how did computer-vision models come to Facebook?
Image: Facebook
Just Broccoli? Facebook’s image analysis algorithms can tell the difference between marijuana [left] and tempura broccoli [right] better than some humans.
Schroepfer: Say I take a bunch of photos and I have people look at them. If they see a photo of a cat, they put a text label that says cat; if it’s one of a dog, the text label says dog. If you build a big enough data set and feed that to the neural net, it learns how to tell the difference between cats and dogs.
Prior to 2012, it didn’t work very well. And then in 2012, there was this moment where it seemed like, “Oh wow, this technique might work.” And a few years later we were deploying that form of technology to help us detect problematic imagery.
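As an illustration of the supervised-training recipe Schroepfer describes, here is a minimal PyTorch sketch that learns to tell cats from dogs from a folder of labeled photos. The directory layout, model choice, and hyperparameters are assumptions made for the example, not anything Facebook has disclosed.

```python
# Illustrative sketch of supervised image classification (cats vs. dogs).
# The data directory, model, and hyperparameters are assumptions, not a
# description of Facebook's production systems.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects a folder layout like data/train/cat/*.jpg and data/train/dog/*.jpg
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=2)          # 2 classes: cat, dog
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for epoch in range(5):
    for images, labels in train_loader:          # labels come from folder names
        optimizer.zero_grad()
        loss = criterion(model(images), labels)  # penalize wrong predictions
        loss.backward()                          # backpropagate the error
        optimizer.step()                         # update the network weights
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```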
Spectrum: Do your AI systems work equally well on all types of prohibited content?
Schroepfer: Nudity was technically easiest. I don’t need to understand language or culture to understand that this is either a naked human or not. Violence is a much more nuanced problem, so it was harder technically to get it right. And with hate speech, not only do you have to understand the language, it may be very contextual, even tied to recent events. A week before the Christchurch shooting [New Zealand, 2019], saying “I wish you were in the mosque” probably doesn’t mean anything. A week after, that might be a terrible thing to say.
Spectrum: How much progress have you made on hate speech?
Schroepfer: AI, in the first quarter of 2020, proactively detected 88.8 percent of the hate-speech content we removed, up from 80.2 percent in the previous quarter. In the first quarter of 2020, we took action on 9.6 million pieces of content for violating our hate-speech policies.
Image: Facebook
Off Label: Sometimes image analysis isn’t enough to determine whether a picture posted violates the company’s policies. In considering these candy-colored vials of marijuana, for example, the algorithms can look at any accompanying text and, if necessary, comments on the post.
Spectrum: It sounds like you’ve expanded beyond tools that analyze images and are also using AI tools that analyze text.
Schroepfer: AI started off as very siloed. People worked on language, people worked on computer vision, people worked on video. We’ve put these things together—in production, not just as research—into multimodal classifiers.
[Schroepfer shows a photo of a pan of Rice Krispies treats, with text referring to it as a “potent batch”] This is a case in which you have an image, and then you have the text on the post. This looks like Rice Krispies. On its own, this image is fine. You put the text together with it in a bigger model; that can then understand what’s going on. That didn’t work five years ago.
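A multimodal classifier of the kind Schroepfer mentions combines signals from more than one encoder before making a decision. The sketch below is a toy version of that idea, with placeholder embedding sizes and random tensors standing in for real image and text encoders; it is not Facebook's production architecture.

```python
# Illustrative sketch of a multimodal classifier: an image embedding and a
# text embedding are concatenated and classified together, so a benign image
# plus suspicious text ("potent batch") can be flagged jointly. The encoders
# and dimensions are placeholders, not Facebook's production models.

import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    def __init__(self, image_dim=512, text_dim=768, num_classes=2):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(image_dim + text_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),   # e.g. {benign, drug_sale}
        )

    def forward(self, image_emb, text_emb):
        fused = torch.cat([image_emb, text_emb], dim=-1)  # combine modalities
        return self.fusion(fused)

# In practice the embeddings would come from pretrained image and text
# encoders; random tensors stand in for them here.
model = MultimodalClassifier()
image_emb = torch.randn(1, 512)   # embedding of the Rice Krispies photo
text_emb = torch.randn(1, 768)    # embedding of the "potent batch" caption
print(model(image_emb, text_emb).softmax(dim=-1))
```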
Spectrum: Today, every post that goes up on Facebook is immediately checked by automated systems. Can you explain that process?
Image: Facebook
Bigger Picture: Identifying hate speech is often a matter of context. Either the text or the photo in this post isn’t hateful standing alone, but putting them together tells a different story.
Schroepfer: You upload an image and you write some text underneath it, and the systems look at both the image and the text to try to see which, if any, policies it violates. Those decisions are based on our Community Standards. It will also look at other signals on the posts, like the comments people make.
It happens relatively instantly, though there may be times things happen after the fact. Maybe you uploaded a post that had misinformation in it, and at the time you uploaded it, we didn’t know it was misinformation. The next day we fact-check something and scan again; we may find your post and take it down. As we learn new things, we’re going to go back through and look for violations of what we now know to be a problem. Or, as people comment on your post, we might update our understanding of it. If people are saying, “That’s terrible,” or “That’s mean,” or “That looks fake,” those comments may be an interesting signal.
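Here is a small sketch of that retroactive pass: when a claim is newly fact-checked, already-published posts are matched against it (and against comment signals) and re-queued for review. The matching logic and data structures are simplified assumptions, far cruder than what a real system would use.

```python
# Illustrative sketch of retroactive re-scanning: when a claim is newly
# fact-checked, previously published posts that match it, or that attract
# suspicious comments, are re-queued for review. All details are assumed.

from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: int
    text: str
    comments: list[str] = field(default_factory=list)

def matches_claim(post: Post, debunked_claim: str) -> bool:
    # Placeholder matcher; a real system would use embeddings, not substrings.
    return debunked_claim.lower() in post.text.lower()

def rescan(published_posts: list[Post], debunked_claim: str) -> list[int]:
    """Return IDs of posts that need another look after a new fact-check."""
    flagged = []
    for post in published_posts:
        suspicious_comments = sum(
            1 for c in post.comments if "fake" in c.lower() or "mean" in c.lower()
        )
        if matches_claim(post, debunked_claim) or suspicious_comments >= 3:
            flagged.append(post.post_id)   # send back through review
    return flagged

posts = [
    Post(1, "Vote on Wednesday, not Tuesday!", ["that looks fake", "fake", "so fake"]),
    Post(2, "Happy birthday!"),
]
print(rescan(posts, "vote on Wednesday"))   # -> [1]
```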
Spectrum: How is Facebook applying its AI tools to the problem of election interference?
Schroepfer: I would split election interference into two categories. There are times when you’re going after the content, and there are times you’re going after the behavior or the authenticity of the person.
On content, if you’re sharing misinformation, saying, “It’s super Wednesday, not super Tuesday, come vote on Wednesday,” that’s a problem whether you’re an American sitting in California or a foreign actor.
Other times, people create a series of Facebook pages pretending they’re Americans, but they’re really a foreign entity. That is a problem on its own, even if all the content they’re sharing completely meets our Community Standards. The problem there is that you have a foreign government running an information operation.
There, you need different tools. What you’re trying to do is put pieces together, to say, “Wait a second. All of these pages—Martians for Justice, Moonlings for Justice, and Venusians for Justice”—are all run by an administrator with an IP address that’s outside the United States. So they’re all connected, even though they’re pretending to not be connected. That’s a very different problem than me sitting in my office in Menlo Park [Calif.] sharing misinformation.
I’m not going to go into lots of technical detail, because this is an area of adversarial nature. The fundamental problem you’re trying to solve is that there’s one entity coordinating the activity of a bunch of things that look like they’re not all one thing. So this is a series of Instagram accounts, or a series of Facebook pages, or a series of WhatsApp accounts, and they’re pretending to be totally different things. We’re looking for signals that these things are related in some way. And we’re looking through the graph [what Facebook calls its map of relationships between users] to understand the properties of this network.
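To illustrate the general idea (and only the general idea) of connecting accounts through shared signals, here is a toy grouping of pages by a common administrator IP address. The pages, field names, and signal are invented for the example; as Schroepfer notes, Facebook deliberately does not detail which signals it actually uses.

```python
# Illustrative sketch of linking seemingly unrelated pages through shared
# infrastructure (here, a common administrator IP). Grouping by a shared key
# is a standard technique; the data and fields are invented for illustration
# and do not describe Facebook's detection systems.

from collections import defaultdict

pages = [
    {"name": "Martians for Justice",  "admin_ip": "203.0.113.7"},
    {"name": "Moonlings for Justice", "admin_ip": "203.0.113.7"},
    {"name": "Venusians for Justice", "admin_ip": "203.0.113.7"},
    {"name": "Menlo Park Bake Sale",  "admin_ip": "198.51.100.2"},
]

clusters = defaultdict(list)
for page in pages:
    clusters[page["admin_ip"]].append(page["name"])   # group by the shared signal

for ip, names in clusters.items():
    if len(names) > 1:   # several "unrelated" pages run from one place
        print(f"possible coordinated network behind {ip}: {names}")
```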
Spectrum: What cutting-edge AI tools and methods have you been working on lately?
Schroepfer: Supervised learning, with humans setting up the instruction process for the AI systems, is amazingly effective. But it has a very obvious flaw: the speed at which you can develop these things is limited by how fast you can curate the data sets. If you’re dealing in a problem domain where things change rapidly, you have to rebuild a new data set and retrain the whole thing.
Self-supervision is inspired by the way people learn, by the way kids explore the world around them. To get computers to do it themselves, we take a bunch of raw data and build a way for the computer to construct its own tests. For language, you scan a bunch of Web pages, and the computer builds a test where it takes a sentence, eliminates one of the words, and figures out how to predict what word belongs there. And because it created the test, it actually knows the answer. I can use as much raw text as I can find and store because it’s processing everything itself and doesn’t require us to sit down and build the information set. In the last two years there has been a revolution in language understanding as a result of AI self-supervised learning.
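The fill-in-the-blank test Schroepfer describes can be sketched in a few lines: hide a word, and the training target is the word that was hidden, so no human labeling is required. The toy counting model below only memorizes exact contexts; real systems train large transformers on this objective, but the self-supervision trick is the same.

```python
# Illustrative sketch of masked-word self-supervision: the "labels" are
# created automatically by hiding words from raw text. This toy model is
# only a counting table; real systems use large neural networks.

from collections import Counter, defaultdict

sentences = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Build (context, hidden word) pairs automatically -- no human labels needed.
counts = defaultdict(Counter)
for s in sentences:
    words = s.split()
    for i, w in enumerate(words):
        context = tuple(words[:i] + ["<mask>"] + words[i + 1:])
        counts[context][w] += 1   # the "answer" is known because we hid it

def predict(masked_sentence: str) -> str:
    context = tuple(masked_sentence.split())
    if context in counts:
        return counts[context].most_common(1)[0][0]
    return "<unknown>"

print(predict("the cat sat on the <mask>"))   # -> "mat"
```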
Spectrum: What else are you excited about?
Schroepfer: What we’ve been working on over the last few years is multilingual understanding. Usually, when I’m trying to figure out, say, whether something is hate speech or not I have to go through the whole process of training the model in every language. I have to do that one time for every language. When you make a post, the first thing we have to figure out is what language your post is in. “Ah, that’s Spanish. So send it to the Spanish hate-speech model.”
We’ve started to build a multilingual model—one box where you can feed in text in 40 different languages and it determines whether it’s hate speech or not. This is way more effective and easier to deploy.
To geek out for a second, just the idea that you can build a model that understands a concept in multiple languages at once is crazy cool. And it not only works for hate speech, it works for a variety of things.
When we started working on this multilingual model years ago, it performed worse than every single individual model. Now, it not only works as well as the English model, but when you get to the languages where you don’t have enough data, it’s so much better. This rapid progress is very exciting.
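As a rough illustration of the "one box, many languages" idea, the sketch below runs English and Spanish text through a single multilingual encoder (XLM-R via Hugging Face Transformers, used here purely as a stand-in; it is not Facebook's model). The classification head is untrained, so its outputs are meaningless until it is fine-tuned on labeled hate-speech data across languages.

```python
# Illustrative sketch of a single multilingual classifier: one model, shared
# weights, any input language. XLM-R is a public stand-in, not Facebook's
# production model, and the head below would need fine-tuning to be useful.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2   # 2 labels: not hate speech / hate speech
)

texts = [
    "I hope you have a wonderful day",   # English
    "Espero que tengas un buen día",     # Spanish -- same model, no routing step
]

inputs = tokenizer(texts, padding=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.softmax(dim=-1)
print(scores)   # one probability pair per input, regardless of language
```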
Spectrum: How do you move new AI tools from your research labs into operational use?
Schroepfer: Engineers trying to make the next breakthrough will often say, “Cool, I’ve got a new thing and it achieved state-of-the-art results on machine translation.” And we say, “Great. How long does it take to run in production?” They say, “Well, it takes 10 seconds for every sentence to run on a CPU.” And we say, “It’ll eat our whole data center if we deploy that.” So we take that state-of-the-art model and we make it 10 or a hundred or a thousand times more efficient, maybe at the cost of a little bit of accuracy. So it’s not as good as the state-of-the-art version, but it’s something we can actually put into our data centers and run in production.
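One common way to make a state-of-the-art model cheap enough for production, in the spirit of what Schroepfer describes, is knowledge distillation: a small "student" network learns to reproduce the outputs of a large "teacher." The toy sketch below uses random features and made-up layer sizes; Facebook has not said which specific efficiency techniques it applies.

```python
# Illustrative sketch of knowledge distillation: trade a little accuracy for
# a lot of speed by training a small student to mimic a large teacher. The
# models and data are toy placeholders, not Facebook's actual approach.

import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 2))
student = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 2))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
kl = nn.KLDivLoss(reduction="batchmean")

for step in range(200):
    x = torch.randn(64, 128)                        # stand-in for real features
    with torch.no_grad():
        teacher_probs = teacher(x).softmax(dim=-1)  # soft targets from the teacher
    student_logp = student(x).log_softmax(dim=-1)
    loss = kl(student_logp, teacher_probs)          # student mimics the teacher
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The student has roughly 30x fewer parameters, so it is far cheaper to run at scale.
print(sum(p.numel() for p in teacher.parameters()),
      sum(p.numel() for p in student.parameters()))
```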
Spectrum: What’s the role of the humans in the loop? Is it true that Facebook currently employs 35,000 moderators?
Schroepfer: Yes. Right now our goal is not to reduce that. Our goal is to do a better job catching bad content. People often think that the end state will be a fully automated system. I don’t see that world coming anytime soon.
As automated systems get more sophisticated, they take more and more of the grunt work away, freeing up the humans to work on the really gnarly stuff where you have to spend an hour researching.
We also use AI to give our human moderators power tools. Say I spot this new meme that is telling everyone to vote on Wednesday rather than Tuesday. I have a tool in front of me that says, “Find variants of that throughout the system. Find every photo with the same text, find every video that mentions this thing and kill it in one shot.” Rather than, I found this one picture, but then a bunch of other people upload that misinformation in different forms.
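A toy version of that "find every variant" power tool might fingerprint the text of the flagged meme and flag uploads whose fingerprints overlap heavily. Real systems match image, video, and text representations at enormous scale; everything below (the fingerprint, the threshold, the examples) is an assumption for illustration.

```python
# Illustrative sketch of finding variants of a flagged piece of misinformation
# by comparing simple text fingerprints. Real tooling would also match image
# and video content; every detail here is assumed for illustration.

import re

def fingerprint(text: str) -> frozenset:
    """Normalize text to a bag of lowercase words so trivial edits still match."""
    return frozenset(re.findall(r"[a-z']+", text.lower()))

def jaccard(a: frozenset, b: frozenset) -> float:
    return len(a & b) / len(a | b)

flagged = "Vote on WEDNESDAY, not Tuesday!!!"
uploads = [
    "vote on wednesday not tuesday",            # same meme, re-typed
    "Remember: vote on Wednesday not Tuesday",  # slight variant
    "My dog learned a new trick today",         # unrelated post
]

target = fingerprint(flagged)
for text in uploads:
    if jaccard(fingerprint(text), target) > 0.6:   # threshold is arbitrary
        print("variant found, queue for review:", text)
```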
Another important aspect of AI is that anything I can do to prevent a person from having to look at terrible things is time well spent. Whether it’s a person employed by us as a moderator or a user of our services, looking at these things is a terrible experience. If I can build systems that take the worst of the worst, the really graphic violence, and deal with that in an automated fashion, that’s worth a lot to me.
#437765 Video Friday: Massive Robot Joins ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
AWS Cloud Robotics Summit – August 18-19, 2020 – [Online Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
IROS 2020 – October 25-29, 2020 – Las Vegas, Nevada
ICSR 2020 – November 14-16, 2020 – Golden, Colorado
Let us know if you have suggestions for next week, and enjoy today’s videos.
Here are some professional circus artists messing around with an industrial robot for fun, like you do.
The acrobats are part of Östgötateatern, a Swedish theatre group, and the chair bit got turned into its own act, called “The Last Fish.” But apparently the Swedish Work Environment Authority didn’t like that an industrial robot—a large ABB robotic arm—was being used in an artistic performance, arguing that the same safety measures that apply in a factory setting would apply on stage. In other words, the robot had to operate inside a protective cage and humans could not physically interact with it.
When told that their robot had to be removed, the acrobats went to court. And won! At least that’s what we understand from this Swedish press release. The court in Linköping, in southern Sweden, ruled that the safety measures taken by the theater had been sufficient. The group had worked with a local robotics firm, Dyno Robotics, to program the manipulator and learn how to interact with it as safely as possible. The robot—which the acrobats say is the eighth member of their troupe—will now be allowed to return.
[ Östgötateatern ]
Houston Mechatronics’ Aquanaut continues to be awesome, even in the middle of a pandemic. It’s taken the big step (big swim?) out of NASA’s swimming pool and into open water.
[ HMI ]
Researchers from Carnegie Mellon University and Facebook AI Research have created a navigation system for robots powered by common sense. The technique uses machine learning to teach robots how to recognize objects and understand where they’re likely to be found in a house. The result allows the machines to search more strategically.
[ CMU ]
Cassie manages 2.1 m/s, which is uncomfortably fast in a couple of different ways.
Next, untethered. After that, running!
[ Michigan Robotics ]
Engineers at Caltech have designed a new data-driven method to control the movement of multiple robots through cluttered, unmapped spaces, so they do not run into one another.
Multi-robot motion coordination is a fundamental robotics problem with applications ranging from urban search and rescue to the control of fleets of self-driving cars to formation flying in cluttered environments. Two key challenges make multi-robot coordination difficult: first, robots moving in new environments must make split-second decisions about their trajectories despite having incomplete data about their future path; second, as the number of robots in an environment grows, their interactions become increasingly complex (and more prone to collisions).
To overcome these challenges, Soon-Jo Chung, Bren Professor of Aerospace, and Yisong Yue, professor of computing and mathematical sciences, along with Caltech graduate student Benjamin Rivière (MS ’18), postdoctoral scholar Wolfgang Hönig, and graduate student Guanya Shi, developed a multi-robot motion-planning algorithm called “Global-to-Local Safe Autonomy Synthesis,” or GLAS, which imitates a complete-information planner with only local information, and “Neural-Swarm,” a swarm-tracking controller augmented to learn complex aerodynamic interactions in close-proximity flight.
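To give a rough sense of the global-to-local idea (and only a rough sense; this is not the authors' implementation), the sketch below generates expert actions from a planner that sees every robot, then trains a small network that receives only a local observation to imitate those actions.

```python
# A rough sketch of imitating a complete-information planner with a policy that
# only gets local observations. The planner, observation format, and network
# are stand-ins invented for illustration, not the GLAS implementation.

import torch
import torch.nn as nn

def global_planner_action(positions, goals, i):
    """Placeholder 'complete-information' planner: unit step toward robot i's goal."""
    direction = goals[i] - positions[i]
    return direction / (direction.norm() + 1e-6)

def local_observation(positions, goals, i, radius=1.0):
    """Robot i sees only its goal offset and the closest robot within `radius`."""
    rel = positions - positions[i]
    dists = rel.norm(dim=1)
    dists[i] = float("inf")                     # ignore the robot itself
    j = int(dists.argmin())                     # index of the closest other robot
    neighbor = rel[j] if dists[j] < radius else torch.zeros(2)
    return torch.cat([goals[i] - positions[i], neighbor])

policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(500):                         # imitation-learning loop
    positions = torch.rand(8, 2) * 4.0          # 8 robots in a 4 x 4 workspace
    goals = torch.rand(8, 2) * 4.0
    obs = torch.stack([local_observation(positions, goals, i) for i in range(8)])
    expert = torch.stack([global_planner_action(positions, goals, i) for i in range(8)])
    loss = nn.functional.mse_loss(policy(obs), expert)   # match the expert's actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```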
[ Caltech ]
Fetch Robotics’ Freight robot is now hauling around pulsed xenon UV lamps to autonomously disinfect spaces with UV-A, UV-B, and UV-C, all at the same time.
[ SmartGuard UV ]
When you’re a vertically symmetrical quadruped robot, there is no upside-down.
[ Ghost Robotics ]
In the virtual world, the objects you pick up do not exist: you can see that cup or pen, but it does not feel like you’re touching them. That presented a challenge to EPFL professor Herbert Shea. Drawing on his extensive experience with silicone-based muscles and motors, Shea wanted to find a way to make virtual objects feel real. “With my team, we’ve created very small, thin and fast actuators,” explains Shea. “They are millimeter-sized capsules that use electrostatic energy to inflate and deflate.” The capsules have an outer insulating membrane made of silicone enclosing an inner pocket filled with oil. Each bubble is surrounded by four electrodes that can close like a zipper. When a voltage is applied, the electrodes are pulled together, causing the center of the capsule to swell like a blister. It is an ingenious system because the capsules, known as HAXELs, can move not only up and down, but also side to side and around in a circle. “When they are placed under your fingers, it feels as though you are touching a range of different objects,” says Shea.
[ EPFL ]
Through the simple trick of reversing motors on impact, a quadrotor can land much more reliably on slopes.
[ Sherbrooke ]
Turtlebot delivers candy at Harvard.
I <3 Turtlebot SO MUCH
[ Harvard ]
Traditional drone controllers are a little bit counterintuitive, because there’s one stick that’s forwards and backwards and another stick that’s up and down, but they’re both moving on the same axis. How does that make sense?! Here’s a remote that gives you actual z-axis control instead.
[ Fenics ]
Thanks Ashley!
Lio is a mobile robot platform with a multifunctional arm explicitly designed for human-robot interaction and personal care assistant tasks. The robot has already been deployed in several health care facilities, where it is functioning autonomously, assisting staff and patients on an everyday basis.
[ F&P Robotics ]
This video shows a ground vehicle autonomously exploring and mapping a multi-story garage building and a connected patio on the Carnegie Mellon University campus. The vehicle runs onboard state estimation and mapping leveraging range, vision, and inertial sensing, along with local planning for collision avoidance and terrain analysis. All processing happens in real time, with no post-processing involved. The vehicle drives at 2 m/s throughout the exploration run. This work is dedicated to the DARPA Subterranean Challenge.
[ CMU ]
Raytheon UK’s flagship STEM programme, the Quadcopter Challenge, gives 14- to 15-year-olds the chance to participate in a hands-on, STEM-based engineering challenge to build a fully operational quadcopter. Each team is provided with an identical kit of parts, tools and instructions to build and customise their quadcopter, whilst Raytheon UK STEM Ambassadors provide mentoring, technical support and deliver bite-size learning modules to support the build.
[ Raytheon ]
A video on some of the research work that is being carried out at The Australian Centre for Field Robotics, University of Sydney.
[ University of Sydney ]
Jeannette Bohg, assistant professor of computer science at Stanford University, gave one of the Early Career Award Keynotes at RSS 2020.
[ RSS 2020 ]
Adam Savage remembers Grant Imahara.
[ Tested ]
#437745 Video Friday: Japan’s Giant Gundam ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
AWS Cloud Robotics Summit – August 18-19, 2020 – [Online Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.
It’s coming together—literally! Japan’s giant Gundam appears nearly finished and ready for its first steps. In a recent video, Gundam Factory Yokohama, which is constructing the 18-meter-tall, 25-ton walking robot, provided an update on the project. The video shows the Gundam getting its head attached—after being blessed by Shinto priests.
In the video update, they say the project is “steadily progressing” and further details will be announced around the end of September.
[ Gundam Factory Yokohama ]
Creating robots with emotional personalities will transform the usability of robots in the real world. Because previous emotive social robots are mostly statically stable and limited in mobility, this work develops an animation-to-real-world pipeline that lets dynamic bipedal robots, which can twist, wiggle, and walk, behave with emotion.
So that’s where Cassie’s eyes go.
[ Berkeley ]
Now that the DARPA SubT Cave Circuit is all virtual, here’s a good reminder of how it’ll work.
[ SubT ]
Since July 20, anyone 11 years of age or older must wear a mask in closed public places in France. This measure is also highly recommended in many European, African, and Persian Gulf countries. To support businesses and public places, SoftBank Robotics Europe unveils a new feature with Pepper: AI Face Mask Detection.
[ Softbank ]
University of Michigan researchers are developing new origami-inspired methods for designing, fabricating, and actuating micro-robots using heat. These improvements will expand the mechanical capabilities of the tiny bots, allowing them to fold into more complex shapes.
[ University of Michigan ]
The Suzumori Endo Lab at Tokyo Tech has created various types of IPMC robots, fabricated using novel 3D fabrication methods.
[ Suzumori Endo Lab ]
The most explode-y of drones manages not to explode this time.
[ SpaceX ]
At Amazon, we’re constantly innovating to support our employees, customers, and communities as effectively as possible. As our fulfillment and delivery teams have been hard at work supplying customers with items during the pandemic, Amazon’s robotics team has been working behind the scenes to re-engineer bots and processes to increase safety in our fulfillment centers.
While some folks are able to do their jobs at home with just a laptop and internet connection, it’s not that simple for other employees at Amazon, including those who spend their days building and testing robots. Some engineers have turned their homes into R&D labs to continue building these new technologies to better serve our customers and employees. Their creativity and resourcefulness to keep our important programs going is inspiring.
[ Amazon ]
Australian Army soldiers from 2nd/14th Light Horse Regiment (Queensland Mounted Infantry) demonstrated the PD-100 Black Hornet Nano unmanned aerial vehicle during a training exercise at Shoalwater Bay Training Area, Queensland, on 4 May 2018.
This robot has been around for a long time—maybe 10 years or more? It makes you wonder what the next generation will look like, and if they can manage to make it even smaller.
[ FLIR ]
Event-based cameras are bio-inspired vision sensors whose pixels work independently from each other and respond asynchronously to brightness changes, with microsecond resolution. Their advantages make it possible to tackle challenging scenarios in robotics, such as high-speed and high dynamic range scenes. We present a solution to the problem of visual odometry from the data acquired by a stereo event-based camera rig.
[ Paper ] via [ HKUST ]
Emys can help keep kindergarteners sitting still for a long time, which is no small feat!
[ Emys ]
Introducing the RoboMaster EP Core, an advanced educational robot that was built to take learning to the next level and provides an all-in-one solution for STEAM-based classrooms everywhere, offering AI and programming projects for students of all ages and experience levels.
[ DJI ]
Dutch food company Heemskerk uses ABB robots to automate its order picking. The new solution reduces the amount of time fresh produce spends in the supply chain, extending its shelf life, minimizing wastage, and creating a more sustainable solution for the fresh food industry.
[ ABB ]
This week’s episode of Pass the Torque features NASA’s Satellite Servicing Projects Division (NExIS) Robotics Engineer, Zakiya Tomlinson.
[ NASA ]
Massachusetts has been challenging Silicon Valley as the robotics capital of the United States. They’re not winning, yet. But they’re catching up.
[ MassTech ]
San Francisco-based Formant is letting anyone remotely take its Spot robot for a walk. Watch The Robot Report editors, based in Boston, take Spot for a walk around Golden Gate Park.
You can apply for this experience through Formant at the link below.
[ Formant ] via [ TRR ]
Thanks Steve!
An Institute for Advanced Study Seminar on “Theoretical Machine Learning,” featuring Peter Stone from UT Austin.
For autonomous robots to operate in the open, dynamically changing world, they will need to be able to learn a robust set of skills from relatively little experience. This talk begins by introducing Grounded Simulation Learning as a way to bridge the so-called reality gap between simulators and the real world in order to enable transfer learning from simulation to a real robot. It then introduces two new algorithms for imitation learning from observation that enable a robot to mimic demonstrated skills from state-only trajectories, without any knowledge of the actions selected by the demonstrator. Connections to theoretical advances in off-policy reinforcement learning will be highlighted throughout.
[ IAS ]
#437721 Video Friday: Child Robot Learning to ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.
We first met Ibuki, Hiroshi Ishiguro’s latest humanoid robot, a couple of years ago. A recent video shows how Ishiguro and his team are teaching the robot to express its emotional state through gait and body posture while moving.
This paper presents a subjective evaluation of a wheeled mobile humanoid robot expressing emotions during movement by replicating human gait-induced upper-body motion. For this purpose, we equipped the robot with a vertical oscillation mechanism that generates such motion based on the human center-of-mass trajectory. In the experiment, participants watched videos of the robot’s different emotional gait-induced upper-body motions and assessed the type of emotion shown, along with their confidence level in their answer.
[ Hiroshi Ishiguro Lab ] via [ RobotStart ]
ICYMI: This is a zinc-air battery made partly of Kevlar that can be used to support weight, not just add to it.
Just as biological fat reserves store energy in animals, a new rechargeable zinc battery integrates into the structure of a robot to provide much more energy, a team led by the University of Michigan has shown.
The new battery works by passing hydroxide ions between a zinc electrode and the air side through an electrolyte membrane. That membrane is partly a network of aramid nanofibers—the carbon-based fibers found in Kevlar vests—and a new water-based polymer gel. The gel helps shuttle the hydroxide ions between the electrodes. Made with cheap, abundant and largely nontoxic materials, the battery is more environmentally friendly than those currently in use. The gel and aramid nanofibers will not catch fire if the battery is damaged, unlike the flammable electrolyte in lithium ion batteries. The aramid nanofibers could be upcycled from retired body armor.
[ University of Michigan ]
In what they say is the first large-scale study of the interactions between sound and robotic action, researchers at CMU’s Robotics Institute found that sounds could help a robot differentiate between objects, such as a metal screwdriver and a metal wrench. Hearing also could help robots determine what type of action caused a sound and help them use sounds to predict the physical properties of new objects.
[ CMU ]
Captured on Aug. 11 during the second rehearsal of the OSIRIS-REx mission’s sample collection event, this series of images shows the SamCam imager’s field of view as the NASA spacecraft approaches asteroid Bennu’s surface. The rehearsal brought the spacecraft through the first three maneuvers of the sampling sequence to a point approximately 131 feet (40 meters) above the surface, after which the spacecraft performed a back-away burn.
These images were captured over a 13.5-minute period. The imaging sequence begins at approximately 420 feet (128 meters) above the surface – before the spacecraft executes the “Checkpoint” maneuver – and runs through to the “Matchpoint” maneuver, with the last image taken approximately 144 feet (44 meters) above the surface of Bennu.
[ NASA ]
The DARPA AlphaDogfight Trials Final Event took place yesterday; the livestream is like 5 hours long, but you can skip ahead to 4:39 ish to see the AI winner take on a human F-16 pilot in simulation.
Some things to keep in mind about the result: The AI had perfect situational knowledge while the human pilot had to use eyeballs, and in particular, the AI did very well at lining up its (virtual) gun with the human during fast passing maneuvers, which is the sort of thing that autonomous systems excel at but is not necessarily reflective of better strategy.
[ DARPA ]
Coming soon from Clearpath Robotics!
[ Clearpath ]
This video introduces Preferred Networks’ Hand type A, a tendon-driven robot gripper with passively switchable underactuated surface.
[ Preferred Networks ]
CYBATHLON 2020 will take place on 13-14 November 2020 at the teams’ home bases. They will set up their infrastructure for the competition and film their races. Instead of starting directly next to each other, the pilots will start individually and under the supervision of CYBATHLON officials. From Zurich, the competitions will be broadcast through a new platform in a unique live programme.
[ Cybathlon ]
In this project, we consider the task of autonomous car racing in the top-selling car racing game Gran Turismo Sport. Gran Turismo Sport is known for its detailed physics simulation of various cars and tracks. Our approach makes use of maximum-entropy deep reinforcement learning and a new reward design to train a sensorimotor policy to complete a given race track as fast as possible. We evaluate our approach in three different time trial settings with different cars and tracks. Our results show that the obtained controllers not only beat the built-in non-player character of Gran Turismo Sport, but also outperform the fastest known times in a dataset of personal best lap times of over 50,000 human drivers.
[ UZH ]
With the help of the pitasc software from Fraunhofer IPA, an assembly task is no longer programmed point by point, but in relation to the workpiece. This allows pitasc to adapt the assembly process to new product variants simply by updating parameters.
[ Fraunhofer ]
In this video, a multi-material robot simulator is used to design a shape-changing robot, which is then transferred to physical hardware. The simulated and real robots can use shape change to switch between rolling gaits and inchworm gaits, to locomote in multiple environments.
[ Yale ]
This work presents a novel loco-manipulation control framework for the execution of complex tasks with kinodynamic constraints using mobile manipulators. As a representative example, we consider the handling and repositioning of pallet jacks in unstructured environments. While these results demonstrate, as a proof of concept, the effectiveness of the proposed framework, they also show the high potential of mobile manipulators for relieving human workers of such repetitive and labor-intensive tasks. We believe that this extended functionality can contribute to increasing the usability of mobile manipulators in different application scenarios.
[ Paper ] via [ IIT ]
I don’t know why this dinosaur ice cream serving robot needs to blow smoke out of its nose, but I like it.
[ Connected Robotics ] via [ RobotStart ]
Guardian S remote visual inspection and surveillance robots make laying cable runs in confined or hard to reach spaces easy. With advanced maneuverability and the ability to climb vertical, ferrous surfaces, the robot reaches areas that are not always easily accessible.
[ Sarcos ]
Looks like the company that bought Anki is working on an add-on to let cars charge while they drive.
[ Digital Dream Labs ]
Chris Atkeson gives a brief talk for the CMU Robotics Institute orientation.
[ CMU RI ]
A UofT Robotics Seminar, featuring Russ Tedrake from MIT and TRI on “Feedback Control for Manipulation.”
Control theory has an answer for just about everything, but seems to fall short when it comes to closing a feedback loop using a camera, dealing with the dynamics of contact, and reasoning about robustness over the distribution of tasks one might find in the kitchen. Recent examples from RL and imitation learning demonstrate great promise, but don’t leverage the rigorous tools from systems theory. I’d like to discuss why, and describe some recent results of closing feedback loops from pixels for “category-level” robot manipulation.
[ UofT ]
#437716 Robotic Tank Is Designed to Crawl ...
Let’s talk about bowels! Most of us have them, most of us use them a lot, and like anything that gets used a lot, they eventually need to get checked out to help make sure that everything will keep working the way it should for as long as you need it to. Generally, this means a colonoscopy, and while there are other ways of investigating what’s going on in your gut, a camera on a flexible tube is still “the gold-standard method of diagnosis and intervention,” according to some robotics researchers who want to change that up a bit.
The University of Colorado’s Advanced Medical Technologies Lab has been working on a tank robot called Endoculus that’s able to actively drive itself through your intestines, rather than being shoved. The good news is that it’s very small, and the bad news is that it’s probably not as small as you’d like it to be.
The reason why a robot like Endoculus is necessary (or at least a good idea) is that trying to stuff a semi-rigid endoscopy tube into the semi-floppy tube that is your intestine doesn’t always go smoothly. Sometimes, the tip of the endoscopy tube can get stuck, and as more tube is fed in, it causes the intestine to distend, which best case is painful and worst case can cause serious internal injuries. One way of solving this is with swallowable camera pills, but those don’t help you with tasks like taking tissue samples. A self-propelled system like Endoculus could reduce risk while also making the procedure faster and cheaper.
Image: Advanced Medical Technologies Lab/University of Colorado
The researchers say that while the width of Endoculus is larger than a traditional endoscope, the device would require “minimal distention during use” and would “not cause pain or harm to the patient.” Future versions of the robot, they add, will “yield a smaller footprint.”
Endoculus gets around with four sets of treads, angled to provide better traction against the curved walls of your gut. The treads are micropillared, or covered with small nubs, which helps them deal with all your “slippery colon mucosa.” Designing the robot was particularly tricky because of the severe constraints on the overall size of the device, which is just 3 centimeters wide and 2.3 cm high. In order to cram the two motors required for full control, they had to be arranged parallel to the treads, resulting in a fairly complex system of 3D-printed worm gears. And to make the robot actually useful, it includes a camera, LED lights, tubes for injecting air and water, and a tool port that can accommodate endoscopy instruments like forceps and snares to retrieve tissue samples.
So far, Endoculus has spent some time inside of a live pig, although it wasn’t able to get that far since pig intestines are smaller than human intestines, and because apparently the pig intestine is spiraled somehow. The pig (and the robot) both came out fine. A (presumably different) pig then provided some intestine that was expanded to human-intestine size, inside of which Endoculus did much better, and was able to zip along at up to 40 millimeters per second without causing any damage. Personally, I’m not sure I’d want a robot to explore my intestine at a speed much higher than that.
The next step with Endoculus is to add some autonomy, which means figuring out how to do localization and mapping using the robot’s onboard camera and IMU. And then of course someone has to be the first human to experience Endoculus directly, which I’d totally volunteer for except the research team is in Colorado and I’m not. Sorry!
“Novel Optimization-Based Design and Surgical Evaluation of a Treaded Robotic Capsule Colonoscope,” by Gregory A. Formosa, J. Micah Prendergast, Steven A. Edmundowicz, and Mark E. Rentschler, from the University of Colorado, was presented at ICRA 2020.