
#437769 Q&A: Facebook’s CTO Is at War With ...

Photo: Patricia de Melo Moreira/AFP/Getty Images

Facebook chief technology officer Mike Schroepfer leads the company’s AI and integrity efforts.

Facebook’s challenge is huge. Billions of pieces of content—short and long posts, images, and combinations of the two—are uploaded to the site daily from around the world. And any tiny piece of that—any phrase, image, or video—could contain so-called bad content.

In its early days, Facebook relied on simple computer filters to identify potentially problematic posts by their words, such as those containing profanity. These automatically filtered posts, as well as posts flagged by users as offensive, went to humans for adjudication.

In 2015, Facebook started using artificial intelligence to cull images that contained nudity, illegal goods, and other prohibited content; those images identified as possibly problematic were sent to humans for further review.

By 2016, more offensive photos were reported by Facebook’s AI systems than by Facebook users (and that is still the case).

In 2018, Facebook CEO Mark Zuckerberg made a bold proclamation: He predicted that within five or ten years, Facebook’s AI would not only look for profanity, nudity, and other obvious violations of Facebook’s policies. The tools would also be able to spot bullying, hate speech, and other misuse of the platform, and put an immediate end to them.

Today, automated systems using algorithms developed with AI scan every piece of content between the time when a user completes a post and when it is visible to others on the site—just fractions of a second. In most cases, a violation of Facebook’s standards is clear, and the AI system automatically blocks the post. In other cases, the post goes to human reviewers for a final decision, a workforce that includes 15,000 content reviewers and another 20,000 employees focused on safety and security, operating out of more than 20 facilities around the world.

In the first quarter of this year, Facebook removed or took other action (like appending a warning label) on more than 9.6 million posts involving hate speech, 8.6 million involving child nudity or exploitation, almost 8 million posts involving the sale of drugs, 2.3 million posts involving bullying and harassment, and tens of millions of posts violating other Facebook rules.

Right now, Facebook has more than 1,000 engineers working on further developing and implementing what the company calls “integrity” tools. Using these systems to screen every post that goes up on Facebook, and doing so in milliseconds, is sucking up computing resources. Facebook chief technology officer Mike Schroepfer, who is heading up Facebook’s AI and integrity efforts, spoke with IEEE Spectrum about the team’s progress on building an AI system that detects bad content.

Since that discussion, Facebook’s policies around hate speech have come under increasing scrutiny, with particular attention on divisive posts by political figures. A group of major advertisers in June announced that they would stop advertising on the platform while reviewing the situation, and civil rights groups are putting pressure on others to follow suit until Facebook makes policy changes related to hate speech and groups that promote hate, misinformation, and conspiracies.

Facebook CEO Mark Zuckerberg responded with news that Facebook will widen the category of what it considers hateful content in ads. Now the company prohibits claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity, or immigration status are a threat to the physical safety, health, or survival of others. The policy change also aims to better protect immigrants, migrants, refugees, and asylum seekers from ads suggesting these groups are inferior or expressing contempt. Finally, Zuckerberg announced that the company will label some problematic posts by politicians and government officials as content that violates Facebook’s policies.

However, civil rights groups say that’s not enough. And an independent audit released in July also said that Facebook needs to go much further in addressing civil rights concerns and disinformation.

Schroepfer indicated that Facebook’s AI systems are designed to quickly adapt to changes in policy. “I don’t expect considerable technical changes are needed to adjust,” he told Spectrum.

This interview has been edited and condensed for clarity.

IEEE Spectrum: What are the stakes of content moderation? Is this an existential threat to Facebook? And is it critical that you deal well with the issue of election interference this year?

Schroepfer: It’s probably existential; it’s certainly massive. We are devoting a tremendous amount of our attention to it.

The idea that anyone could meddle in an election is deeply disturbing and offensive to all of us here, just as people and citizens of democracies. We don’t want to see that happen anywhere, and certainly not on our watch. So whether it’s important to the company or not, it’s important to us as people. And I feel a similar way on the content-moderation side.

There are not a lot of easy choices here. The only way to prevent people, with certainty, from posting bad things is to not let them post anything. We can take away all voice and just say, “Sorry, the Internet’s too dangerous. No one can use it.” That will certainly get rid of all hate speech online. But I don’t want to end up in that world. And there are variants of that world that various governments are trying to implement, where they get to decide what’s true or not, and you as a person don’t. I don’t want to get there either.

My hope is that we can build a set of tools that make it practical for us to do a good enough job, so that everyone is still excited about the idea that anyone can share what they want, and so that Facebook is a safe and reasonable place for people to operate in.

Spectrum: You joined Facebook in 2008, before AI was part of the company’s toolbox. When did that change? When did you begin to think that AI tools would be useful to Facebook?

Schroepfer: Ten years ago, AI wasn’t commercially practical; the technology just didn’t work very well. In 2012, there was one of those moments that a lot of people point to as the beginning of the current revolution in deep learning and AI. A computer-vision model—a neural network—was trained using what we call supervised training, and it turned out to be better than all the existing models.

Spectrum: How is that training done, and how did computer-vision models come to Facebook?

Image: Facebook

Just Broccoli? Facebook’s image analysis algorithms can tell the difference between marijuana [left] and tempura broccoli [right] better than some humans.

Schroepfer: Say I take a bunch of photos and I have people look at them. If they see a photo of a cat, they put a text label that says cat; if it’s one of a dog, the text label says dog. If you build a big enough data set and feed that to the neural net, it learns how to tell the difference between cats and dogs.

Prior to 2012, it didn’t work very well. And then in 2012, there was this moment where it seemed like, “Oh wow, this technique might work.” And a few years later we were deploying that form of technology to help us detect problematic imagery.
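
For readers who want to see what that supervised recipe looks like in practice, here is a minimal, hedged sketch in PyTorch: random tensors stand in for the labeled cat and dog photos, and the architecture and training settings are illustrative assumptions rather than anything Facebook uses.

```python
# Minimal supervised image classifier: labeled examples in, cat-vs-dog predictions out.
# Random tensors stand in for real labeled photos; all sizes here are illustrative.
import torch
import torch.nn as nn

images = torch.randn(256, 3, 64, 64)        # fake "photos"
labels = torch.randint(0, 2, (256,))        # human-supplied labels: 0 = cat, 1 = dog

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),             # two output classes: cat, dog
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                      # a few passes over the labeled data set
    for i in range(0, len(images), 32):     # mini-batches of 32
        batch, target = images[i:i + 32], labels[i:i + 32]
        loss = loss_fn(model(batch), target)    # penalize wrong cat/dog guesses
        optimizer.zero_grad()
        loss.backward()                         # the supervised signal flows back
        optimizer.step()
```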

Spectrum: Do your AI systems work equally well on all types of prohibited content?

Schroepfer: Nudity was technically easiest. I don’t need to understand language or culture to understand that this is either a naked human or not. Violence is a much more nuanced problem, so it was harder technically to get it right. And with hate speech, not only do you have to understand the language, it may be very contextual, even tied to recent events. A week before the Christchurch shooting [New Zealand, 2019], saying “I wish you were in the mosque” probably doesn’t mean anything. A week after, that might be a terrible thing to say.

Spectrum: How much progress have you made on hate speech?

Schroepfer: AI, in the first quarter of 2020, proactively detected 88.8 percent of the hate-speech content we removed, up from 80.2 percent in the previous quarter. In the first quarter of 2020, we took action on 9.6 million pieces of content for violating our hate-speech policies.

Image: Facebook

Off Label: Sometimes image analysis isn’t enough to determine whether a picture posted violates the company’s policies. In considering these candy-colored vials of marijuana, for example, the algorithms can look at any accompanying text and, if necessary, comments on the post.

Spectrum: It sounds like you’ve expanded beyond tools that analyze images and are also using AI tools that analyze text.

Schroepfer: AI started off as very siloed. People worked on language, people worked on computer vision, people worked on video. We’ve put these things together—in production, not just as research—into multimodal classifiers.

[Schroepfer shows a photo of a pan of Rice Krispies treats, with text referring to it as a “potent batch”] This is a case in which you have an image, and then you have the text on the post. This looks like Rice Krispies. On its own, this image is fine. You put the text together with it in a bigger model; that can then understand what’s going on. That didn’t work five years ago.
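
The "bigger model" that looks at the image and the text together is, at its core, a fusion of two embeddings feeding one classifier. Here is a hedged sketch of that idea; the encoders are replaced by random feature vectors, and all of the dimensions and class labels are assumptions for illustration, not Facebook's production classifiers.

```python
# Toy multimodal classifier: image features and text features are fused,
# so "Rice Krispies photo" plus "potent batch" can be judged together.
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    def __init__(self, img_dim=512, txt_dim=256, hidden=128, n_classes=2):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)   # image-embedding branch
        self.txt_proj = nn.Linear(txt_dim, hidden)   # text-embedding branch
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Linear(2 * hidden, n_classes),        # decision uses both signals
        )

    def forward(self, img_feat, txt_feat):
        fused = torch.cat([self.img_proj(img_feat),
                           self.txt_proj(txt_feat)], dim=-1)
        return self.head(fused)

# Stand-ins for embeddings that upstream image and text encoders would produce.
img_feat = torch.randn(1, 512)
txt_feat = torch.randn(1, 256)
logits = MultimodalClassifier()(img_feat, txt_feat)
print(logits)   # e.g. scores for [benign, violates-drug-policy]
```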

Spectrum: Today, every post that goes up on Facebook is immediately checked by automated systems. Can you explain that process?

Image: Facebook

Bigger Picture: Identifying hate speech is often a matter of context. Either the text or the photo in this post isn’t hateful standing alone, but putting them together tells a different story.

Schroepfer: You upload an image and you write some text underneath it, and the systems look at both the image and the text to try to see which, if any, policies it violates. Those decisions are based on our Community Standards. It will also look at other signals on the posts, like the comments people make.

It happens relatively instantly, though there may be times things happen after the fact. Maybe you uploaded a post that had misinformation in it, and at the time you uploaded it, we didn’t know it was misinformation. The next day we fact-check something and scan again; we may find your post and take it down. As we learn new things, we’re going to go back through and look for violations of what we now know to be a problem. Or, as people comment on your post, we might update our understanding of it. If people are saying, “That’s terrible,” or “That’s mean,” or “That looks fake,” those comments may be an interesting signal.
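
In outline, the flow Schroepfer describes (score a post when it is uploaded, then revisit it as fact-checks land or comments accumulate) might look like the sketch below. The thresholds, signal names, and stub classifier are invented for illustration and are not Facebook's actual logic.

```python
# Sketch of an upload-time check plus later re-scans as new signals arrive.
# classify() is a stub; a real system would call trained multimodal models.
from dataclasses import dataclass, field

@dataclass
class Post:
    image: bytes
    text: str
    comments: list = field(default_factory=list)
    status: str = "visible"

def classify(post, known_misinfo):
    """Return a policy verdict for a post given what we currently know."""
    score = 0.0
    if post.text in known_misinfo:          # e.g. matched by fact-checkers
        score += 1.0
    flags = sum("fake" in c.lower() for c in post.comments)
    score += 0.1 * flags                    # comments as a weak extra signal
    return "remove" if score >= 1.0 else ("review" if score >= 0.3 else "allow")

def scan(post, known_misinfo):
    verdict = classify(post, known_misinfo)
    if verdict == "remove":
        post.status = "removed"
    elif verdict == "review":
        post.status = "queued_for_human_review"
    return post.status

known_misinfo = set()
post = Post(image=b"...", text="Vote on Wednesday, not Tuesday!")
print(scan(post, known_misinfo))            # upload-time check: likely "allow"

known_misinfo.add("Vote on Wednesday, not Tuesday!")  # next-day fact-check
print(scan(post, known_misinfo))            # re-scan now removes the post
```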

Spectrum: How is Facebook applying its AI tools to the problem of election interference?

Schroepfer: I would split election interference into two categories. There are times when you’re going after the content, and there are times you’re going after the behavior or the authenticity of the person.

On content, if you’re sharing misinformation, saying, “It’s super Wednesday, not super Tuesday, come vote on Wednesday,” that’s a problem whether you’re an American sitting in California or a foreign actor.

Other times, people create a series of Facebook pages pretending they’re Americans, but they’re really a foreign entity. That is a problem on its own, even if all the content they’re sharing completely meets our Community Standards. The problem there is that you have a foreign government running an information operation.

There, you need different tools. What you’re trying to do is put pieces together, to say, “Wait a second. All of these pages—Martians for Justice, Moonlings for Justice, and Venusians for Justice—are run by an administrator with an IP address that’s outside the United States.” So they’re all connected, even though they’re pretending to not be connected. That’s a very different problem than me sitting in my office in Menlo Park [Calif.] sharing misinformation.

I’m not going to go into lots of technical detail, because this is an area of adversarial nature. The fundamental problem you’re trying to solve is that there’s one entity coordinating the activity of a bunch of things that look like they’re not all one thing. So this is a series of Instagram accounts, or a series of Facebook pages, or a series of WhatsApp accounts, and they’re pretending to be totally different things. We’re looking for signals that these things are related in some way. And we’re looking through the graph [what Facebook calls its map of relationships between users] to understand the properties of this network.
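
Stripped of the adversarial detail Schroepfer won't discuss, the underlying idea is clustering accounts or pages by shared infrastructure. The toy sketch below groups pages by a shared administrator IP address, mirroring his hypothetical example; real systems rely on far more signals than this.

```python
# Toy coordinated-behavior check: group pages that share an admin IP address.
# Real systems use many more signals and an adversarially hardened pipeline.
from collections import defaultdict

pages = [
    {"name": "Martians for Justice",  "admin_ip": "203.0.113.7"},
    {"name": "Moonlings for Justice", "admin_ip": "203.0.113.7"},
    {"name": "Venusians for Justice", "admin_ip": "203.0.113.7"},
    {"name": "Menlo Park Gardeners",  "admin_ip": "198.51.100.42"},
]

clusters = defaultdict(list)
for page in pages:
    clusters[page["admin_ip"]].append(page["name"])

for ip, names in clusters.items():
    if len(names) > 1:
        print(f"Possible coordinated network behind {ip}: {names}")
```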

Spectrum: What cutting-edge AI tools and methods have you been working on lately?

Schroepfer: Supervised learning, with humans setting up the instruction process for the AI systems, is amazingly effective. But it has a very obvious flaw: the speed at which you can develop these things is limited by how fast you can curate the data sets. If you’re dealing in a problem domain where things change rapidly, you have to rebuild a new data set and retrain the whole thing.

Self-supervision is inspired by the way people learn, by the way kids explore the world around them. To get computers to do it themselves, we take a bunch of raw data and build a way for the computer to construct its own tests. For language, you scan a bunch of Web pages, and the computer builds a test where it takes a sentence, eliminates one of the words, and figures out how to predict what word belongs there. And because it created the test, it actually knows the answer. I can use as much raw text as I can find and store because it’s processing everything itself and doesn’t require us to sit down and build the information set. In the last two years there has been a revolution in language understanding as a result of AI self-supervised learning.
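
The test the computer builds for itself is easy to make concrete: remove a word from raw text and treat the removed word as the label. The sketch below shows only that self-labeling step (no model is trained here), using an arbitrary example sentence.

```python
# Build self-supervised fill-in-the-blank examples from raw, unlabeled text.
# The "label" is the word that was removed, so no human annotation is needed.
import random

random.seed(0)

def make_masked_examples(sentence, mask_token="[MASK]"):
    words = sentence.split()
    examples = []
    for i, word in enumerate(words):
        masked = words[:i] + [mask_token] + words[i + 1:]
        examples.append((" ".join(masked), word))   # (input, answer) pair
    return examples

raw_text = "Self-supervision is inspired by the way people learn"
for masked_sentence, answer in random.sample(make_masked_examples(raw_text), 3):
    print(f"{masked_sentence!r}  ->  predict {answer!r}")
```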

Spectrum: What else are you excited about?

Schroepfer: What we’ve been working on over the last few years is multilingual understanding. Usually, when I’m trying to figure out, say, whether something is hate speech or not I have to go through the whole process of training the model in every language. I have to do that one time for every language. When you make a post, the first thing we have to figure out is what language your post is in. “Ah, that’s Spanish. So send it to the Spanish hate-speech model.”

We’ve started to build a multilingual model—one box where you can feed in text in 40 different languages and it determines whether it’s hate speech or not. This is way more effective and easier to deploy.

To geek out for a second, just the idea that you can build a model that understands a concept in multiple languages at once is crazy cool. And it not only works for hate speech, it works for a variety of things.

When we started working on this multilingual model years ago, it performed worse than every single individual model. Now, it not only works as well as the English model, but when you get to the languages where you don’t have enough data, it’s so much better. This rapid progress is very exciting.
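
One way to picture the "one box" for 40 languages is a single classifier head sitting on top of a shared, language-agnostic representation, so that every language trains the same parameters. The sketch below fakes that shared representation with hashed character n-grams; real systems use pretrained multilingual encoders, and the tiny training set and labels here are invented purely for illustration.

```python
# One shared classifier for text in any language, instead of one model per language.
# Hashed character n-grams stand in for a real multilingual encoder.
import torch
import torch.nn as nn

DIM = 2048

def embed(text, n=3, dim=DIM):
    """Language-agnostic bag-of-character-n-grams vector (a toy stand-in)."""
    vec = torch.zeros(dim)
    padded = f" {text.lower()} "
    for i in range(len(padded) - n + 1):
        vec[hash(padded[i:i + n]) % dim] += 1.0
    return vec

classifier = nn.Linear(DIM, 2)   # a single head shared across all languages

# Tiny mixed-language training set (label 1 = policy-violating, 0 = benign).
data = [("I hate group X", 1), ("odio al grupo X", 1),
        ("what a lovely day", 0), ("qué día tan bonito", 0)]

optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    for text, label in data:
        loss = loss_fn(classifier(embed(text)).unsqueeze(0), torch.tensor([label]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# A language the model never saw still maps into the same shared space.
print(classifier(embed("je déteste le groupe X")).argmax().item())  # ideally 1
```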

Spectrum: How do you move new AI tools from your research labs into operational use?

Schroepfer: Engineers trying to make the next breakthrough will often say, “Cool, I’ve got a new thing and it achieved state-of-the-art results on machine translation.” And we say, “Great. How long does it take to run in production?” They say, “Well, it takes 10 seconds for every sentence to run on a CPU.” And we say, “It’ll eat our whole data center if we deploy that.” So we take that state-of-the-art model and we make it 10 or a hundred or a thousand times more efficient, maybe at the cost of a little bit of accuracy. So it’s not as good as the state-of-the-art version, but it’s something we can actually put into our data centers and run in production.
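
One common version of that efficiency trade is quantization: store and compute weights in 8-bit integers instead of 32-bit floats, accepting a small accuracy hit for a large cut in size and compute. The sketch below applies PyTorch's dynamic quantization to a toy model; it illustrates the general technique, not Facebook's deployment pipeline.

```python
# Trade a little accuracy for a lot of efficiency: 8-bit dynamic quantization.
import io
import torch
import torch.nn as nn

# Toy model standing in for a large "state-of-the-art" text classifier.
model = nn.Sequential(
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 2),
)
model.eval()

# Convert the Linear layers to int8 weights; activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 1024)
with torch.no_grad():
    print("fp32 logits:", model(x))
    print("int8 logits:", quantized(x))     # close to the fp32 output, not identical

def serialized_size(m):
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

print("fp32 bytes:", serialized_size(model))
print("int8 bytes:", serialized_size(quantized))   # roughly 4x smaller
```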

Spectrum: What’s the role of the humans in the loop? Is it true that Facebook currently employs 35,000 moderators?

Schroepfer: Yes. Right now our goal is not to reduce that. Our goal is to do a better job catching bad content. People often think that the end state will be a fully automated system. I don’t see that world coming anytime soon.

As automated systems get more sophisticated, they take more and more of the grunt work away, freeing up the humans to work on the really gnarly stuff where you have to spend an hour researching.

We also use AI to give our human moderators power tools. Say I spot this new meme that is telling everyone to vote on Wednesday rather than Tuesday. I have a tool in front of me that says, “Find variants of that throughout the system. Find every photo with the same text, find every video that mentions this thing and kill it in one shot.” Rather than: I found this one picture, but then a bunch of other people upload that misinformation in different forms.
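
One standard building block for that kind of "find every variant" tool is perceptual hashing: reduce each image to a small fingerprint that survives resizing and recompression, then match fingerprints instead of pixels. The difference-hash sketch below is a generic illustration with hypothetical file names, not a description of Facebook's matching systems.

```python
# Difference hash (dHash): a 64-bit fingerprint that is stable under resizing,
# recompression, and small edits, so re-uploads of the same meme can be matched.
from PIL import Image

def dhash(path, size=8):
    img = Image.open(path).convert("L").resize((size + 1, size))  # 9x8 grayscale
    pixels = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)   # compare adjacent pixels
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")

# Two uploads of "the same" meme match if their hashes differ by only a few bits.
h1 = dhash("vote_wednesday_original.jpg")     # hypothetical file names
h2 = dhash("vote_wednesday_reposted.png")
print("variant match" if hamming(h1, h2) <= 10 else "different images")
```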

Another important aspect of AI is that anything I can do to prevent a person from having to look at terrible things is time well spent. Whether it’s a person employed by us as a moderator or a user of our services, looking at these things is a terrible experience. If I can build systems that take the worst of the worst, the really graphic violence, and deal with that in an automated fashion, that’s worth a lot to me.


#437758 Remotely Operated Robot Takes Straight ...

Roboticists love hard problems. Challenges like the DRC and SubT have helped (and are still helping) to catalyze major advances in robotics, but not all hard problems require a massive amount of DARPA funding—sometimes, a hard problem can just be something very specific that’s really hard for a robot to do, especially relative to the ease with which a moderately trained human might be able to do it. Catching a ball. Putting a peg in a hole. Or using a straight razor to shave someone’s face without Sweeney Todd-izing them.

This particular roboticist who sees straight-razor face shaving as a hard problem that robots should be solving is John Peter Whitney, who we first met back at IROS 2014 in Chicago when (working at Disney Research) he introduced an elegant fluidic actuator system. These actuators use tubes containing a fluid (like air or water) to transmit forces from a primary robot to a secondary robot in a very efficient way that also allows for either compliance or very high fidelity force feedback, depending on the compressibility of the fluid.
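
The compliance-versus-fidelity trade falls out of basic fluid mechanics: a sealed line between two pistons behaves like a spring whose stiffness is set by the fluid's bulk modulus, roughly k = βA²/V. The back-of-the-envelope sketch below uses assumed dimensions and moduli, not the actual geometry of Whitney's hardware.

```python
# Stiffness of a sealed fluid transmission line: k = beta * A^2 / V.
# A stiff (water-filled) line transmits forces faithfully; a soft (air-filled)
# line adds compliance. All dimensions below are assumed, not Whitney's design.
import math

def line_stiffness(bulk_modulus_pa, piston_diameter_m, tube_length_m, tube_diameter_m):
    piston_area = math.pi * (piston_diameter_m / 2) ** 2
    fluid_volume = math.pi * (tube_diameter_m / 2) ** 2 * tube_length_m
    return bulk_modulus_pa * piston_area ** 2 / fluid_volume   # N/m

params = dict(piston_diameter_m=0.02, tube_length_m=2.0, tube_diameter_m=0.004)

k_water = line_stiffness(2.2e9, **params)   # water: ~2.2 GPa bulk modulus
k_air   = line_stiffness(1.4e5, **params)   # air at 1 atm: ~gamma * p, ~0.14 MPa

print(f"water-filled line: {k_water / 1e3:9.1f} kN/m  (high-fidelity force feedback)")
print(f"air-filled line:   {k_air:9.1f} N/m    (compliant, forgiving)")
```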

Photo: John Peter Whitney/Northeastern University

Barber meets robot: Boston-based barber Jesse Cabbage [top, right] observes the machine created by roboticist John Peter Whitney. Before testing the robot on Whitney’s face, they used his arm for a quick practice [bottom].

Whitney is now at Northeastern University, in Boston, and he recently gave a talk at the RSS workshop on “Reacting to Contact,” where he suggested that straight razor shaving would be an interesting and valuable problem for robotics to work toward, due to its difficulty and requirement for an extremely high level of both performance and reliability.

Now, a straight razor is sort of like a safety razor, except with the safety part removed, which in fact does make it significantly less safe for humans, much less robots. Also not ideal for those worried about safety is that as part of the process the razor ends up in distressingly close proximity to things like the artery that is busily delivering your brain’s entire supply of blood, which is very close to the top of the list of things that most people want to keep blades very far away from. But that didn’t stop Whitney from putting his whiskers where his mouth is and letting his robotic system mediate the ministrations of a professional barber. It’s not an autonomous robotic straight-razor shave (because Whitney is not totally crazy), but it’s a step in that direction, and requires that the hardware Whitney developed be dead reliable.

Perhaps that was a poor choice of words. But, rest assured that Whitney lived long enough to answer our questions after. Here’s the video; it’s part of a longer talk, but it should start in the right spot, at about 23:30.

If Whitney looked a little bit nervous to you, that’s because he was. “This was the first time I’d ever been shaved by someone (something?!) else with a straight razor,” he told us, and while having a professional barber at the helm was some comfort, “the lack of feeling and control on my part was somewhat unsettling.” Whitney says that the barber, Jesse Cabbage of Dentes Barbershop in Somerville, Mass., was surprised by how well he could feel the tactile sensations being transmitted from the razor. “That’s one of the reasons we decided to make this video,” Whitney says. “I can’t show someone how something feels, so the next best thing is to show a delicate task that either from experience or intuition makes it clear to the viewer that the system must have these properties—otherwise the task wouldn’t be possible.”

And as for when Whitney might be comfortable getting shaved by a robotic system without a human in the loop? It’s going to take a lot of work, as do most other hard problems in robotics. “There are two parts to this,” he explains. “One is fault-tolerance of the components themselves (software, electronics, etc.) and the second is the quality of the perception and planning algorithms.”

He offers a comparison to self-driving cars, in which similar (or greater) risks are incurred: “To learn how to perceive, interpret, and adapt, we need a very high-fidelity model of the problem, or a wealth of data and experience, or both,” he says. “But in the case of shaving we are greatly lacking in both!” He continues with the analogy: “I think there is a natural progression—the community started with autonomous driving of toy cars on closed courses and worked up to real cars carrying human passengers; in robotic manipulation we are beginning to move out of the ‘toy car’ stage and so I think it’s good to target high-consequence hard problems to help drive progress.”


Of course, the ultimate goal here is much more general than the creation of a dedicated straight razor shaving robot; it’s a challenge that includes a host of sub-goals that will benefit robotics more generally. This particular hardware system Whitney is developing is actually a testbed for exploring MRI-compatible remote needle biopsy, and he and his students are collaborating with Brigham and Women’s Hospital in Boston on adapting this technology to prostate biopsy and ablation procedures. They’re also exploring how delicate touch can be used as a way to map an environment and localize within it, especially where using vision may not be a good option. “These traits and behaviors are especially interesting for applications where we must interact with delicate and uncertain environments,” says Whitney. “Medical robots, assistive and rehabilitation robots and exoskeletons, and shared-autonomy teleoperation for delicate tasks.”
A paper with more details on this robotic system, “Series Elastic Force Control for Soft Robotic Fluid Actuators,” is available on arXiv.


#437751 Startup and Academics Find Path to ...

Engineers have been chasing a form of AI that could drastically lower the energy required to do typical AI things like recognize words and images. This analog form of machine learning does one of the key mathematical operations of neural networks using the physics of a circuit instead of digital logic. But one of the main things limiting this approach is that deep learning’s training algorithm, back propagation, has to be done by GPUs or other separate digital systems.

Now University of Montreal AI expert Yoshua Bengio, his student Benjamin Scellier, and colleagues at startup Rain Neuromorphics have come up with a way for analog AIs to train themselves. That method, called equilibrium propagation, could lead to continuously learning, low-power analog systems with far greater computational ability than most in the industry now consider possible, according to Rain CTO Jack Kendall.

Analog circuits could save power in neural networks in part because they can efficiently perform a key calculation, called multiply and accumulate. That calculation multiplies values from inputs according to various weights, and then it sums all those values up. Two fundamental laws of electrical engineering can basically do that, too. Ohm’s Law multiplies voltage and conductance to give current, and Kirchhoff’s Current Law sums the currents entering a point. By storing a neural network’s weights in resistive memory devices, such as memristors, multiply-and-accumulate can happen completely in analog, potentially reducing power consumption by orders of magnitude.
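
That correspondence is easy to check numerically: with inputs encoded as voltages and weights stored as conductances, each crosspoint current is Ohm's Law and each output wire sums those currents per Kirchhoff's Current Law, which together are exactly a dot product. The values below are arbitrary.

```python
# Analog multiply-and-accumulate: Ohm's Law does the multiplies,
# Kirchhoff's Current Law does the sum. Digitally, that's a dot product.
import numpy as np

voltages = np.array([0.3, -0.1, 0.5])            # inputs, encoded as volts
conductances = np.array([[1e-3, 2e-3, 5e-4],     # weights, stored as siemens
                         [4e-3, 1e-3, 2e-3]])    # (e.g. memristor states)

# Each crosspoint: I = G * V. Each output wire: the sum of the currents into it.
currents = conductances * voltages               # Ohm's Law, elementwise
column_sums = currents.sum(axis=1)               # Kirchhoff's Current Law

print(column_sums)                               # amps flowing out of each column
print(conductances @ voltages)                   # identical: the digital dot product
```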

The reason analog AI systems can’t train themselves today has a lot to do with the variability of their components. Just like real neurons, those in analog neural networks don’t all behave exactly alike. To do back propagation with analog components, you must build two separate circuit pathways: one going forward to come up with an answer (called inferencing), the other going backward to do the learning so that the answer becomes more accurate. But because of the variability of analog components, the pathways don’t match up.

“You end up accumulating error as you go backwards through the network,” says Bengio. To compensate, a network would need lots of power-hungry analog-to-digital and digital-to-analog circuits, defeating the point of going analog.
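
A toy simulation makes the point: if the backward pathway only approximately mirrors the forward weights, the backpropagated signal drifts further from the ideal one with every layer it passes through. The layer count, width, and 5 percent device mismatch below are arbitrary assumptions.

```python
# Toy illustration: a mismatched backward path accumulates error layer by layer.
import numpy as np

rng = np.random.default_rng(0)
layers, width, mismatch = 8, 64, 0.05       # 5 percent device-to-device variability

forward_weights = [rng.standard_normal((width, width)) / np.sqrt(width)
                   for _ in range(layers)]
# The analog backward path only approximates the transpose of each forward layer.
backward_weights = [(W * (1 + mismatch * rng.standard_normal(W.shape))).T
                    for W in forward_weights]

delta_ideal = rng.standard_normal(width)    # error signal arriving at the output
delta_analog = delta_ideal.copy()
for W, W_back in zip(reversed(forward_weights), reversed(backward_weights)):
    delta_ideal = W.T @ delta_ideal         # ideal (digital) backward pass
    delta_analog = W_back @ delta_analog    # mismatched (analog) backward pass
    err = np.linalg.norm(delta_analog - delta_ideal) / np.linalg.norm(delta_ideal)
    print(f"relative gradient error after this layer: {err:.3f}")
```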

Equilibrium propagation allows learning and inferencing to happen on the same network, partly by adjusting the behavior of the network as a whole. “What [equilibrium propagation] allows us to do is to say how we should modify each of these devices so that the overall circuit performs the right thing,” he says. “We turn the physical computation that is happening in the analog devices directly to our advantage.”

Right now, equilibrium propagation is only working in simulation. But Rain plans to have a hardware proof-of-principle in late 2021, according to CEO and cofounder Gordon Wilson. “We are really trying to fundamentally reimagine the hardware computational substrate for artificial intelligence, find the right clues from the brain, and use those to inform the design of this,” he says. The result could be what they call end-to-end analog AI systems capable of running sophisticated robots or even playing a role in data centers. Both of those are currently considered beyond the capabilities of analog AI, which is now focused only on adding inferencing abilities to sensors and other low-power “edge” devices, while leaving the learning to GPUs.


#437749 Video Friday: NASA Launches Its Most ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

AWS Cloud Robotics Summit – August 18-19, 2020 – [Virtual Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Virtual Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nevada
ICSR 2020 – November 14-16, 2020 – Golden, Colorado
Let us know if you have suggestions for next week, and enjoy today’s videos.

Yesterday was a big day for what was quite possibly the most expensive robot on Earth up until it wasn’t on Earth anymore.

Perseverance and the Ingenuity helicopter are expected to arrive on Mars early next year.

[ JPL ]

ICYMI, our most popular post this week featured Northeastern University roboticist John Peter Whitney literally putting his neck on the line for science! He was testing a remotely operated straight razor shaving robotic system powered by fluidic actuators. The cutting-edge (sorry!) device transmits forces from a primary stage, operated by a barber, to a secondary stage, with the razor attached.

[ John Peter Whitney ]

Together with Boston Dynamics, Ford is introducing a pilot program at our Van Dyke Transmission Plant. Say hello to Fluffy the Robot Dog, who creates fast and accurate 3D scans that help Ford engineers when we’re retooling our plants.

Not shown in the video: “At times, Fluffy sits on its robotic haunches and rides on the back of a small, round Autonomous Mobile Robot, known informally as Scouter. Scouter glides smoothly up and down the aisles of the plant, allowing Fluffy to conserve battery power until it’s time to get to work. Scouter can autonomously navigate facilities while scanning and capturing 3-D point clouds to generate a CAD of the facility. If an area is too tight for Scouter, Fluffy comes to the rescue.”

[ Ford ]

There is a thing that happens at 0:28 in this video that I have questions about.

[ Ghost Robotics ]

Pepper is far more polite about touching than most humans.

[ Paper ]

We don’t usually post pure simulation videos unless they give us something to get really, really excited about. So here’s a pure simulation video.

[ Hybrid Robotics ]

University of Michigan researchers are developing new origami-inspired methods for designing, fabricating, and actuating micro-robots using heat. These improvements will expand the mechanical capabilities of the tiny bots, allowing them to fold into more complex shapes.

[ DRSL ]

HMI is making beastly electric arms work underwater, even if they’re not stapled to a robotic submarine.

[ HMI ]

Here’s some interesting work in progress from MIT’s Biomimetics Robotics Lab. The limb is acting as a “virtual magnet” using a bimodal force and direction sensor.

Thanks Peter!

[ MIT Biomimetics Lab ]

This is adorable but as a former rabbit custodian I can assure you that approximately 3 seconds after this video ended, all of the wires on that robot were chewed to bits.

[ Lingkang Zhang ]

During the ARCHE 2020 integration week, TNO and the ETH Robotic Systems Lab (RSL) collaborated to integrate their research and development process using the Articulated Locomotion and MAnipulation (ALMA) robot. In addition to integrating the software, we tested it to confirm proper implementation and development. We also captured visual and auditory data for future software development. This all resulted in multiple demos showing the capabilities of the teleoperation framework using the ALMA robot.

[ RSL ]

When we talk about practical applications for quadrupedal robots with wheels on their feet, we don’t usually think about them on this scale, although we should.

[ RSL ]

Juan wrote in to share a DIY quadruped that he’s been working on, named CHAMP.

Juan says that the demo robot can be built in less than US $1000 with easily accessible parts. “I hope that my project can provide a more accessible platform for students, researchers, and enthusiasts who are interested to learn more about quadrupedal robot development and its underlying technology.”

[ CHAMP ]

Thanks Juan!

Here’s a New Zealand TV report about a study on robot abuse from Christoph Bartneck at the University of Canterbury.

[ Paper ]

Our Robotics Studio is a hands-on class exposing students to practical aspects of the design, fabrication, and programming of physical robotic systems. So what happens when the class goes virtual due to COVID-19? Things get physical — all @ home.

[ Columbia ]

A few videos from the Supernumerary Robotic Devices Workshop, held online earlier this month.

“Handheld Robots: Bridging the Gap between Fully External and Wearable Robots,” presented by Walterio Mayol-Cuevas, University of Bristol.

“Playing the Piano with 11 Fingers: The Neurobehavioural Constraints of Human Robot Augmentation,” presented by Aldo Faisal, Imperial College London.

[ Workshop ]


#437723 Minuscule RoBeetle Turns Liquid Methanol ...

It’s no secret that one of the most significant constraints on robots is power. Most robots need lots of it, and it has to come from somewhere, with that somewhere usually being a battery because there simply aren’t many other good options. Batteries, however, are famous for having poor energy density, and the smaller your robot is, the more of a problem this becomes. And the issue goes beyond the battery itself, carrying over into all the other components it takes to turn the stored energy into useful work—which, again, is a particular problem for small-scale robots.

In a paper published this week in Science Robotics, researchers from the University of Southern California, in Los Angeles, demonstrate RoBeetle, an 88-milligram four-legged robot that runs entirely on methanol, a power-dense liquid fuel. Without any electronics at all, it uses an exceptionally clever bit of mechanical autonomy to convert methanol vapor directly into forward motion, one millimeter-long step at a time.

It’s not entirely clear from the video how the robot actually works, so let’s go through how it’s put together, and then look at the actuation cycle.

Image: Science Robotics

RoBeetle (A) uses a methanol-based actuation mechanism (B). The robot’s body (C) includes the fuel tank subassembly (D), a tank lid, transmission, and sliding shutter (E), bottom side of the sliding shutter (F), nickel-titanium-platinum composite wire and leaf spring (G), and front legs and hind legs with bioinspired backward-oriented claws (H).

The body of RoBeetle is a boxy fuel tank that you can fill with methanol by poking a syringe through a fuel inlet hole. It’s a quadruped, more or less, with fixed hind legs and two front legs attached to a single transmission that moves them both at once in a sort of rocking forward and up followed by backward and down motion. The transmission is hooked up to a leaf spring that’s tensioned to always pull the legs backward, such that when the robot isn’t being actuated, the spring and transmission keep its front legs more or less vertical and allow the robot to stand. Those horns are primarily there to hold the leaf spring in place, but they’ve got little hooks that can carry stuff, too.

The actuator itself is a nickel-titanium (NiTi) shape-memory alloy (SMA), which is just a wire that gets longer when it heats up and then shrinks back down when it cools. SMAs are fairly common and used for all kinds of things, but what makes this particular SMA a little different is that it’s been messily coated with platinum. The “messily” part is important for a reason that we’ll get to in just a second.


One end of the SMA wire is attached to the middle of the leaf spring, while the other end runs above the back of the robot where it’s stapled to an anchor block on the robot’s rear end. With the SMA wire hooked up but not actuated (i.e., cold rather than warm), it’s short enough that the leaf spring gets pulled back, rocking the legs forward and up. The last component is embedded in the robot’s back, right along the spine and directly underneath the SMA actuator. It’s a sliding vent attached to the transmission, so that the vent is open when the SMA wire is cold and the leaf spring is pulled back, and closed when the SMA wire is warm and the leaf spring is relaxed. The way that the sliding vent is attached to the transmission is the really clever bit about this robot, because it means that the motion of the wire itself is used to modulate the flow of fuel through a purely mechanical system. Essentially, it’s an actuator and a sensor at the same time.

The actuation cycle that causes the robot to walk begins with a full fuel tank and a cold SMA wire. There’s tension on the leaf spring, pulling the transmission back and rocking the legs forward and upward. The transmission also pulls the sliding vent into the open position, allowing methanol vapor to escape up out of the fuel tank and into the air, where it wafts past the SMA wire that runs directly above the vent.

The platinum facilitates a reaction of the methanol (CH3OH) with oxygen in the air (combustion, although not the dramatic flaming and explosive kind) to generate a couple of water molecules and some carbon dioxide plus a bunch of heat, and this is where the messy platinum coating is important, because messy means lots of surface area for the platinum to interact with as much methanol as possible. In just a second or two, the temperature of the SMA wire skyrockets from 50 to 100 °C and it expands, allowing the leaf spring about 0.1 mm of slack. As the leaf spring relaxes, the transmission moves the legs backwards and downwards, and the robot pulls itself forward about 1.2 mm. At the same time, the transmission is closing off the sliding vent, cutting off the supply of methanol vapor. Without the vapor reacting with the platinum and generating heat, in about a second and a half, the SMA wire cools down. As it does, it shrinks, pulling on the leaf spring and starting the cycle over again. Top speed is 0.76 mm/s (0.05 body-lengths per second).
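
The cycle is simple enough to caricature in a few lines of simulation: an open vent heats the wire, the hot wire closes the vent and pulls the body forward a step, the closed vent lets the wire cool, and the cycle repeats. The rates and thresholds below are assumptions loosely anchored to the numbers above, not the authors' model.

```python
# Cartoon of RoBeetle's purely mechanical limit cycle, following the description above.
T_COOL, T_HOT = 50.0, 100.0     # wire temperature swing, degrees C (assumed thresholds)
HEAT_RATE = 33.0                # degrees C per second while the vent is open (assumed)
COOL_RATE = 33.0                # degrees C per second while the vent is closed (assumed)
STEP_MM = 1.2                   # forward travel per actuation cycle
DT = 0.01                       # simulation timestep, seconds

temp, vent_open, distance_mm, t = T_COOL, True, 0.0, 0.0
while t < 20.0:
    if vent_open:
        temp += HEAT_RATE * DT          # methanol vapor reacts on the platinum coating
        if temp >= T_HOT:               # hot wire lengthens, leaf spring relaxes:
            vent_open = False           # legs sweep back and down, vent slides shut,
            distance_mm += STEP_MM      # and the body is pulled forward one step
    else:
        temp -= COOL_RATE * DT          # no vapor, so the wire cools in ambient air
        if temp <= T_COOL:              # cool wire shrinks, re-tensioning the spring:
            vent_open = True            # legs rock forward and up, vent reopens
    t += DT

print(f"distance after {t:.0f} s: {distance_mm:.1f} mm "
      f"(about {distance_mm / t:.2f} mm/s average speed)")
```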

An interesting environmental effect is that the speed of the robot can be enhanced by a gentle breeze. This is because air moving over the SMA wire cools it down a bit faster while also blowing away any residual methanol from around the vents, shutting down the reaction more completely. RoBeetle can carry more than its own body weight in fuel, and it takes approximately 155 minutes for a full tank of methanol to completely evaporate. It’s worth noting that despite the very high energy density of methanol, this is actually a stupendously inefficient way of powering a robot, with an estimated end-to-end efficiency of just 0.48 percent. Not 48 percent, mind you, but 0.48 percent, while in general, powering SMAs with electricity is much more efficient.

However, you have to look at the entire system that would be necessary to deliver that electricity, and for a robot as small as RoBeetle, the researchers say that it’s basically impossible. The lightest commercially available battery and power supply that would deliver enough juice to heat up an SMA actuator weighs about 800 mg, nearly 10 times the total weight of RoBeetle itself. From that perspective, RoBeetle’s efficiency is actually pretty good.

Image: A. Kitterman/Science Robotics; adapted from R.L.T./MIT

Comparison of various untethered microrobots and bioinspired soft robots that use different power and actuation strategies.

There are some other downsides to RoBeetle we should mention—it can only move forwards, not backwards, and it can’t steer. Its speed isn’t adjustable, and once it starts walking, it’ll walk until it either breaks or runs out of fuel. The researchers have some ideas about the speed, at least, pointing out that increasing the speed of fuel delivery by using pressurized liquid fuels like butane or propane would increase the actuator output frequency. And the frequency, amplitude, and efficiency of the SMAs themselves can be massively increased “by arranging multiple fiber-like thin artificial muscles in hierarchical configurations similar to those observed in sarcomere-based animal muscle,” making RoBeetle even more beetle-like.

As for sensing, RoBeetle’s 230-mg payload is enough to carry passive sensors, but getting those sensors to usefully interact with the robot itself to enable any kind of autonomy remains a challenge. Mechanical intelligence is certainly possible, though, and we can imagine RoBeetle adopting some of the same sorts of systems that have been proposed for the clockwork rover that JPL wants to use for Venus exploration. The researchers also mention how RoBeetle could potentially serve as a model for microbots capable of aerial locomotion, which is something we’d very much like to see.

“An 88-milligram insect-scale autonomous crawling robot driven by a catalytic artificial muscle,” by Xiufeng Yang, Longlong Chang, and Néstor O. Pérez-Arancibia from the University of Southern California, in Los Angeles, was published in Science Robotics.
