#437776 Video Friday: This Terrifying Robot Will ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
IROS 2020 – October 25-29, 2020 – Las Vegas, Nevada
ICSR 2020 – November 14-16, 2020 – Golden, Colorado
Let us know if you have suggestions for next week, and enjoy today's videos.

The Aigency, which created the FitBot launch video below, is “the world’s first talent management resource for robotic personalities.”

Robots will be playing a bigger role in our lives in the future. By learning to speak their language and work with them now, we can make this future better for everybody. If you’re a creator that’s producing content to entertain and educate people, robots can be a part of that. And we can help you. Robotic actors can show up alongside the rest of your actors.

The folks at Aigency have put together a compilation reel of clips they’ve put on TikTok, which is nice of them, because some of us don’t know how to TikTok because we’re old and boring.

Do googly eyes violate the terms and conditions?

[ Aigency ]

Shane Wighton of the “Stuff Made Here” YouTube channel, who you might remember from that robotic basketball hoop, has a new invention: a haircut robot. This is not the first barber bot, but previous designs have typically used hair clippers. Shane wanted his robot to use scissors. Hilarious and terrifying at once.

[ Stuff Made Here ]

Starting in October of 2016, Prof. Charlie Kemp and Henry M. Clever invented a new kind of robot. They named the prototype NewRo. In March of 2017, Prof. Kemp filmed this video of Henry operating NewRo to perform a number of assistive tasks. While visiting the Bay Area for an AAAI Symposium workshop at Stanford, Prof. Kemp showed this video to a select group of people to get advice, including Dr. Aaron Edsinger. In August of 2017, Dr. Edsinger and Dr. Kemp founded Hello Robot Inc. to commercialize this patent-pending assistive technology. Hello Robot Inc. licensed the intellectual property (IP) from Georgia Tech. After three years of stealthy effort, Hello Robot Inc. revealed Stretch, a new kind of robot!

[ Georgia Tech ]

NASA’s Ingenuity Mars Helicopter will make history's first attempt at powered flight on another planet next spring. It is riding with the agency's next mission to Mars (the Mars 2020 Perseverance rover) as it launches from Cape Canaveral Air Force Station later this summer. Perseverance, with Ingenuity attached to its belly, will land on Mars February 18, 2021.

[ JPL ]

For humans, it can be challenging to manipulate thin, flexible objects like ropes, wires, or cables. And if these problems are hard for humans, they are nearly impossible for robots. As a cable slides between the fingers, its shape is constantly changing, and the robot’s fingers must constantly sense and adjust the cable’s position and motion. A group of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Department of Mechanical Engineering pursued the task from a different angle, in a manner that more closely mimics how humans do it. The team’s new system uses a pair of soft robotic grippers with high-resolution tactile sensors (and no added mechanical constraints) to successfully manipulate freely moving cables.

The team observed that it was difficult to pull the cable back when it reached the edge of the finger, because of the convex surface of the GelSight sensor. Therefore, they hope to improve the finger-sensor shape to enhance the overall performance. In the future, they plan to study more complex cable manipulation tasks such as cable routing and cable inserting through obstacles, and they want to eventually explore autonomous cable manipulation tasks in the auto industry.

[ MIT ]

Robot grippers typically have trouble grabbing transparent or shiny objects. A new technique from Carnegie Mellon University relies on a color camera system and machine learning to recognize shapes based on color.

[ CMU ]

A new robotic prosthetic leg prototype offers a more natural, comfortable gait while also being quieter and more energy efficient than other designs. The key is the use of new small and powerful motors with fewer gears, borrowed from the space industry. This streamlined technology enables a free-swinging knee and regenerative braking, which charges the battery during use with energy that would typically be dissipated when the foot hits the ground. This feature enables the leg to cover more than double a typical prosthetic user’s daily walking needs on a single charge.

[ University of Michigan ]

Thanks Kate!

This year’s Wonder League teams were put to the test, not only by the challenges set forth by Wonder Workshop and Cartoon Network as they helped the creek kids from Craig of the Creek solve the greatest mystery of all, the quest for the Lost Realm, but also by forces outside their control. With a global pandemic separating many teams through lockdowns and quarantines, these teams continued to find new ways to work together, solve problems, communicate more effectively, and push themselves to complete a journey that they started and refused to give up on. We at Wonder Workshop are humbled and in awe of all these teams have accomplished.

[ Wonder Workshop ]

Thanks Nicole!

Meet Colin Creager, a mechanical engineer at NASA's Glenn Research Center. Colin is focusing on developing tires that can be used on other worlds. These tires use coil springs made of a special shape memory alloy that will let rovers move across sharp jagged rocks or through soft sand on the Moon or Mars.

[ NASA ]

To be presented at IROS this year: “the first on-robot collision detection system using low-cost microphones.”

[ Rutgers ]

Robot and mechanism designs inspired by the art of origami have the potential to generate compact, deployable, lightweight morphing structures, as seen in nature, for potential applications in search-and-rescue, aerospace systems, and medical devices. However, it is challenging to obtain actuation that is easily patternable, reversible, and made with a scalable manufacturing process for origami-inspired self-folding machines. In this work, we describe an approach to designing reversible self-folding machines that uses a liquid crystal elastomer (LCE), which contracts when heated, as an artificial muscle.

[ UCSD ]

Just in case you need some extra home entertainment, and you’d like cleaner floors at the same time.

[ iRobot ]

Sure, toss it from a drone. Or from orbit. Whatever, it’s squishy!

[ Squishy Robotics ]

The [virtual] RSS conference this week featured an excellent lineup of speakers and panels, and the best part about it being virtual is that you can watch them all at your leisure! Here’s what’s been posted so far:

[ RSS 2020 ]

Lockheed Martin Robotics Seminar: “Toward autonomous flying insect-sized robots: recent results in fabrication, design, power systems, control, and sensing,” with Sawyer Fuller.

[ UMD ]

In this episode of the AI Podcast, Lex Fridman interviews Sergey Levine.

[ AI Podcast ]

#437769 Q&A: Facebook’s CTO Is at War With ...

Photo: Patricia de Melo Moreira/AFP/Getty Images

Facebook chief technology officer Mike Schroepfer leads the company’s AI and integrity efforts.

Facebook’s challenge is huge. Billions of pieces of content—short and long posts, images, and combinations of the two—are uploaded to the site daily from around the world. And any tiny piece of that—any phrase, image, or video—could contain so-called bad content.

In its early days, Facebook relied on simple computer filters to identify potentially problematic posts by their words, such as those containing profanity. These automatically filtered posts, as well as posts flagged by users as offensive, went to humans for adjudication.

In 2015, Facebook started using artificial intelligence to cull images that contained nudity, illegal goods, and other prohibited content; those images identified as possibly problematic were sent to humans for further review.

By 2016, more offensive photos were reported by Facebook’s AI systems than by Facebook users (and that is still the case).

In 2018, Facebook CEO Mark Zuckerberg made a bold proclamation: He predicted that within five or ten years, Facebook’s AI would not only look for profanity, nudity, and other obvious violations of Facebook’s policies. The tools would also be able to spot bullying, hate speech, and other misuse of the platform, and put an immediate end to them.

Today, automated systems using algorithms developed with AI scan every piece of content between the time when a user completes a post and when it is visible to others on the site—just fractions of a second. In most cases, a violation of Facebook’s standards is clear, and the AI system automatically blocks the post. In other cases, the post goes to human reviewers for a final decision, a workforce that includes 15,000 content reviewers and another 20,000 employees focused on safety and security, operating out of more than 20 facilities around the world.

In the first quarter of this year, Facebook removed or took other action (like appending a warning label) on more than 9.6 million posts involving hate speech, 8.6 million involving child nudity or exploitation, almost 8 million posts involving the sale of drugs, 2.3 million posts involving bullying and harassment, and tens of millions of posts violating other Facebook rules.

Right now, Facebook has more than 1,000 engineers working on further developing and implementing what the company calls “integrity” tools. Using these systems to screen every post that goes up on Facebook, and doing so in milliseconds, is sucking up computing resources. Facebook chief technology officer Mike Schroepfer, who is heading up Facebook’s AI and integrity efforts, spoke with IEEE Spectrum about the team’s progress on building an AI system that detects bad content.

Since that discussion, Facebook’s policies around hate speech have come under increasing scrutiny, with particular attention on divisive posts by political figures. A group of major advertisers in June announced that they would stop advertising on the platform while reviewing the situation, and civil rights groups are putting pressure on others to follow suit until Facebook makes policy changes related to hate speech and groups that promote hate, misinformation, and conspiracies.

Facebook CEO Mark Zuckerberg responded with news that Facebook will widen the category of what it considers hateful content in ads. Now the company prohibits claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity, or immigration status are a threat to the physical safety, health, or survival of others. The policy change also aims to better protect immigrants, migrants, refugees, and asylum seekers from ads suggesting these groups are inferior or expressing contempt. Finally, Zuckerberg announced that the company will label some problematic posts by politicians and government officials as content that violates Facebook’s policies.

However, civil rights groups say that’s not enough. And an independent audit released in July also said that Facebook needs to go much further in addressing civil rights concerns and disinformation.

Schroepfer indicated that Facebook’s AI systems are designed to quickly adapt to changes in policy. “I don’t expect considerable technical changes are needed to adjust,” he told Spectrum.

This interview has been edited and condensed for clarity.

IEEE Spectrum: What are the stakes of content moderation? Is this an existential threat to Facebook? And is it critical that you deal well with the issue of election interference this year?

Schroepfer: It’s probably existential; it’s certainly massive. We are devoting a tremendous amount of our attention to it.

The idea that anyone could meddle in an election is deeply disturbing and offensive to all of us here, just as people and citizens of democracies. We don’t want to see that happen anywhere, and certainly not on our watch. So whether it’s important to the company or not, it’s important to us as people. And I feel a similar way on the content-moderation side.

There are not a lot of easy choices here. The only way to prevent people, with certainty, from posting bad things is to not let them post anything. We can take away all voice and just say, “Sorry, the Internet’s too dangerous. No one can use it.” That will certainly get rid of all hate speech online. But I don’t want to end up in that world. And there are variants of that world that various governments are trying to implement, where they get to decide what’s true or not, and you as a person don’t. I don’t want to get there either.

My hope is that we can build a set of tools that make it practical for us to do a good enough job, so that everyone is still excited about the idea that anyone can share what they want, and so that Facebook is a safe and reasonable place for people to operate in.

Spectrum: You joined Facebook in 2008, before AI was part of the company’s toolbox. When did that change? When did you begin to think that AI tools would be useful to Facebook?

Schroepfer: Ten years ago, AI wasn’t commercially practical; the technology just didn’t work very well. In 2012, there was one of those moments that a lot of people point to as the beginning of the current revolution in deep learning and AI. A computer-vision model—a neural network—was trained using what we call supervised training, and it turned out to be better than all the existing models.

Spectrum: How is that training done, and how did computer-vision models come to Facebook?

Image: Facebook

Just Broccoli? Facebook’s image analysis algorithms can tell the difference between marijuana [left] and tempura broccoli [right] better than some humans.

Schroepfer: Say I take a bunch of photos and I have people look at them. If they see a photo of a cat, they put a text label that says cat; if it’s one of a dog, the text label says dog. If you build a big enough data set and feed that to the neural net, it learns how to tell the difference between cats and dogs.

Prior to 2012, it didn’t work very well. And then in 2012, there was this moment where it seemed like, “Oh wow, this technique might work.” And a few years later we were deploying that form of technology to help us detect problematic imagery.
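The recipe Schroepfer describes is, at its core, a supervised training loop over human-labeled examples. Here is a minimal sketch in PyTorch; the dataset, model, and hyperparameters are hypothetical placeholders standing in for the labeled cat/dog photos above, not Facebook’s actual pipeline:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train(model: nn.Module, labeled_dataset, epochs: int = 10):
    """Supervised training: fit the model to human-provided labels."""
    loader = DataLoader(labeled_dataset, batch_size=64, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:  # labels: 0 = cat, 1 = dog
            optimizer.zero_grad()
            logits = model(images)          # the network's current guess
            loss = loss_fn(logits, labels)  # compare to the human label
            loss.backward()                 # propagate the error back
            optimizer.step()                # nudge the weights
    return model
```

The same loop applies whether the labels are cat/dog or benign/violating; what changes is the cost of curating the labeled dataset, which is the bottleneck Schroepfer returns to later in the interview.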

Spectrum: Do your AI systems work equally well on all types of prohibited content?

Schroepfer: Nudity was technically easiest. I don’t need to understand language or culture to understand that this is either a naked human or not. Violence is a much more nuanced problem, so it was harder technically to get it right. And with hate speech, not only do you have to understand the language, it may be very contextual, even tied to recent events. A week before the Christchurch shooting [New Zealand, 2019], saying “I wish you were in the mosque” probably doesn’t mean anything. A week after, that might be a terrible thing to say.

Spectrum: How much progress have you made on hate speech?

Schroepfer: AI, in the first quarter of 2020, proactively detected 88.8 percent of the hate-speech content we removed, up from 80.2 percent in the previous quarter. In the first quarter of 2020, we took action on 9.6 million pieces of content for violating our hate-speech policies.

Image: Facebook

Off Label: Sometimes image analysis isn’t enough to determine whether a picture posted violates the company’s policies. In considering these candy-colored vials of marijuana, for example, the algorithms can look at any accompanying text and, if necessary, comments on the post.

Spectrum: It sounds like you’ve expanded beyond tools that analyze images and are also using AI tools that analyze text.

Schroepfer: AI started off as very siloed. People worked on language, people worked on computer vision, people worked on video. We’ve put these things together—in production, not just as research—into multimodal classifiers.

[Schroepfer shows a photo of a pan of Rice Krispies treats, with text referring to it as a “potent batch”] This is a case in which you have an image, and then you have the text on the post. This looks like Rice Krispies. On its own, this image is fine. You put the text together with it in a bigger model; that can then understand what’s going on. That didn’t work five years ago.
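A toy version of such a multimodal classifier simply fuses an image embedding and a text embedding before a shared decision layer. The sketch below assumes pretrained `image_encoder` and `text_encoder` modules and invented feature dimensions; it illustrates the fusion idea rather than Facebook’s production architecture:

```python
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    """Judges an (image, text) pair jointly rather than separately."""
    def __init__(self, image_encoder, text_encoder,
                 img_dim=512, txt_dim=768, n_classes=2):
        super().__init__()
        self.image_encoder = image_encoder  # e.g. a CNN backbone
        self.text_encoder = text_encoder    # e.g. a text transformer
        self.head = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),  # e.g. benign vs. violating
        )

    def forward(self, image, text_tokens):
        img_feat = self.image_encoder(image)       # "Rice Krispies"
        txt_feat = self.text_encoder(text_tokens)  # "potent batch"
        fused = torch.cat([img_feat, txt_feat], dim=-1)
        return self.head(fused)  # the pair, judged together
```

Each branch on its own would wave the post through; concatenated, the classifier can learn that this particular image-plus-caption combination signals a drug sale.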

Spectrum: Today, every post that goes up on Facebook is immediately checked by automated systems. Can you explain that process?

Image: Facebook

Bigger Picture: Identifying hate speech is often a matter of context. Neither the text nor the photo in this post is hateful standing alone, but putting them together tells a different story.

Schroepfer: You upload an image and you write some text underneath it, and the systems look at both the image and the text to try to see which, if any, policies it violates. Those decisions are based on our Community Standards. It will also look at other signals on the posts, like the comments people make.

It happens relatively instantly, though there may be times things happen after the fact. Maybe you uploaded a post that had misinformation in it, and at the time you uploaded it, we didn’t know it was misinformation. The next day we fact-check something and scan again; we may find your post and take it down. As we learn new things, we’re going to go back through and look for violations of what we now know to be a problem. Or, as people comment on your post, we might update our understanding of it. If people are saying, “That’s terrible,” or “That’s mean,” or “That looks fake,” those comments may be an interesting signal.

Spectrum: How is Facebook applying its AI tools to the problem of election interference?

Schroepfer: I would split election interference into two categories. There are times when you’re going after the content, and there are times you’re going after the behavior or the authenticity of the person.

On content, if you’re sharing misinformation, saying, “It’s super Wednesday, not super Tuesday, come vote on Wednesday,” that’s a problem whether you’re an American sitting in California or a foreign actor.

Other times, people create a series of Facebook pages pretending they’re Americans, but they’re really a foreign entity. That is a problem on its own, even if all the content they’re sharing completely meets our Community Standards. The problem there is that you have a foreign government running an information operation.

There, you need different tools. What you’re trying to do is put pieces together, to say, “Wait a second. All of these pages—Martians for Justice, Moonlings for Justice, and Venusians for Justice”—are all run by an administrator with an IP address that’s outside the United States. So they’re all connected, even though they’re pretending to not be connected. That’s a very different problem than me sitting in my office in Menlo Park [Calif.] sharing misinformation.

I’m not going to go into lots of technical detail, because this is an area of adversarial nature. The fundamental problem you’re trying to solve is that there’s one entity coordinating the activity of a bunch of things that look like they’re not all one thing. So this is a series of Instagram accounts, or a series of Facebook pages, or a series of WhatsApp accounts, and they’re pretending to be totally different things. We’re looking for signals that these things are related in some way. And we’re looking through the graph [what Facebook calls its map of relationships between users] to understand the properties of this network.
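Stripped of the adversarial details Schroepfer won’t discuss, the core idea is graph clustering: link each account to the infrastructure it shares and pull out the connected components. A hedged sketch with networkx, using invented page names and documentation-range IP addresses:

```python
import networkx as nx

# Toy observations: (page, attribute it shares with other pages).
observations = [
    ("Martians for Justice",  "admin_ip:203.0.113.7"),
    ("Moonlings for Justice", "admin_ip:203.0.113.7"),
    ("Venusians for Justice", "admin_ip:203.0.113.7"),
    ("Local Bake Sale",       "admin_ip:198.51.100.2"),
]

G = nx.Graph()
for page, attribute in observations:
    G.add_edge(page, attribute)  # pages connect via shared attributes

# Each connected component groups pages that share infrastructure,
# even when their content and branding pretend to be unrelated.
for component in nx.connected_components(G):
    pages = [n for n in component if not n.startswith("admin_ip:")]
    if len(pages) > 1:
        print("Possibly coordinated cluster:", sorted(pages))
```

Real systems weigh many noisier signals than a single shared IP, but the output is the same kind of object: a cluster of accounts behaving as one entity.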

Spectrum: What cutting-edge AI tools and methods have you been working on lately?

Schroepfer: Supervised learning, with humans setting up the instruction process for the AI systems, is amazingly effective. But it has a very obvious flaw: the speed at which you can develop these things is limited by how fast you can curate the data sets. If you’re dealing in a problem domain where things change rapidly, you have to rebuild a new data set and retrain the whole thing.

Self-supervision is inspired by the way people learn, by the way kids explore the world around them. To get computers to do it themselves, we take a bunch of raw data and build a way for the computer to construct its own tests. For language, you scan a bunch of Web pages, and the computer builds a test where it takes a sentence, eliminates one of the words, and figures out how to predict what word belongs there. And because it created the test, it actually knows the answer. I can use as much raw text as I can find and store because it’s processing everything itself and doesn’t require us to sit down and build the information set. In the last two years there has been a revolution in language understanding as a result of AI self-supervised learning.
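The word-prediction test Schroepfer describes is what the research community calls masked language modeling. A minimal sketch of how such self-labeled examples can be manufactured from raw text, with no human annotation involved (a simplified illustration, not Facebook’s training code):

```python
import random

def make_masked_examples(sentences, mask_token="[MASK]"):
    """Turn raw sentences into (question, answer) pairs for free."""
    examples = []
    for sentence in sentences:
        words = sentence.split()
        if len(words) < 2:
            continue
        i = random.randrange(len(words))  # pick one word to hide
        answer = words[i]
        words[i] = mask_token
        # The model sees the masked sentence and must predict `answer`;
        # because we built the test ourselves, we already know it.
        examples.append((" ".join(words), answer))
    return examples

print(make_masked_examples(["the cat sat on the mat"]))
# e.g. [('the cat sat on the [MASK]', 'mat')]
```

Because the labels come from the text itself, the training set scales with however much raw text you can store, which is exactly the property Schroepfer highlights.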

Spectrum: What else are you excited about?

Schroepfer: What we’ve been working on over the last few years is multilingual understanding. Usually, when I’m trying to figure out, say, whether something is hate speech or not, I have to go through the whole process of training the model in every language. I have to do that one time for every language. When you make a post, the first thing we have to figure out is what language your post is in. “Ah, that’s Spanish. So send it to the Spanish hate-speech model.”

We’ve started to build a multilingual model—one box where you can feed in text in 40 different languages and it determines whether it’s hate speech or not. This is way more effective and easier to deploy.

To geek out for a second, just the idea that you can build a model that understands a concept in multiple languages at once is crazy cool. And it not only works for hate speech, it works for a variety of things.

When we started working on this multilingual model years ago, it performed worse than every single individual model. Now, it not only works as well as the English model, but when you get to the languages where you don’t have enough data, it’s so much better. This rapid progress is very exciting.
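Open-source multilingual encoders make this “one box” idea easy to prototype. A hedged sketch using XLM-RoBERTa via Hugging Face Transformers; the two-label head here is freshly initialized and would need fine-tuning on labeled data before it means anything, and none of this reflects Facebook’s internal model:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# XLM-R shares one vocabulary and encoder across ~100 languages, so a
# single fine-tuned classification head can serve all of them at once.
name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
model.eval()

def classify(text: str) -> int:
    """One model, any input language; no language-routing step needed."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1))  # 0 = benign, 1 = violating (assumed)

print(classify("Some English text"))
print(classify("Algún texto en español"))
```

The cross-lingual transfer Schroepfer mentions, where low-data languages benefit from high-data ones, falls out of that shared encoder.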

Spectrum: How do you move new AI tools from your research labs into operational use?

Schroepfer: Engineers trying to make the next breakthrough will often say, “Cool, I’ve got a new thing and it achieved state-of-the-art results on machine translation.” And we say, “Great. How long does it take to run in production?” They say, “Well, it takes 10 seconds for every sentence to run on a CPU.” And we say, “It’ll eat our whole data center if we deploy that.” So we take that state-of-the-art model and we make it 10 or a hundred or a thousand times more efficient, maybe at the cost of a little bit of accuracy. So it’s not as good as the state-of-the-art version, but it’s something we can actually put into our data centers and run in production.
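One standard way to buy back that factor of 10 or 1,000 is to compress the trained model, for example by distillation or quantization. A minimal sketch of PyTorch dynamic quantization, which trades a sliver of accuracy for much cheaper inference; this is a generic technique, offered as illustration rather than as what Facebook’s team actually does:

```python
import torch
import torch.nn as nn

# Stand-in for a large trained network (e.g. a translation model).
model = nn.Sequential(
    nn.Linear(1024, 4096), nn.ReLU(),
    nn.Linear(4096, 1024),
)

# Convert the Linear layers' weights from 32-bit floats to 8-bit ints.
# Inference becomes smaller and faster, at a small cost in accuracy.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 1024)
print(quantized(x).shape)  # same interface, far cheaper to serve
```

In practice, pruning, distillation, and quantization are often stacked to hit a data-center budget.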

Spectrum: What’s the role of the humans in the loop? Is it true that Facebook currently employs 35,000 moderators?

Schroepfer: Yes. Right now our goal is not to reduce that. Our goal is to do a better job catching bad content. People often think that the end state will be a fully automated system. I don’t see that world coming anytime soon.

As automated systems get more sophisticated, they take more and more of the grunt work away, freeing up the humans to work on the really gnarly stuff where you have to spend an hour researching.

We also use AI to give our human moderators power tools. Say I spot a new meme that is telling everyone to vote on Wednesday rather than Tuesday. I have a tool in front of me that says, “Find variants of that throughout the system. Find every photo with the same text, find every video that mentions this thing, and kill it in one shot,” rather than finding this one picture while a bunch of other people upload that misinformation in different forms.
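Matching “every photo with the same text” across re-uploads is commonly done with perceptual fingerprints that survive resizing and recompression. A hedged sketch using the open-source imagehash library; this shows the general approach, not Facebook’s internal tooling:

```python
from PIL import Image
import imagehash

def is_variant(path_a: str, path_b: str, threshold: int = 8) -> bool:
    """True if two images are likely edits/re-uploads of the same meme."""
    # Perceptual hashes change little under resizing, recompression,
    # or light cropping, unlike cryptographic hashes such as SHA-256.
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b <= threshold  # Hamming distance between hashes

# Hash a known bad meme once, compare against new uploads, and one
# takedown decision can apply to every variant in a single sweep.
```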

Another important aspect of AI is that anything I can do to prevent a person from having to look at terrible things is time well spent. Whether it’s a person employed by us as a moderator or a user of our services, looking at these things is a terrible experience. If I can build systems that take the worst of the worst, the really graphic violence, and deal with that in an automated fashion, that’s worth a lot to me.

#437765 Video Friday: Massive Robot Joins ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

AWS Cloud Robotics Summit – August 18-19, 2020 – [Online Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
IROS 2020 – October 25-29, 2020 – Las Vegas, Nevada
ICSR 2020 – November 14-16, 2020 – Golden, Colorado
Let us know if you have suggestions for next week, and enjoy today’s videos.

Here are some professional circus artists messing around with an industrial robot for fun, like you do.

The acrobats are part of Östgötateatern, a Swedish theatre group, and the chair bit got turned into its own act, called “The Last Fish.” But apparently the Swedish Work Environment Authority didn’t like that an industrial robot—a large ABB robotic arm—was being used in an artistic performance, arguing that the same safety measures that apply in a factory setting would apply on stage. In other words, the robot had to operate inside a protective cage and humans could not physically interact with it.

When told that their robot had to be removed, the acrobats went to court. And won! At least that’s what we understand from this Swedish press release. The court in Linköping, in southern Sweden, ruled that the safety measures taken by the theater had been sufficient. The group had worked with a local robotics firm, Dyno Robotics, to program the manipulator and learn how to interact with it as safely as possible. The robot—which the acrobats say is the eighth member of their troupe—will now be allowed to return.

[ Östgötateatern ]

Houston Mechatronics’ Aquanaut continues to be awesome, even in the middle of a pandemic. It’s taken the big step (big swim?) out of NASA’s swimming pool and into open water.

[ HMI ]

Researchers from Carnegie Mellon University and Facebook AI Research have created a navigation system for robots powered by common sense. The technique uses machine learning to teach robots how to recognize objects and understand where they’re likely to be found in a house. The result allows the machines to search more strategically.

[ CMU ]

Cassie manages 2.1 m/s, which is uncomfortably fast in a couple of different ways.

Next, untethered. After that, running!

[ Michigan Robotics ]

Engineers at Caltech have designed a new data-driven method to control the movement of multiple robots through cluttered, unmapped spaces, so they do not run into one another.

Multi-robot motion coordination is a fundamental robotics problem with wide-ranging applications, from urban search and rescue to the control of fleets of self-driving cars to formation flying in cluttered environments. Two key challenges make multi-robot coordination difficult: first, robots moving in new environments must make split-second decisions about their trajectories despite having incomplete data about their future path; second, the presence of larger numbers of robots in an environment makes their interactions increasingly complex (and more prone to collisions).

To overcome these challenges, Soon-Jo Chung, Bren Professor of Aerospace, and Yisong Yue, professor of computing and mathematical sciences, along with Caltech graduate student Benjamin Rivière (MS ’18), postdoctoral scholar Wolfgang Hönig, and graduate student Guanya Shi, developed a multi-robot motion-planning algorithm called “Global-to-Local Safe Autonomy Synthesis,” or GLAS, which imitates a complete-information planner with only local information, and “Neural-Swarm,” a swarm-tracking controller augmented to learn complex aerodynamic interactions in close-proximity flight.

[ Caltech ]

Fetch Robotics’ Freight robot is now hauling around pulsed xenon UV lamps to autonomously disinfect spaces with UV-A, UV-B, and UV-C, all at the same time.

[ SmartGuard UV ]

When you’re a vertically symmetrical quadruped robot, there is no upside-down.

[ Ghost Robotics ]

In the virtual world, the objects you pick up do not exist: you can see that cup or pen, but it does not feel like you’re touching them. That presented a challenge to EPFL professor Herbert Shea. Drawing on his extensive experience with silicone-based muscles and motors, Shea wanted to find a way to make virtual objects feel real. “With my team, we’ve created very small, thin and fast actuators,” explains Shea. “They are millimeter-sized capsules that use electrostatic energy to inflate and deflate.” The capsules have an outer insulating membrane made of silicone enclosing an inner pocket filled with oil. Each bubble is surrounded by four electrodes that can close like a zipper. When a voltage is applied, the electrodes are pulled together, causing the center of the capsule to swell like a blister. It is an ingenious system because the capsules, known as HAXELs, can move not only up and down, but also side to side and around in a circle. “When they are placed under your fingers, it feels as though you are touching a range of different objects,” says Shea.

[ EPFL ]

Through the simple trick of reversing motors on impact, a quadrotor can land much more reliably on slopes.

[ Sherbrooke ]

Turtlebot delivers candy at Harvard.

I <3 Turtlebot SO MUCH

[ Harvard ]

Traditional drone controllers are a little bit counterintuitive, because there’s one stick that’s forwards and backwards and another stick that’s up and down, but they’re both moving on the same axis. How does that make sense?! Here’s a remote that gives you actual z-axis control instead.

[ Fenics ]

Thanks Ashley!

Lio is a mobile robot platform with a multifunctional arm explicitly designed for human-robot interaction and personal care assistant tasks. The robot has already been deployed in several health care facilities, where it is functioning autonomously, assisting staff and patients on an everyday basis.

[ F&P Robotics ]

This video shows a ground vehicle autonomously exploring and mapping a multi-story parking garage and a connected patio on the Carnegie Mellon University campus. The vehicle runs onboard state estimation and mapping leveraging range, vision, and inertial sensing, along with local planning for collision avoidance and terrain analysis. All processing is real-time, with no post-processing involved. The vehicle drives at 2 m/s throughout the exploration run. This work was developed for the DARPA Subterranean Challenge.

[ CMU ]

Raytheon UK’s flagship STEM programme, the Quadcopter Challenge, gives 14- to 15-year-olds the chance to participate in a hands-on, STEM-based engineering challenge to build a fully operational quadcopter. Each team is provided with an identical kit of parts, tools, and instructions to build and customise their quadcopter, whilst Raytheon UK STEM Ambassadors provide mentoring, technical support, and bite-size learning modules to support the build.

[ Raytheon ]

A video on some of the research work that is being carried out at The Australian Centre for Field Robotics, University of Sydney.

[ University of Sydney ]

Jeannette Bohg, assistant professor of computer science at Stanford University, gave one of the Early Career Award Keynotes at RSS 2020.

[ RSS 2020 ]

Adam Savage remembers Grant Imahara.

[ Tested ]

#437763 Peer Review of Scholarly Research Gets ...

In the world of academics, peer review is considered the only credible validation of scholarly work. Although the process has its detractors, evaluation of academic research by a cohort of contemporaries has endured for over 350 years, with “relatively minor changes.” However, peer review may be set to undergo its biggest revolution ever—the integration of artificial intelligence.

Open-access publisher Frontiers has debuted an AI tool called the Artificial Intelligence Review Assistant (AIRA), which purports to eliminate much of the grunt work associated with peer review. Since the beginning of June 2020, every one of the 11,000-plus submissions Frontiers received has been run through AIRA, which is integrated into its collaborative peer-review platform. This also makes it accessible to external users, accounting for some 100,000 editors, authors, and reviewers. Altogether, this helps “maximize the efficiency of the publishing process and make peer-review more objective,” says Kamila Markram, founder and CEO of Frontiers.

AIRA’s interactive online platform, which is a first of its kind in the industry, has been in development for three years. It performs three broad functions, explains Daniel Petrariu, director of project management: assessing the quality of the manuscript, assessing the quality of peer review, and recommending editors and reviewers. At the initial validation stage, the AI can make up to 20 recommendations and flag potential issues, including language quality, plagiarism, integrity of images, conflicts of interest, and so on. “This happens almost instantly and with [high] accuracy, far beyond the rate at which a human could be expected to complete a similar task,” Markram says.

“We have used a wide variety of machine-learning models for a diverse set of applications, including computer vision, natural language processing, and recommender systems,” says Markram. This includes simple bag-of-words models, as well as more sophisticated deep-learning ones. AIRA also leverages a large knowledge base of publications and authors.

Markram notes that, to address issues of possible AI bias, “We…[build] our own datasets and [design] our own algorithms. We make sure no statistical biases appear in the sampling of training and testing data. For example, when building a model to assess language quality, scientific fields are equally represented so the model isn’t biased toward any specific topic.” Feedback from domain experts, including any errors they catch, is captured and used as additional training data for the machine- and deep-learning models. “By regularly re-training, we make sure our models improve in terms of accuracy and stay up-to-date.”

The AI’s job is to flag concerns; humans take the final decisions, says Petrariu. As an example, he cites image manipulation detection—something AI is super-efficient at but is nearly impossible for a human to perform with the same accuracy. “About 10 percent of our flagged images have some sort of problem,” he adds. “[In academic publishing] nobody has done this kind of comprehensive check [using AI] before,” says Petrariu. AIRA, he adds, facilitates Frontiers’ mission to make science open and knowledge accessible to all.

#437758 Remotely Operated Robot Takes Straight ...

Roboticists love hard problems. Challenges like the DRC and SubT have helped (and are still helping) to catalyze major advances in robotics, but not all hard problems require a massive amount of DARPA funding—sometimes, a hard problem can just be something very specific that’s really hard for a robot to do, especially relative to the ease with which a moderately trained human might be able to do it. Catching a ball. Putting a peg in a hole. Or using a straight razor to shave someone’s face without Sweeney Todd-izing them.

This particular roboticist who sees straight-razor face shaving as a hard problem that robots should be solving is John Peter Whitney, who we first met back at IROS 2014 in Chicago when (working at Disney Research) he introduced an elegant fluidic actuator system. These actuators use tubes containing a fluid (like air or water) to transmit forces from a primary robot to a secondary robot in a very efficient way that also allows for either compliance or very high fidelity force feedback, depending on the compressibility of the fluid.

Photo: John Peter Whitney/Northeastern University

Barber meets robot: Boston-based barber Jesse Cabbage [top right] observes the machine created by roboticist John Peter Whitney. Before testing the robot on Whitney’s face, they used his arm for a quick practice run [bottom].

Whitney is now at Northeastern University, in Boston, and he recently gave a talk at the RSS workshop on “Reacting to Contact,” where he suggested that straight razor shaving would be an interesting and valuable problem for robotics to work toward, due to its difficulty and requirement for an extremely high level of both performance and reliability.

Now, a straight razor is sort of like a safety razor, except with the safety part removed, which in fact does make it significantly less safe for humans, much less robots. Also not ideal for those worried about safety is that as part of the process the razor ends up in distressingly close proximity to things like the artery that is busily delivering your brain’s entire supply of blood, which is very close to the top of the list of things that most people want to keep blades very far away from. But that didn’t stop Whitney from putting his whiskers where his mouth is and letting his robotic system mediate the ministrations of a professional barber. It’s not an autonomous robotic straight-razor shave (because Whitney is not totally crazy), but it’s a step in that direction, and requires that the hardware Whitney developed be dead reliable.

Perhaps that was a poor choice of words. But, rest assured that Whitney lived long enough to answer our questions after. Here’s the video; it’s part of a longer talk, but it should start in the right spot, at about 23:30.

If Whitney looked a little bit nervous to you, that’s because he was. “This was the first time I’d ever been shaved by someone (something?!) else with a straight razor,” he told us, and while having a professional barber at the helm was some comfort, “the lack of feeling and control on my part was somewhat unsettling.” Whitney says that the barber, Jesse Cabbage of Dentes Barbershop in Somerville, Mass., was surprised by how well he could feel the tactile sensations being transmitted from the razor. “That’s one of the reasons we decided to make this video,” Whitney says. “I can’t show someone how something feels, so the next best thing is to show a delicate task that either from experience or intuition makes it clear to the viewer that the system must have these properties—otherwise the task wouldn’t be possible.”

And as for when Whitney might be comfortable getting shaved by a robotic system without a human in the loop? It’s going to take a lot of work, as do most other hard problems in robotics. “There are two parts to this,” he explains. “One is fault-tolerance of the components themselves (software, electronics, etc.) and the second is the quality of the perception and planning algorithms.”

He offers a comparison to self-driving cars, in which similar (or greater) risks are incurred: “To learn how to perceive, interpret, and adapt, we need a very high-fidelity model of the problem, or a wealth of data and experience, or both,” he says. “But in the case of shaving we are greatly lacking in both!” He continues with the analogy: “I think there is a natural progression—the community started with autonomous driving of toy cars on closed courses and worked up to real cars carrying human passengers; in robotic manipulation we are beginning to move out of the ‘toy car’ stage and so I think it’s good to target high-consequence hard problems to help drive progress.”

Of course, the ultimate goal here is much more general than the creation of a dedicated straight razor shaving robot; it’s a challenge that includes a host of sub-goals that will benefit robotics more generally. This particular hardware system Whitney is developing is actually a testbed for exploring MRI-compatible remote needle biopsy, and he and his students are collaborating with Brigham and Women’s Hospital in Boston on adapting this technology to prostate biopsy and ablation procedures. They’re also exploring how delicate touch can be used as a way to map an environment and localize within it, especially where using vision may not be a good option. “These traits and behaviors are especially interesting for applications where we must interact with delicate and uncertain environments,” says Whitney. “Medical robots, assistive and rehabilitation robots and exoskeletons, and shared-autonomy teleoperation for delicate tasks.”

A paper with more details on this robotic system, “Series Elastic Force Control for Soft Robotic Fluid Actuators,” is available on arXiv.
