Tag Archives: camera

#437709 iRobot Announces Major Software Update, ...

Since the release of the very first Roomba in 2002, iRobot’s long-term goal has been to deliver cleaner floors in a way that’s effortless and invisible. Which sounds pretty great, right? And arguably, iRobot has managed to do exactly this, with its most recent generation of robot vacuums that make their own maps and empty their own dustbins. For those of us who trust our robots, this is awesome, but iRobot has gradually been realizing that many Roomba users either don’t want this level of autonomy, or aren’t ready for it.

Today, iRobot is announcing a major new update to its app that represents a significant shift of its overall approach to home robot autonomy. Humans are being brought back into the loop through software that tries to learn when, where, and how you clean so that your Roomba can adapt itself to your life rather than the other way around.

To understand why this is such a shift for iRobot, let’s take a very brief look back at how the Roomba interface has evolved over the last couple of decades. The first generation of Roomba had three buttons on it that allowed (or required) the user to select whether the room being vacuumed was small, medium, or large. iRobot ditched that system one generation later, replacing the room size buttons with a single “clean” button. Programmable scheduling meant that users no longer needed to push any buttons at all, and with Roombas able to find their way back to their docking stations, all you needed to do was empty the dustbin. And with the most recent few generations (the s and i series), the dustbin emptying is also done for you, reducing direct interaction with the robot to once a month or less.

Image: iRobot

iRobot CEO Colin Angle believes that working toward more intelligent human-robot collaboration is “the brave new frontier” of AI.

Where the top-end Roombas are now reflects a goal that iRobot has been working toward since 2002: With autonomy, scheduling, and the clean base to empty the bin, you can set up your Roomba to vacuum when you’re not home, giving you cleaner floors every single day without you even being aware that the Roomba is hard at work while you’re out. It’s not just hands-off, it’s brain-off. No noise, no fuss, just things being cleaner thanks to the efforts of a robot that does its best to be invisible to you. Personally, I’ve been completely sold on this idea for home robots, and iRobot CEO Colin Angle was as well.

“I probably told you that the perfect Roomba is the Roomba that you never see, you never touch, you just come home every day and it’s done the right thing,” Angle told us. “But customers don’t want that—they want to be able to control what the robot does. We started to hear this a couple years ago, and it took a while before it sunk in, but it made sense.”

How? Angle compares it to having a human come into your house to clean, but not being allowed to tell them where or when to do their job. Maybe after a while, you’d build up the amount of trust necessary for that to work, but in the short term, it would likely be frustrating. And people get frustrated with their Roombas for this reason. “The desire to have more control over what the robot does kept coming up, and for me, it required a pretty big shift in my view of what intelligence we were trying to build. Autonomy is not intelligence. We need to do something more.”

That something more, Angle says, is a partnership as opposed to autonomy. It’s an acknowledgement that not everyone has the same level of trust in robots as the people who build them. It’s an understanding that people want to feel in control of homes that they’ve set up the way they want and have been cleaning the way they want, and that a robot shouldn’t just come in and do its own thing.

“Until the robot proves that it knows enough about your home and about the way that you want your home cleaned,” Angle says, “you can’t move forward.” He adds that this is one of those things that seem obvious in retrospect, but even if they’d wanted to address the issue before, they didn’t have the technology to solve the problem. Now they do. “This whole journey has been earning the right to take this next step, because a robot can’t be responsive if it’s incompetent,” Angle says. “But thinking that autonomy was the destination was where I was just completely wrong.”

The previous iteration of the iRobot app (and Roombas themselves) is built around one big fat CLEAN button. The new approach instead tries to figure out in much more detail where the robot should clean, and when, using a mixture of autonomous technology and interaction with the user.

Where to Clean
Knowing where to clean depends on your Roomba having a detailed and accurate map of its environment. For several generations now, Roombas have been using visual simultaneous localization and mapping (VSLAM) to build persistent maps of your home. These maps have been used to tell the Roomba to clean in specific rooms, but that’s about it. With the new update, Roombas with cameras will be able to recognize some objects and features in your home, including chairs, tables, couches, and even countertops. The robots will use these features to identify where messes tend to happen so that they can focus on those areas—like around the dining room table or along the front of the couch.

We should take a minute here to clarify how the Roomba is using its camera. The original (primary?) purpose of the camera was for VSLAM, where the robot would take photos of your home, downsample them into QR-code-like patterns of light and dark, and then use those (with the assistance of other sensors) to navigate. Now the camera is also being used to take pictures of other stuff around your house to make that map more useful.

Photo: iRobot

This is done through machine learning using a library of images of common household objects from a floor perspective that iRobot had to develop from scratch. Angle clarified for us that this is all done via a neural net that runs on the robot, and that “no recognizable images are ever stored on the robot or kept, and no images ever leave the robot.” Worst case, if all the data iRobot has about your home gets somehow stolen, the hacker would only know that (for example) your dining room has a table in it and the approximate size and location of that table, because the map iRobot has of your place only stores symbolic representations rather than images.
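
To make that privacy point concrete, here’s a minimal sketch of what a purely symbolic map entry could look like, with labels and rough geometry instead of imagery. The class names and fields are hypothetical illustrations of the idea, not iRobot’s actual data format.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class MapObject:
    """A symbolic record of a recognized object: a label plus rough geometry.

    No pixels are kept, only what kind of thing it is and roughly where it sits.
    """
    label: str       # e.g. "dining_table", "couch"
    x_m: float       # approximate center position in the map frame, meters
    y_m: float
    width_m: float   # approximate footprint
    depth_m: float


@dataclass
class RoomMap:
    """A room as a list of symbolic objects, from which focus zones can be derived."""
    name: str
    objects: List[MapObject] = field(default_factory=list)

    def focus_zones(self, margin_m: float = 0.5):
        """Return padded rectangles around furniture, where messes tend to happen."""
        zones = []
        for obj in self.objects:
            zones.append((
                obj.x_m - obj.width_m / 2 - margin_m,
                obj.y_m - obj.depth_m / 2 - margin_m,
                obj.x_m + obj.width_m / 2 + margin_m,
                obj.y_m + obj.depth_m / 2 + margin_m,
            ))
        return zones


# Example: the worst a leak could reveal is "there's a table about here."
dining = RoomMap("dining_room", [MapObject("dining_table", 2.1, 3.0, 1.6, 0.9)])
print(dining.focus_zones())
```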

Another useful new feature is intended to help manage the “evil Roomba places” (as Angle puts it) that every home has and that cause Roombas to get stuck. If a place is evil enough that the Roomba gives up completely and has to call you for help, the robot will now remember it and suggest that you either make some changes or let it stop cleaning there, which seems reasonable.

When to Clean
It turns out that the primary cause of mission failure for Roombas is not that they get stuck or that they run out of battery—it’s user cancellation, usually because the robot is getting in the way or being noisy when you don’t want it to be. “If you kill a Roomba’s job because it annoys you,” points out Angle, “how is that robot being a good partner? I think it’s an epic fail.” Of course, it’s not the robot’s fault, because Roombas only clean when we tell them to, which Angle says is part of the problem. “People actually aren’t very good at making their own schedules—they tend to oversimplify, and not think through what their schedules are actually about, which leads to lots of [figurative] Roomba death.”

To help you figure out when the robot should actually be cleaning, the new app will look for patterns in when you ask the robot to clean, and then recommend a schedule based on those patterns. That might mean the robot cleans different areas at different times every day of the week. The app will also make event-based scheduling recommendations, integrated with other smart home devices. Would you prefer the Roomba to clean every time you leave the house? The app can integrate with your security system (or garage door, or any number of other things) and take care of that for you.
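
As a rough illustration of the kind of pattern mining this implies, the toy sketch below tallies when a user has historically started cleaning jobs and proposes the most common weekday-and-hour slots as a recurring schedule. It’s a simple heuristic of our own, not iRobot’s algorithm.

```python
from collections import Counter
from datetime import datetime


def recommend_schedule(clean_starts, top_n=3, min_count=2):
    """Suggest recurring (weekday, hour) slots from past manual clean requests.

    clean_starts: list of datetimes for when the user hit "clean".
    Returns up to top_n slots that occurred at least min_count times.
    """
    slots = Counter((dt.strftime("%A"), dt.hour) for dt in clean_starts)
    return [(day, f"{hour:02d}:00", n)
            for (day, hour), n in slots.most_common(top_n)
            if n >= min_count]


history = [
    datetime(2020, 8, 3, 19), datetime(2020, 8, 10, 19),   # Monday evenings
    datetime(2020, 8, 17, 19),
    datetime(2020, 8, 8, 10), datetime(2020, 8, 15, 10),   # Saturday mornings
]
print(recommend_schedule(history))
# [('Monday', '19:00', 3), ('Saturday', '10:00', 2)]
```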

More generally, Roomba will now try to fit into the kinds of cleaning routines that many people already have established. For example, the app may suggest an “after dinner” routine that cleans just around the kitchen and dining room table. The app will also, to some extent, pay attention to the environment and season. It might suggest increasing your vacuuming frequency if pollen counts are especially high, or if it’s pet shedding season and you have a dog. Unfortunately, Roomba isn’t (yet?) capable of recognizing dogs on its own, so the app has to cheat a little bit by asking you some basic questions.

A Smarter App

Image: iRobot

The app update, which should be available starting today, is free. The scheduling and recommendations will work on every Roomba model, although for object recognition and anything related to mapping, you’ll need one of the more recent and fancier models with a camera. Future app updates will happen on a more aggressive schedule. Major app releases should happen every six months, with incremental updates happening even more frequently than that.

Angle also told us that, overall, this change in direction represents a substantial shift in resources for iRobot; the company has pivoted two-thirds of its engineering organization to focus on software-based collaborative intelligence rather than hardware. “It’s not like we’re done doing hardware,” Angle assured us. “But we do think about hardware differently. We view our robots as platforms that have longer life cycles, and each platform will be able to support multiple generations of software. We’ve kind of decoupled robot intelligence from hardware, and that’s a change.”

Angle believes that working toward more intelligent collaboration between humans and robots is “the brave new frontier of artificial intelligence. I expect it to be the frontier for a reasonable amount of time to come,” he adds. “We have a lot of work to do to create the type of easy-to-use experience that consumer robots need.” Continue reading

Posted in Human Robots

#437683 iRobot Remembers That Robots Are ...

iRobot has released several new robots over the last few years, including the i7 and s9 vacuums. Both of these models are very fancy and very capable, packed with innovative and useful features that we’ve been impressed by. They’re both also quite expensive—with dirt docks included, you’re looking at US $800 for the i7+, and a whopping $1,100 for the s9+. You can knock a couple hundred bucks off of those prices if you don’t want the docks, but still, these vacuums are absolutely luxury items.

If you just want something that’ll do some vacuuming so that you don’t have to, iRobot has recently announced a new Roomba option. The Roomba i3 is iRobot’s new low to midrange vacuum, starting at $400. It’s not nearly as smart as the i7 or the s9, but it can navigate (sort of) and make maps (sort of) and do some basic smart home integration. If that sounds like all you need, the i3 could be the robot vacuum for you.

iRobot calls the i3 “stylish,” and it does look pretty neat with that fabric top. Underneath, you get dual rubber primary brushes plus a side brush. There’s limited compatibility with the iRobot Home app and IFTTT, along with Alexa and Google Home. The i3 is also compatible with iRobot’s Clean Base, but that’ll cost you an extra $200, and iRobot refers to this bundle as the i3+.

The reason that the i3 only offers limited compatibility with iRobot’s app is that the i3 is missing the top-mounted camera that you’ll find in more expensive models. Instead, it relies on a downward-looking optical sensor to help it navigate, and it builds up a map as it’s cleaning by keeping track of when it bumps into obstacles and paying attention to internal sensors like a gyro and wheel odometers. The i3 can localize directly on its charging station or Clean Base (which have beacons on them that the robot can see if it’s close enough), which allows it to resume cleaning after emptying its bin or recharging. You’ll get a map of the area that the i3 has cleaned once it’s finished, but that map won’t persist between cleaning sessions, meaning that you can’t do things like set keep-out zones or identify specific rooms for the robot to clean. Many of the more useful features that iRobot’s app offers are based on persistent maps, and this is probably the biggest gap in functionality between the i3 and its more expensive siblings.
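
For a sense of what “keeping track” with a gyro and wheel odometers involves, here’s the textbook differential-drive dead-reckoning update rather than iRobot’s implementation; the wheel spacing is a made-up placeholder, and a real robot would fuse this with its optical floor sensor and bump events to keep the drift in check.

```python
import math


class DeadReckoning:
    """Integrate wheel odometry (plus an optional gyro) into an (x, y, heading) pose."""

    def __init__(self, wheel_base_m=0.23):   # wheel separation; illustrative value
        self.x = 0.0        # meters
        self.y = 0.0        # meters
        self.theta = 0.0    # radians
        self.wheel_base = wheel_base_m

    def update(self, d_left_m, d_right_m, gyro_dtheta=None):
        """Advance the pose by one sample of wheel travel (and optional gyro delta)."""
        d_center = (d_left_m + d_right_m) / 2.0
        # Prefer the gyro for the heading change; fall back to wheel geometry.
        if gyro_dtheta is None:
            gyro_dtheta = (d_right_m - d_left_m) / self.wheel_base
        self.x += d_center * math.cos(self.theta + gyro_dtheta / 2.0)
        self.y += d_center * math.sin(self.theta + gyro_dtheta / 2.0)
        self.theta = (self.theta + gyro_dtheta) % (2.0 * math.pi)
        return self.x, self.y, self.theta


odo = DeadReckoning()
for _ in range(100):            # drive about a meter while curving gently left
    odo.update(0.010, 0.011)
print(odo.x, odo.y, odo.theta)  # small errors accumulate with every step
```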

According to iRobot senior global product manager Sarah Wang, the kind of augmented dead-reckoning-based mapping that the i3 uses actually works really well: “Based on our internal and external testing, the performance is equivalent with our products that have cameras, like the Roomba 960,” she says. To get this level of performance, though, you do have to be careful, Wang adds. “If you kidnap i3, then it will be very confused, because it doesn’t have a reference to know where it is.” “Kidnapping” is a term that’s used often in robotics to refer to a situation in which an autonomous robot gets moved to an unmapped location, and in the context of a home robot, the best example of this is if you decide that you want your robot to vacuum a different room instead, so you pick it up and move it there.

iRobot used to make this easy by giving all of its robots carrying handles, but not anymore, because getting moved around makes things really difficult for any robot trying to keep track of where it is. While robots like the i7 can recover using their cameras to look for unique features that they recognize, the only permanent, unique landmark that the i3 can for sure identify is the beacon on its dock. What this means is that when it comes to the i3, even more than other Roomba models, the best strategy is to just “let it do its thing,” says iRobot senior principal system engineer Landon Unninayar.

Photo: iRobot

The Roomba i3 is iRobot’s new low to midrange vacuum, starting at $400.

If you’re looking to spend a bit less than the $400 starting price of the i3, there are other options to be aware of as well. The Roomba 614, for example, does a totally decent job and costs $250. Its scheduling isn’t very clever, it doesn’t make maps, and it won’t empty itself, but it will absolutely help keep your floors clean as long as you don’t mind being a little bit more hands-on. (And there’s also Neato’s D4, which offers basic persistent maps—and lasers!—for $330.)

The other thing to consider if you’re trying to decide between the i3 and a more expensive Roomba is that without the camera, the i3 likely won’t be able to take advantage of nearly as many of the future improvements that iRobot has said it’s working on. Spending more money on a robot with additional sensors isn’t just buying what it can do now, but also investing in what it may be able to do later on, with its more sophisticated localization and ability to recognize objects. iRobot has promised major app updates every six months, and our guess is that most of the cool new stuff is going to show up in the i7 and s9. So, if your top priority is just cleaner floors, the i3 is a solid choice. But if you want to be part of what iRobot is working on next, the i3 might end up holding you back. Continue reading

Posted in Human Robots

#437645 How Robots Became Essential Workers in ...

Photo: Sivaram V/Reuters

A robot, developed by Asimov Robotics to spread awareness about the coronavirus, holds a tray with face masks and sanitizer.

As the coronavirus emergency exploded into a full-blown pandemic in early 2020, forcing countless businesses to shutter, robot-making companies found themselves in an unusual situation: Many saw a surge in orders. Robots don’t need masks, can be easily disinfected, and, of course, they don’t get sick.

An army of automatons has since been deployed all over the world to help with the crisis: They are monitoring patients, sanitizing hospitals, making deliveries, and helping frontline medical workers reduce their exposure to the virus. Not all robots operate autonomously—many, in fact, require direct human supervision, and most are limited to simple, repetitive tasks. But robot makers say the experience they’ve gained during this trial-by-fire deployment will make their future machines smarter and more capable. These photos illustrate how robots are helping us fight this pandemic—and how they might be able to assist with the next one.

DROID TEAM

Photo: Clement Uwiringiyimana/Reuters

A squad of robots serves as the first line of defense against person-to-person transmission at a medical center in Kigali, Rwanda. Patients walking into the facility get their temperature checked by the machines, which are equipped with thermal cameras atop their heads. Developed by UBTech Robotics, in China, the robots also use their distinctive appearance—they resemble characters out of a Star Wars movie—to get people’s attention and remind them to wash their hands and wear masks.

Photo: Clement Uwiringiyimana/Reuters

SAY “AAH”
To speed up COVID-19 testing, a team of Danish doctors and engineers at the University of Southern Denmark and at Lifeline Robotics is developing a fully automated swab robot. It uses computer vision and machine learning to identify the perfect target spot inside the person’s throat; then a robotic arm with a long swab reaches in to collect the sample—all done with a swiftness and consistency that humans can’t match. In this photo, one of the creators, Esben Østergaard, puts his neck on the line to demonstrate that the robot is safe.

Photo: University of Southern Denmark

GERM ZAPPER
After six of its doctors became infected with the coronavirus, the Sassarese hospital in Sardinia, Italy, tightened its safety measures. It also brought in the robots. The machines, developed by UVD Robots, use lidar to navigate autonomously. Each bot carries an array of powerful short-wavelength ultraviolet-C lights that destroy the genetic material of viruses and other pathogens after a few minutes of exposure. Now there is a spike in demand for UV-disinfection robots as hospitals worldwide deploy them to sterilize intensive care units and operating theaters.

Photo: UVD Robots

RUNNING ERRANDS

In medical facilities, an ideal role for robots is taking over repetitive chores so that nurses and physicians can spend their time doing more important tasks. At Shenzhen Third People’s Hospital, in China, a robot called Aimbot drives down the hallways, enforcing face-mask and social-distancing rules and spraying disinfectant. At a hospital near Austin, Texas, a humanoid robot developed by Diligent Robotics fetches supplies and brings them to patients’ rooms. It repeats this task day and night, tirelessly, allowing the hospital staff to spend more time interacting with patients.

Photos, left: Diligent Robotics; right: UBTech Robotics

THE DOCTOR IS IN
Nurses and doctors at Circolo Hospital in Varese, in northern Italy—the country’s hardest-hit region—use robots as their avatars, enabling them to check on their patients around the clock while minimizing exposure and conserving protective equipment. The robots, developed by Chinese firm Sanbot, are equipped with cameras and microphones and can also access patient data like blood oxygen levels. Telepresence robots, originally designed for offices, are becoming an invaluable tool for medical workers treating highly infectious diseases like COVID-19, reducing the risk that they’ll contract the pathogen they’re fighting against.

Photo: Miguel Medina/AFP/Getty Images

HELP FROM ABOVE

Photo: Zipline

Authorities in several countries attempted to use drones to enforce lockdowns and social-distancing rules, but the effectiveness of such measures remains unclear. A better use of drones was for making deliveries. In the United States, startup Zipline deployed its fixed-wing autonomous aircraft to connect two medical facilities 17 kilometers apart. For the staff at the Huntersville Medical Center, in North Carolina, masks, gowns, and gloves literally fell from the skies. The hope is that drones like Zipline’s will one day be able to deliver other kinds of critical materials, transport test samples, and distribute drugs and vaccines.

Photos: Zipline

SPECIAL DELIVERY
It’s not quite a robot takeover, but the streets and sidewalks of dozens of cities around the world have seen a proliferation of hurrying wheeled machines. Delivery robots are now in high demand as online orders continue to skyrocket.

In Hamburg, the six-wheeled robots developed by Starship Technologies navigate using cameras, GPS, and radar to bring groceries to customers.

Photo: Christian Charisius/Picture Alliance/Getty Images

In Medellín, Colombia, a startup called Rappi deployed a fleet of robots, built by Kiwibot, to deliver takeout to people in lockdown.

Photo: Joaquin Sarmiento/AFP/Getty Images

China’s JD.com, one of the country’s largest e-commerce companies, is using 20 robots to transport goods in Changsha, Hunan province; each vehicle has 22 separate compartments, which customers unlock using face authentication.

Photos: TPG/Getty Images

LIFE THROUGH ROBOTS
Robots can’t replace real human interaction, of course, but they can help people feel more connected at a time when meetings and other social activities are mostly on hold.

In Ostend, Belgium, ZoraBots brought one of its waist-high robots, equipped with cameras, microphones, and a screen, to a nursing home, allowing residents like Jozef Gouwy to virtually communicate with loved ones despite a ban on in-person visits.

Photo: Yves Herman/Reuters

In Manila, nearly 200 high school students took turns “teleporting” into a tall wheeled robot, developed by the school’s robotics club, to walk on stage during their graduation ceremony.

Photo: Ezra Acayan/Getty Images

And while Japan’s Chiba Zoological Park was temporarily closed due to the pandemic, the zoo used an autonomous robotic vehicle called RakuRo, equipped with 360-degree cameras, to offer virtual tours to children quarantined at home.

Photo: Tomohiro Ohsumi/Getty Images

SENTRY ROBOTS
Offices, stores, and medical centers are adopting robots as enforcers of a new coronavirus code.

At Fortis Hospital in Bangalore, India, a robot called Mitra uses a thermal camera to perform a preliminary screening of patients.

Photo: Manjunath Kiran/AFP/Getty Images

In Tunisia, the police use a tanklike robot to patrol the streets of its capital city, Tunis, verifying that citizens have permission to go out during curfew hours.

Photo: Khaled Nasraoui/Picture Alliance/Getty Images

And in Singapore, the Bishan-Ang Mo Kio Park unleashed a Spot robot dog, developed by Boston Dynamics, to search for social-distancing violators. Spot won’t bark at them but will rather play a recorded message reminding park-goers to keep their distance.

Photo: Roslan Rahman/AFP/Getty Images

This article appears in the October 2020 print issue as “How Robots Became Essential Workers.” Continue reading

Posted in Human Robots

#437635 Toyota Research Demonstrates ...

Over the last several years, Toyota has been putting more muscle into forward-looking robotics research than just about anyone. In addition to the Toyota Research Institute (TRI), there’s that massive 175-acre robot-powered city of the future that Toyota still plans to build next to Mount Fuji. Even Toyota itself acknowledges that it might be crazy, but that’s just how they roll—as TRI CEO Gill Pratt told me a while back, when Toyota decides to do something, they really do go all-in on it.

TRI has been focusing heavily on home robots, which is reflective of the long-term nature of what TRI is trying to do, because home robots are both where we’ll need robots the most and where it’s going to be hardest to deploy them. The unpredictable nature of homes, and the fact that homes tend to have squishy fragile people in them, are robot-unfriendly characteristics, but as the population continues to age (an increasingly acute problem in Japan), homes offer an enormous amount of potential for helping us maintain our independence.

Today, Toyota is showing off some of the research that it’s been working on recently, in the form of a virtual reality presentation in lieu of an in-person press event. For journalists, TRI pre-loaded the recording onto a VR headset, which was FedEx’ed to my house. You can watch the entire 40-minute presentation in 360 video on YouTube (or in VR if you have a headset of your own), but if you don’t watch the whole thing, you should at least check out the full-on GLaDOS (with arms) that TRI thinks belongs in your home.

The presentation features an introduction from Gill Pratt, who looks entirely too comfortable embedded inside of one of TRI’s telepresence robots. It covers a lot of territory, but the highlight is almost certainly the new hardware that TRI demonstrates.

Soft bubble gripper

Photo: TRI

This is a “soft bubble gripper,” under development at TRI’s Cambridge, Mass., branch. These passively-compliant, air-filled grippers make it easier to grasp many different kinds of objects safely, but the nifty thing is that they’ve got cameras inside of them watching a pattern of dots on the interior of the soft membrane.

When the outside of the bubble makes contact with an object, the bubble deforms, and the deformation of the dot pattern on the inside can be tracked by the camera to determine both directions and magnitudes of forces. This is a concept that we’ve seen elsewhere before, but TRI’s implementation is a clever way of making an inherently safe end effector that can still perform all the sensing you need it to do for relatively complex manipulation tasks.
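
To give a feel for how a dot pattern becomes a force estimate, here’s a toy version: compare dot positions between an undeformed and a deformed frame, and map the average displacement to a force through an assumed membrane stiffness. The linear spring model and the stiffness number are our own simplifying assumptions; TRI’s paper describes the actual sensing and calibration.

```python
import numpy as np


def estimate_contact_force(dots_rest, dots_deformed, stiffness_n_per_mm=0.8):
    """Estimate in-plane force direction and magnitude from internal dot motion.

    dots_rest, dots_deformed: (N, 2) arrays of dot centers in millimeters,
    matched by index (the internal camera keeps track of which dot is which).
    Returns (unit direction vector, force magnitude in newtons).
    """
    disp = np.asarray(dots_deformed, float) - np.asarray(dots_rest, float)
    mean_disp = disp.mean(axis=0)              # average shear of the membrane
    magnitude_mm = np.linalg.norm(mean_disp)
    if magnitude_mm < 1e-6:
        return np.zeros(2), 0.0
    direction = mean_disp / magnitude_mm
    # Assume force grows linearly with displacement (a crude spring model).
    return direction, stiffness_n_per_mm * magnitude_mm


rest = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], float)
pushed = rest + np.array([1.5, 0.5])           # every dot shifted toward +x
print(estimate_contact_force(rest, pushed))
```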

The bubble gripper was presented at ICRA this year, and you can read the technical paper here.

Ceiling-mounted home robot

Photo: TRI

I don’t know whether robots dangling from the ceiling seemed sinister pre-Portal, but they sure as heck do to me after playing through that game a couple of times, and the feeling has since been reinforced by AUTO from WALL-E.

The reason that we generally see robots mounted on the floor or on tables or on mobile bases is that we’re bipeds, not bats, and giving a robot access to a human-like workspace is easiest to do if you also give that robot a human-like position and orientation. And if you want to be able to reach stuff high up, you do what TRI did with their previous generation of kitchen manipulator, and just give it the ability to make itself super tall. But TRI is convinced that the ceiling is a good place to put our future home robots:

One innovative concept is a “gantry robot” that would descend from an overhead framework to perform tasks such as loading the dishwasher, wiping surfaces, and clearing clutter. By traveling on the ceiling, the robot avoids the problems of navigating household floor clutter and navigating cramped spaces. When not in use, the robot would tuck itself up out of the way. To further investigate this idea, the team has built a laboratory prototype robot that can do all the same tasks as a floor-based mobile robot but with the innovative overhead mobility system.

One obvious problem with the gantry robot is that you have to install all kinds of stuff in your ceiling for this to work, which makes it very impractical (if not totally impossible) to introduce a system like this into a home that wasn’t built specifically for it. If, however, you do build a home with a robot like this in mind, the animation below from TRI shows how it could be extra useful. Suddenly, stairs are a non-issue. Payload is presumably also a non-issue, since loads can be transferred to the ceiling. Batteries become unnecessary, so the whole robot can be much lighter weight, which in turn makes it safer. Sensors get a fantastic view, and obstacle avoidance becomes trivial.

Robots as “time machines”

Photo: TRI

TRI’s presentation covered more than what we’ve highlighted here—our focus has been on the hardware prototypes, but TRI had more to talk about, including learning through demonstration, scaling learning through simulation, and how TRI has been working with users to figure out what research directions should be explored. It’s all available right now on YouTube, and it’s well worth 40 minutes of your time.

It’s only been five years since Toyota announced the $1 billion investment that established TRI, and it feels like the progress that’s been made since then has been substantial. It’s not often that vision, resources, and long-term commitment come together like this, and TRI’s emphasis on making life better for people is one of the things that helps to keep us optimistic about the future of robotics.

“What we’re really focused on is this principal idea of amplifying, rather than replacing, human beings,” Gill Pratt told us. “And what it means to amplify a person, particularly as they’re aging—what we’re really trying to do is build a time machine. This may sound fanciful, and of course we can’t build a real time machine, but maybe we can build robotic assistants to make our lives as we age seem as if we are actually using a time machine.” He explains that it doesn’t mean building robots for convenience or to do our jobs for us. “It means building technology that enables us to continue to live and to work and to relate to each other as if we were younger,” he says. “And that’s really what our main goal is.” Continue reading

Posted in Human Robots

#437624 AI-Powered Drone Learns Extreme ...

Quadrotors are among the most agile and dynamic machines ever created. In the hands of a skilled human pilot, they can pull off some astonishing sequences of maneuvers. And while autonomous flying robots have been getting better at flying dynamically in real-world environments, they still haven’t demonstrated the same level of agility as manually piloted ones.

Now researchers from the Robotics and Perception Group at the University of Zurich and ETH Zurich, in collaboration with Intel, have developed a neural network training method that “enables an autonomous quadrotor to fly extreme acrobatic maneuvers with only onboard sensing and computation.” Extreme.

There are two notable things here: First, the quadrotor can do these extreme acrobatics outdoors without any kind of external camera or motion-tracking system to help it out (all sensing and computing is onboard). Second, all of the AI training is done in simulation, without the need for an additional simulation-to-real-world (what researchers call “sim-to-real”) transfer step. Usually, a sim-to-real transfer step means putting your quadrotor into one of those aforementioned external tracking systems, so that it doesn’t completely bork itself while trying to reconcile the differences between the simulated world and the real world, where, as the researchers wrote in a paper describing their system, “even tiny mistakes can result in catastrophic outcomes.”

To enable “zero-shot” sim-to-real transfer, the neural net training in simulation uses an expert controller that knows exactly what’s going on to teach a “student controller” that has much less perfect knowledge. That is, the simulated sensory input that the student ends up using as it learns to follow the expert has been abstracted to present the kind of imperfect, imprecise data it’s going to encounter in the real world. This can involve things like abstracting away the image part of the simulation until you’d have no way of telling the difference between abstracted simulation and abstracted reality, which is what allows the system to make that sim-to-real leap.
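
Here’s a bare-bones sketch of that privileged-expert-to-student idea: the expert acts on perfect simulator state, the student only ever sees an abstracted (noisy) observation, and the student is regressed onto the expert’s actions. The toy dynamics, the Gaussian-noise “abstraction,” and the linear student policy are stand-ins meant to show the shape of the training loop, not the controller from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)


def expert_action(true_state):
    """Privileged expert: sees exact simulator state, outputs the right command."""
    return -1.5 * true_state                    # a simple stabilizing law


def abstract_obs(true_state):
    """Abstraction layer: degrade perfect state into what real sensors could give."""
    return true_state + rng.normal(0.0, 0.05, size=true_state.shape)


# Student: a linear policy (action = obs @ W), fit by least squares on expert labels.
dim = 4
states = rng.normal(size=(5000, dim))           # states sampled in simulation
obs = np.stack([abstract_obs(s) for s in states])
labels = np.stack([expert_action(s) for s in states])
W, *_ = np.linalg.lstsq(obs, labels, rcond=None)

# The student now maps abstracted observations to near-expert actions, and the same
# abstraction applied to real sensor data is what closes the sim-to-real gap.
test = rng.normal(size=dim)
print(expert_action(test), abstract_obs(test) @ W)
```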

The simulation environment that the researchers used was Gazebo, slightly modified to better simulate quadrotor physics. Meanwhile, over in reality, a custom 1.5-kilogram quadrotor with a 4:1 thrust-to-weight ratio performed the physical experiments, using only an Nvidia Jetson TX2 computing board and an Intel RealSense T265, a dual fisheye camera module optimized for V-SLAM. To challenge the learning system, it was trained to perform three acrobatic maneuvers plus a combo of all of them:

Image: University of Zurich/ETH Zurich/Intel

Reference trajectories for acrobatic maneuvers. Top row, from left: Power Loop, Barrel Roll, and Matty Flip. Bottom row: Combo.

All of these maneuvers require high accelerations of up to 3 g’s and careful control, and the Matty Flip is particularly challenging, at least for humans, because the whole thing is done while the drone is flying backwards. Still, after just a few hours of training in simulation, the drone was totally real-world competent at these tricks, and could even extrapolate a little bit to perform maneuvers that it was not explicitly trained on, like doing multiple loops in a row. Where humans still have the advantage over drones (as you might expect, since we’re talking about robots) is in quickly reacting to novel or unexpected situations. And when you’re doing this sort of thing outdoors, novel and unexpected situations are everywhere, from a gust of wind to a jealous bird.
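
To put the 3 g figure in perspective, the centripetal acceleration in a loop is a = v²/r, so you can back out how fast the drone has to fly to hold 3 g around a loop of a given radius. A quick sanity check, with a made-up 1.5-meter loop radius:

```python
import math

G = 9.81                  # m/s^2
a_centripetal = 3 * G     # the roughly 3 g quoted for these maneuvers
radius_m = 1.5            # hypothetical loop radius

v = math.sqrt(a_centripetal * radius_m)   # v^2 / r = a  ->  v = sqrt(a * r)
print(f"~{v:.1f} m/s ({v * 3.6:.0f} km/h) to hold 3 g around a {radius_m} m loop")
# ~6.6 m/s (24 km/h)
```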

For more details, we spoke with Antonio Loquercio from the University of Zurich’s Robotics and Perception Group.

IEEE Spectrum: Can you explain how the abstraction layer interfaces with the simulated sensors to enable effective sim-to-real transfer?

Antonio Loquercio: The abstraction layer applies a specific function to the raw sensor information. Exactly the same function is applied to the real and simulated sensors. The result of the function, which is “abstracted sensor measurements,” makes simulated and real observation of the same scene similar. For example, suppose we have a sequence of simulated and real images. We can very easily tell apart the real from the simulated ones given the difference in rendering. But if we apply the abstraction function of “feature tracks,” which are point correspondences in time, it becomes very difficult to tell which are the simulated and real feature tracks, since point correspondences are independent of the rendering. This applies for humans as well as for neural networks: Training policies on raw images gives low sim-to-real transfer (since images are too different between domains), while training on the abstracted images has high transfer abilities.
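
The “feature tracks” abstraction Loquercio describes is essentially a set of sparse point correspondences between consecutive frames. A generic way to compute something like that with off-the-shelf tools (OpenCV’s corner detector plus pyramidal Lucas-Kanade optical flow) is sketched below; it illustrates the idea, not the paper’s actual pipeline.

```python
import cv2
import numpy as np


def feature_tracks(prev_gray, curr_gray, max_corners=100):
    """Return (N, 2, 2) point correspondences between two grayscale frames.

    Each entry is [[x0, y0], [x1, y1]]: where a corner was, and where it moved to.
    The same function runs on simulated or real images, which is the point of
    abstracting to tracks instead of feeding raw pixels to the policy.
    """
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                   qualityLevel=0.01, minDistance=8)
    if pts0 is None:
        return np.empty((0, 2, 2), np.float32)
    pts1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts0, None)
    ok = status.ravel() == 1
    return np.stack([pts0[ok].reshape(-1, 2), pts1[ok].reshape(-1, 2)], axis=1)


# Toy usage: a synthetic frame and the same frame shifted a couple of pixels.
frame = (np.random.default_rng(1).random((240, 320)) * 255).astype(np.uint8)
shifted = np.roll(frame, 2, axis=1)
tracks = feature_tracks(frame, shifted)
print(tracks.shape)   # (number_of_tracked_corners, 2, 2)
```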

How useful is visual input from a camera like the Intel RealSense T265 for state estimation during such aggressive maneuvers? Would using an event camera substantially improve state estimation?

Our end-to-end controller does not require a state estimation module. It does, however, share some components with traditional state estimation pipelines, specifically the feature extractor and the inertial measurement unit (IMU) pre-processing and integration function. The inputs to the neural network are feature tracks and integrated IMU measurements. When looking at images with few features (for example when the camera points to the sky), the neural net will mainly rely on the IMU. When more features are available, the network uses them to correct the accumulated drift from the IMU. Overall, we noticed that for very short maneuvers IMU measurements were sufficient for the task. However, for longer ones, visual information was necessary to successfully address the IMU drift and complete the maneuver. Indeed, visual information reduces the odds of a crash by up to 30 percent in the longest maneuvers. We definitely think that event cameras can improve the current approach even further, since they could provide valuable visual information during high-speed flight.
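
The “integrated IMU measurements” half of the input is, in principle, just an accumulation of gyro and accelerometer samples over the window between camera frames. A stripped-down version of that integration, ignoring the bias and noise handling a real pipeline needs, might look like this:

```python
import numpy as np


def integrate_imu(gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """Accumulate rotation and velocity change over a window of IMU samples.

    gyro:  (N, 3) angular rates in rad/s (body frame)
    accel: (N, 3) specific forces in m/s^2 (body frame)
    dt:    sample period in seconds
    Returns (R, dv): net rotation matrix and gravity-compensated velocity change.
    """
    R = np.eye(3)
    dv = np.zeros(3)
    for w, a in zip(gyro, accel):
        dv += (R @ a + g) * dt            # rotate specific force into the start frame
        # First-order (small-angle) update of the rotation from the angular rate.
        wx, wy, wz = w * dt
        skew = np.array([[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0]])
        R = R @ (np.eye(3) + skew)
    return R, dv


# 100 samples at 500 Hz while hovering: gravity cancels, so dv stays near zero.
gyro = np.zeros((100, 3))
accel = np.tile([0.0, 0.0, 9.81], (100, 1))
print(integrate_imu(gyro, accel, 1.0 / 500.0)[1])
```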

You describe being able to train on “maneuvers that stretch the abilities of even expert human pilots.” What are some examples of acrobatics that your drones might be able to do that most human pilots would not be capable of?

The Matty Flip is probably one of the maneuvers that our approach can do very well, but human pilots find it very challenging. It basically entails doing a high-speed power loop while always looking backward. It is super challenging for humans, since they don’t see where they’re going and have problems estimating their speed. For our approach the maneuver is no problem at all, since we can estimate forward velocities as well as backward velocities.

What are the limits to the performance of this system?

At the moment the main limitation is the maneuver duration. We never trained a controller that could perform maneuvers longer than 20 seconds. In the future, we plan to address this limitation and train general controllers which can fly in that agile way for significantly longer with relatively small drift. In this way, we could start being competitive against human pilots in drone racing competitions.

Can you talk about how the techniques developed here could be applied beyond drone acrobatics?

The current approach allows us to do acrobatics and agile flight in free space. We are now working to perform agile flight in cluttered environments, which requires a higher degree of understanding of the surroundings than this project did. Drone acrobatics is of course only an example application. We selected it because it is a stress test of the controller’s performance. However, several other applications which require fast and agile flight can benefit from our approach. Examples are delivery (we want our Amazon packages ever faster, don’t we?), search and rescue, or inspection. Going faster allows us to cover more space in less time, saving battery costs. Indeed, for an autonomous drone, agile flight has battery consumption very similar to slow hovering.

“Deep Drone Acrobatics,” by Elia Kaufmann, Antonio Loquercio, René Ranftl, Matthias Müller, Vladlen Koltun, and Davide Scaramuzza from the Robotics and Perception Group at the University of Zurich and ETH Zurich, and Intel’s Intelligent Systems Lab, was presented at RSS 2020. Continue reading

Posted in Human Robots