#437765 Video Friday: Massive Robot Joins ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

AWS Cloud Robotics Summit – August 18-19, 2020 – [Online Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
IROS 2020 – October 25-29, 2020 – Las Vegas, Nevada
ICSR 2020 – November 14-16, 2020 – Golden, Colorado
Let us know if you have suggestions for next week, and enjoy today’s videos.

Here are some professional circus artists messing around with an industrial robot for fun, like you do.

The acrobats are part of Östgötateatern, a Swedish theatre group, and the chair bit got turned into its own act, called “The Last Fish.” But apparently the Swedish Work Environment Authority didn’t like that an industrial robot—a large ABB robotic arm—was being used in an artistic performance, arguing that the same safety measures that apply in a factory setting would apply on stage. In other words, the robot had to operate inside a protective cage and humans could not physically interact with it.

When told that their robot had to be removed, the acrobats went to court. And won! At least that’s what we understand from this Swedish press release. The court in Linköping, in southern Sweden, ruled that the safety measures taken by the theater had been sufficient. The group had worked with a local robotics firm, Dyno Robotics, to program the manipulator and learn how to interact with it as safely as possible. The robot—which the acrobats say is the eighth member of their troupe—will now be allowed to return.

[ Östgötateatern ]

Houston Mechatronics’ Aquanaut continues to be awesome, even in the middle of a pandemic. It’s taken the big step (big swim?) out of NASA’s swimming pool and into open water.

[ HMI ]

Researchers from Carnegie Mellon University and Facebook AI Research have created a navigation system for robots powered by common sense. The technique uses machine learning to teach robots how to recognize objects and understand where they’re likely to be found in a house. The result allows the machines to search more strategically.

[ CMU ]

Cassie manages 2.1 m/s, which is uncomfortably fast in a couple of different ways.

Next, untethered. After that, running!

[ Michigan Robotics ]

Engineers at Caltech have designed a new data-driven method to control the movement of multiple robots through cluttered, unmapped spaces, so they do not run into one another.

Multi-robot motion coordination is a fundamental robotics problem with applications ranging from urban search and rescue to the control of fleets of self-driving cars to formation-flying in cluttered environments. Two key challenges make multi-robot coordination difficult: first, robots moving in new environments must make split-second decisions about their trajectories despite having incomplete data about their future path; second, the presence of larger numbers of robots in an environment makes their interactions increasingly complex (and more prone to collisions).

To overcome these challenges, Soon-Jo Chung, Bren Professor of Aerospace, and Yisong Yue, professor of computing and mathematical sciences, along with Caltech graduate student Benjamin Rivière (MS ’18), postdoctoral scholar Wolfgang Hönig, and graduate student Guanya Shi, developed a multi-robot motion-planning algorithm called “Global-to-Local Safe Autonomy Synthesis,” or GLAS, which imitates a complete-information planner with only local information, and “Neural-Swarm,” a swarm-tracking controller augmented to learn complex aerodynamic interactions in close-proximity flight.
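To make the global-to-local idea concrete, here’s a minimal sketch of that structure (our illustration, not the Caltech code): each robot turns purely local observations into a velocity command, and a safety layer strips out any component of the command that heads toward an already-too-close neighbor. In GLAS the policy is a trained neural network that imitates a complete-information planner; the hand-tuned attract/repel function below is just a stand-in for it.

```python
import numpy as np

def local_observation(positions, goals, i, radius=2.0):
    """Relative goal plus relative positions of neighbors within `radius`."""
    rel_goal = goals[i] - positions[i]
    neighbors = [positions[j] - positions[i] for j in range(len(positions))
                 if j != i and np.linalg.norm(positions[j] - positions[i]) < radius]
    return rel_goal, neighbors

def policy(rel_goal, neighbors, k_goal=0.5, k_rep=0.3):
    """Stand-in for the learned network: attract to goal, repel neighbors."""
    cmd = k_goal * rel_goal
    for d in neighbors:
        cmd -= k_rep * d / (np.linalg.norm(d) + 1e-6) ** 2  # push away when close
    return cmd

def safety_filter(cmd, neighbors, d_min=0.5):
    """Remove any velocity component aimed at a neighbor closer than d_min."""
    for d in neighbors:
        if np.linalg.norm(d) < d_min and np.dot(cmd, d) > 0:
            cmd = cmd - (np.dot(cmd, d) / np.dot(d, d)) * d
    return cmd

# One decentralized step for a three-robot team (sequential update for brevity)
positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
goals = np.array([[5.0, 5.0], [-3.0, 2.0], [4.0, -1.0]])
for i in range(len(positions)):
    rel_goal, neighbors = local_observation(positions, goals, i)
    positions[i] += 0.1 * safety_filter(policy(rel_goal, neighbors), neighbors)
print(positions)
```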

[ Caltech ]

Fetch Robotics’ Freight robot is now hauling around pulsed xenon UV lamps to autonomously disinfect spaces with UV-A, UV-B, and UV-C, all at the same time.

[ SmartGuard UV ]

When you’re a vertically symmetrical quadruped robot, there is no upside-down.

[ Ghost Robotics ]

In the virtual world, the objects you pick up do not exist: you can see that cup or pen, but it does not feel like you’re touching them. That presented a challenge to EPFL professor Herbert Shea. Drawing on his extensive experience with silicone-based muscles and motors, Shea wanted to find a way to make virtual objects feel real. “With my team, we’ve created very small, thin and fast actuators,” explains Shea. “They are millimeter-sized capsules that use electrostatic energy to inflate and deflate.” The capsules have an outer insulating membrane made of silicone enclosing an inner pocket filled with oil. Each bubble is surrounded by four electrodes that can close like a zipper. When a voltage is applied, the electrodes are pulled together, causing the center of the capsule to swell like a blister. It is an ingenious system because the capsules, known as HAXELs, can move not only up and down, but also side to side and around in a circle. “When they are placed under your fingers, it feels as though you are touching a range of different objects,” says Shea.
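For a sense of the forces involved, the squeeze such capsules can generate is in the ballpark of the textbook electrostatic (Maxwell) pressure between two plates; the voltage and gap below are assumed values for illustration, not EPFL’s published specs.

```python
# Back-of-the-envelope zipping pressure: p = eps_r * eps0 * V^2 / (2 * d^2)
EPS0 = 8.854e-12   # vacuum permittivity, F/m
eps_r = 2.8        # typical relative permittivity of silicone (assumed)
V = 1000.0         # applied voltage, volts (assumed)
d = 20e-6          # insulating membrane thickness, meters (assumed)

pressure_pa = eps_r * EPS0 * V**2 / (2 * d**2)
print(f"~{pressure_pa / 1e3:.0f} kPa squeezing the oil pocket")  # ~31 kPa
```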

[ EPFL ]

Through the simple trick of reversing motors on impact, a quadrotor can land much more reliably on slopes.
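The logic is simple enough to sketch. Something like the snippet below captures the idea (our guess at the structure, not Sherbrooke’s actual controller): watch the IMU for the sharp deceleration of touchdown, then briefly command reverse thrust so the vehicle gets pressed into the slope instead of bouncing off it. All thresholds are assumed values.

```python
IMPACT_THRESHOLD_G = 3.0  # deceleration spike marking touchdown (assumed)
REVERSE_THRUST = -0.3     # fraction of max thrust, applied briefly (assumed)
REVERSE_TIME_S = 0.5      # how long to pin the airframe down (assumed)

def on_control_tick(imu_accel_g, t, state):
    """Returns a thrust override, or None to defer to the normal controller."""
    if state["phase"] == "landing" and imu_accel_g > IMPACT_THRESHOLD_G:
        state["phase"], state["t_impact"] = "reversing", t
        return REVERSE_THRUST  # reverse motors the instant we feel the impact
    if state["phase"] == "reversing":
        if t - state["t_impact"] > REVERSE_TIME_S:
            state["phase"] = "landed"
            return 0.0         # settled on the slope; motors off
        return REVERSE_THRUST
    return None

# Simulated ticks: steady descent, impact spike at t = 0.4 s, then settled
state = {"phase": "landing"}
for step, g in enumerate([1.0, 1.1, 4.5, 1.0, 1.0, 1.0]):
    print(step * 0.2, state["phase"], on_control_tick(g, step * 0.2, state))
```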

[ Sherbrooke ]

Turtlebot delivers candy at Harvard.

I <3 Turtlebot SO MUCH

[ Harvard ]

Traditional drone controllers are a little bit counterintuitive, because there’s one stick that’s forwards and backwards and another stick that’s up and down, but they’re both moving on the same axis. How does that make sense?! Here’s a remote that gives you actual z-axis control instead.

[ Fenics ]

Thanks Ashley!

Lio is a mobile robot platform with a multifunctional arm explicitly designed for human-robot interaction and personal care assistant tasks. The robot has already been deployed in several health care facilities, where it is functioning autonomously, assisting staff and patients on an everyday basis.

[ F&P Robotics ]

This video shows a ground vehicle autonomously exploring and mapping a multi-story parking garage and a connected patio on the Carnegie Mellon University campus. The vehicle runs onboard state estimation and mapping leveraging range, vision, and inertial sensing; local planning for collision avoidance; and terrain analysis. All processing is real-time, with no post-processing involved. The vehicle drives at 2 m/s throughout the exploration run. This work is part of the DARPA Subterranean Challenge.

[ CMU ]

Raytheon UK’s flagship STEM programme, the Quadcopter Challenge, gives 14- to 15-year-olds the chance to participate in a hands-on, STEM-based engineering challenge to build a fully operational quadcopter. Each team is provided with an identical kit of parts, tools and instructions to build and customise their quadcopter, whilst Raytheon UK STEM Ambassadors provide mentoring, technical support and deliver bite-size learning modules to support the build.

[ Raytheon ]

A video on some of the research work being carried out at the Australian Centre for Field Robotics, University of Sydney.

[ University of Sydney ]

Jeannette Bohg, assistant professor of computer science at Stanford University, gave one of the Early Career Award Keynotes at RSS 2020.

[ RSS 2020 ]

Adam Savage remembers Grant Imahara.

[ Tested ]

#437728 A Battery That’s Tough Enough To ...

Batteries can add considerable mass to any design, and they have to be supported using a sufficiently strong structure, which can add significant mass of its own. Now researchers at the University of Michigan have designed a structural zinc-air battery, one that integrates directly into the machine that it powers and serves as a load-bearing part.

That feature saves weight and thus increases effective storage capacity, adding to the already hefty energy density of the zinc-air chemistry. And the very elements that make the battery physically strong help contain the chemistry’s longstanding tendency to degrade over many hundreds of charge-discharge cycles.

The research is being published today in Science Robotics.

Nicholas Kotov, a professor of chemical engineering, is the leader of the project. He would not say how many watt-hours his prototype stores per gram, but he did note that zinc-air—because it draws on ambient air for its electricity-producing reactions—is inherently about three times as energy-dense as lithium-ion cells. And, because using the battery as a structural part means dispensing with an interior battery pack, you could free up perhaps 20 percent of a machine’s interior. Along with other factors, the new battery could in principle provide as much as 72 times the energy per unit of volume (not of mass) of today’s lithium-ion workhorses.
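The volumetric claim compounds two separate gains, roughly like this (illustrative arithmetic only; the “as much as 72 times” figure folds in additional factors the article doesn’t enumerate):

```python
baseline = 1.0                       # energy per unit machine volume, Li-ion pack
chemistry_gain = 3.0                 # zinc-air vs. lithium-ion (per Kotov)
packaging_gain = 1.0 / (1.0 - 0.20)  # no separate pack frees ~20% of interior

print(f"{baseline * chemistry_gain * packaging_gain:.2f}x")  # 3.75x, before the
# other (unenumerated) factors that push the article's figure toward 72x
```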

Illustration: Alice Kitterman/Science Robotics

“It’s not as if we invented something that was there before us,” Kotov says. “I look in the mirror and I see my layer of fat—that’s for the storage of energy, but it also serves other purposes,” like keeping you warm in the wintertime. (A similar advance occurred in rocketry when designers learned how to make some liquid propellant tanks load-bearing, eliminating the mass penalty of having separate external hull and internal tank walls.)

Others have spoken of putting batteries, including the lithium-ion kind, into load-bearing parts in vehicles. Ford, BMW, and Airbus, for instance, have expressed interest in the idea. The main problem to overcome is the tradeoff in load-bearing batteries between electrochemical performance and mechanical strength.

Image: Kotov Lab/University of Michigan

Key to the battery's physical toughness and to its long life cycle is the nanofiber membrane, made of Kevlar.

The Michigan group gets both qualities by using a solid electrolyte (which can’t leak under stress) and by covering the electrodes with a membrane whose nanostructure of fibers is derived from Kevlar. That makes the membrane tough enough to suppress the growth of dendrites—branching fibers of metal that tend to form on an electrode with every charge-discharge cycle and which degrade the battery.

The Kevlar need not be purchased new but can be salvaged from discarded body armor. Other manufacturing steps should be easy, too, Kotov says. He has only just begun to talk to potential commercial partners, but he says there’s no reason why his battery couldn’t hit the market in the next three or four years.

Drones and other autonomous robots might be the most logical first application because their range is so severely limited by their battery capacity. Also, because such robots don’t carry people about, they face less of a hurdle from safety regulators leery of a fundamentally new battery type.

“And it’s not just about the big Amazon robots but also very small ones,” Kotov says. “Energy storage is a very significant issue for small and flexible soft robots.”

Here’s a video showing how Kotov’s lab has used batteries to form the “exoskeleton” of robots that scuttle like worms or scorpions.

#437723 Minuscule RoBeetle Turns Liquid Methanol ...

It’s no secret that one of the most significant constraints on robots is power. Most robots need lots of it, and it has to come from somewhere, with that somewhere usually being a battery because there simply aren’t many other good options. Batteries, however, are famous for having poor energy density, and the smaller your robot is, the more of a problem this becomes. And the issue goes beyond the battery itself, carrying over into all the other components it takes to turn the stored energy into useful work—which, again, is a particular problem for small-scale robots.

In a paper published this week in Science Robotics, researchers from the University of Southern California, in Los Angeles, demonstrate RoBeetle, an 88-milligram four-legged robot that runs entirely on methanol, a power-dense liquid fuel. Without any electronics at all, it uses an exceptionally clever bit of mechanical autonomy to convert methanol vapor directly into forward motion, one millimeter-long step at a time.

It’s not entirely clear from the video how the robot actually works, so let’s go through how it’s put together, and then look at the actuation cycle.

Image: Science Robotics

RoBeetle (A) uses a methanol-based actuation mechanism (B). The robot’s body (C) includes the fuel tank subassembly (D), a tank lid, transmission, and sliding shutter (E), bottom side of the sliding shutter (F), nickel-titanium-platinum composite wire and leaf spring (G), and front legs and hind legs with bioinspired backward-oriented claws (H).

The body of RoBeetle is a boxy fuel tank that you can fill with methanol by poking a syringe through a fuel inlet hole. It’s a quadruped, more or less, with fixed hind legs and two front legs attached to a single transmission that moves them both at once in a sort of rocking forward and up followed by backward and down motion. The transmission is hooked up to a leaf spring that’s tensioned to always pull the legs backward, such that when the robot isn’t being actuated, the spring and transmission keep its front legs more or less vertical and allow the robot to stand. Those horns are primarily there to hold the leaf spring in place, but they’ve got little hooks that can carry stuff, too.

The actuator itself is a nickel-titanium (NiTi) shape-memory alloy (SMA), which is just a wire that gets longer when it heats up and then shrinks back down when it cools. SMAs are fairly common and used for all kinds of things, but what makes this particular SMA a little different is that it’s been messily coated with platinum. The “messily” part is important for a reason that we’ll get to in just a second.

One end of the SMA wire is attached to the middle of the leaf spring, while the other end runs above the back of the robot where it’s stapled to an anchor block on the robot’s rear end. With the SMA wire hooked up but not actuated (i.e., cold rather than warm), it’s short enough that the leaf spring gets pulled back, rocking the legs forward and up. The last component is embedded in the robot’s back, right along the spine and directly underneath the SMA actuator. It’s a sliding vent attached to the transmission, so that the vent is open when the SMA wire is cold and the leaf spring is pulled back, and closed when the SMA wire is warm and the leaf spring is relaxed. The way that the sliding vent is attached to the transmission is the really clever bit about this robot, because it means that the motion of the wire itself is used to modulate the flow of fuel through a purely mechanical system. Essentially, it’s an actuator and a sensor at the same time.

The actuation cycle that causes the robot to walk begins with a full fuel tank and a cold SMA wire. There’s tension on the leaf spring, pulling the transmission back and rocking the legs forward and upward. The transmission also pulls the sliding vent into the open position, allowing methanol vapor to escape up out of the fuel tank and into the air, where it wafts past the SMA wire that runs directly above the vent.

The platinum facilitates a reaction of the methanol (CH3OH) with oxygen in the air (combustion, although not the dramatic flaming and explosive kind) to generate a couple of water molecules and some carbon dioxide plus a bunch of heat, and this is where the messy platinum coating is important, because messy means lots of surface area for the platinum to interact with as much methanol as possible. In just a second or two, the temperature of the SMA wire skyrockets from 50 to 100 °C and it expands, allowing the leaf spring about 0.1 mm of slack. As the leaf spring relaxes, the transmission moves the legs backwards and downwards, and the robot pulls itself forward about 1.2 mm. At the same time, the transmission is closing off the sliding vent, cutting off the supply of methanol vapor. Without the vapor reacting with the platinum and generating heat, in about a second and a half, the SMA wire cools down. As it does, it shrinks, pulling on the leaf spring and starting the cycle over again. Top speed is 0.76 mm/s (0.05 body-lengths per second).
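Because the control loop is entirely mechanical, the gait boils down to a two-phase thermal cycle, which is easy to mock up from the figures above. In this toy model (a sketch of the described behavior, not the authors’ model), the heat-up time is our assumption within the “second or two” quoted:

```python
STEP_MM = 1.2  # forward travel per actuation cycle
HEAT_S = 1.5   # vent open: methanol burns on the platinum, wire expands (assumed)
COOL_S = 1.5   # vent shut: wire cools, contracts, and re-opens the vent

def walk(cycles):
    position_mm = 0.0
    for _ in range(cycles):
        position_mm += STEP_MM  # spring relaxes: legs sweep back, body advances
    elapsed_s = cycles * (HEAT_S + COOL_S)
    return position_mm, position_mm / elapsed_s

dist, speed = walk(100)
print(f"{dist:.0f} mm in {100 * (HEAT_S + COOL_S):.0f} s = {speed:.2f} mm/s")
```

With these assumed timings the model walks at 0.4 mm/s; hitting the measured top speed of 0.76 mm/s implies the real cycle can complete in about 1.6 seconds.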

An interesting environmental effect is that the speed of the robot can be enhanced by a gentle breeze. This is because air moving over the SMA wire cools it down a bit faster while also blowing away any residual methanol from around the vents, shutting down the reaction more completely. RoBeetle can carry more than its own body weight in fuel, and it takes approximately 155 minutes for a full tank of methanol to completely evaporate. It’s worth noting that despite the very high energy density of methanol, this is actually a stupendously inefficient way of powering a robot, with an estimated end-to-end efficiency of just 0.48 percent. Not 48 percent, mind you, but 0.48 percent, while in general, powering SMAs with electricity is much more efficient.

However, you have to look at the entire system that would be necessary to deliver that electricity, and for a robot as small as RoBeetle, the researchers say that it’s basically impossible. The lightest commercially available battery and power supply that would deliver enough juice to heat up an SMA actuator weighs about 800 mg, nearly 10 times the total weight of RoBeetle itself. From that perspective, RoBeetle’s efficiency is actually pretty good.

Image: A. Kitterman/Science Robotics; adapted from R.L.T./MIT

Comparison of various untethered microrobots and bioinspired soft robots that use different power and actuation strategies.

There are some other downsides to RoBeetle we should mention—it can only move forwards, not backwards, and it can’t steer. Its speed isn’t adjustable, and once it starts walking, it’ll walk until it either breaks or runs out of fuel. The researchers have some ideas about the speed, at least, pointing out that increasing the speed of fuel delivery by using pressurized liquid fuels like butane or propane would increase the actuator output frequency. And the frequency, amplitude, and efficiency of the SMAs themselves can be massively increased “by arranging multiple fiber-like thin artificial muscles in hierarchical configurations similar to those observed in sarcomere-based animal muscle,” making RoBeetle even more beetle-like.

As for sensing, RoBeetle’s 230-mg payload is enough to carry passive sensors, but getting those sensors to usefully interact with the robot itself to enable any kind of autonomy remains a challenge. Mechanical intelligence is certainly possible, though, and we can imagine RoBeetle adopting some of the same sorts of systems that have been proposed for the clockwork rover that JPL wants to use for Venus exploration. The researchers also mention how RoBeetle could potentially serve as a model for microbots capable of aerial locomotion, which is something we’d very much like to see.

“An 88-milligram insect-scale autonomous crawling robot driven by a catalytic artificial muscle,” by Xiufeng Yang, Longlong Chang, and Néstor O. Pérez-Arancibia from University of Southern California, in Los Angeles, was published in Science Robotics.

#437721 Video Friday: Child Robot Learning to ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

We first met Ibuki, Hiroshi Ishiguro’s latest humanoid robot, a couple of years ago. A recent video shows how Ishiguro and his team are teaching the robot to express its emotional state through gait and body posture while moving.

This paper presents a subjective evaluation of the emotions of a wheeled mobile humanoid robot expressing emotions during movement by replicating human gait-induced upper body motion. For this purpose, we equipped the robot with a vertical oscillation mechanism that generates such motion by focusing on the human center-of-mass trajectory. In the experiment, participants watched videos of the robot’s different emotional gait-induced upper body motions, and assessed the type of emotion shown and their confidence level in their answer.

[ Hiroshi Ishiguro Lab ] via [ RobotStart ]

ICYMI: This is a zinc-air battery made partly of Kevlar that can be used to support weight, not just add to it.

Just as biological fat reserves store energy in animals, a new rechargeable zinc battery integrates into the structure of a robot to provide much more energy, a team led by the University of Michigan has shown.

The new battery works by passing hydroxide ions between a zinc electrode and the air side through an electrolyte membrane. That membrane is partly a network of aramid nanofibers—the carbon-based fibers found in Kevlar vests—and a new water-based polymer gel. The gel helps shuttle the hydroxide ions between the electrodes. Made with cheap, abundant and largely nontoxic materials, the battery is more environmentally friendly than those currently in use. The gel and aramid nanofibers will not catch fire if the battery is damaged, unlike the flammable electrolyte in lithium-ion batteries. The aramid nanofibers could be upcycled from retired body armor.
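For reference, the hydroxide shuttle described above corresponds to the textbook zinc-air discharge half-reactions (standard chemistry, not taken from the paper):

```latex
\text{anode:}   \quad \mathrm{Zn} + 4\,\mathrm{OH^-} \rightarrow \mathrm{Zn(OH)_4^{2-}} + 2\,e^- \\
\text{cathode:} \quad \mathrm{O_2} + 2\,\mathrm{H_2O} + 4\,e^- \rightarrow 4\,\mathrm{OH^-}
```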

[ University of Michigan ]

In what they say is the first large-scale study of the interactions between sound and robotic action, researchers at CMU’s Robotics Institute found that sounds could help a robot differentiate between objects, such as a metal screwdriver and a metal wrench. Hearing also could help robots determine what type of action caused a sound and help them use sounds to predict the physical properties of new objects.

[ CMU ]

Captured on Aug. 11 during the second rehearsal of the OSIRIS-REx mission’s sample collection event, this series of images shows the SamCam imager’s field of view as the NASA spacecraft approaches asteroid Bennu’s surface. The rehearsal brought the spacecraft through the first three maneuvers of the sampling sequence to a point approximately 131 feet (40 meters) above the surface, after which the spacecraft performed a back-away burn.

These images were captured over a 13.5-minute period. The imaging sequence begins at approximately 420 feet (128 meters) above the surface – before the spacecraft executes the “Checkpoint” maneuver – and runs through to the “Matchpoint” maneuver, with the last image taken approximately 144 feet (44 meters) above the surface of Bennu.

[ NASA ]

The DARPA AlphaDogfight Trials Final Event took place yesterday; the livestream is like 5 hours long, but you can skip ahead to 4:39 ish to see the AI winner take on a human F-16 pilot in simulation.

Some things to keep in mind about the result: The AI had perfect situational knowledge while the human pilot had to use eyeballs, and in particular, the AI did very well at lining up its (virtual) gun with the human during fast passing maneuvers, which is the sort of thing that autonomous systems excel at but is not necessarily reflective of better strategy.

[ DARPA ]

Coming soon from Clearpath Robotics!

[ Clearpath ]

This video introduces Preferred Networks’ Hand type A, a tendon-driven robot gripper with a passively switchable underactuated surface.

[ Preferred Networks ]

CYBATHLON 2020 will take place on 13–14 November 2020 at the teams’ home bases. The teams will set up their own infrastructure for the competition and film their races. Instead of starting directly next to each other, the pilots will start individually and under the supervision of CYBATHLON officials. From Zurich, the competitions will be broadcast through a new platform in a unique live programme.

[ Cybathlon ]

In this project, we consider the task of autonomous car racing in the top-selling car racing game Gran Turismo Sport. Gran Turismo Sport is known for its detailed physics simulation of various cars and tracks. Our approach makes use of maximum-entropy deep reinforcement learning and a new reward design to train a sensorimotor policy to complete a given race track as fast as possible. We evaluate our approach in three different time trial settings with different cars and tracks. Our results show that the obtained controllers not only beat the built-in non-player character of Gran Turismo Sport, but also outperform the fastest known times in a dataset of personal best lap times of over 50,000 human drivers.
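For context, “maximum-entropy deep reinforcement learning” refers to methods that optimize expected return plus an entropy bonus that keeps the policy exploratory. In its standard general form (the paper’s exact formulation may differ):

```latex
J(\pi) = \mathbb{E}_{\tau \sim \pi}\left[ \sum_{t} r(s_t, a_t) + \alpha\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right]
```

Here the coefficient alpha trades off reward against the entropy of the policy, which in this setting maps game observations to steering, throttle, and braking commands.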

[ UZH ]

With the help of the pitasc software from Fraunhofer IPA, an assembly task is no longer programmed point by point but in relation to the workpiece. This allows pitasc to adapt the assembly process to new product variants simply through updated parameters.

[ Fraunhofer ]

In this video, a multi-material robot simulator is used to design a shape-changing robot, which is then transferred to physical hardware. The simulated and real robots can use shape change to switch between rolling gaits and inchworm gaits, to locomote in multiple environments.

[ Yale ]

This work presents a novel loco-manipulation control framework for the execution of complex tasks with kinodynamic constraints using mobile manipulators. As a representative example, we consider the handling and re-positioning of pallet jacks in unstructured environments. While these results reveal, as a proof-of-concept, the effectiveness of the proposed framework, they also demonstrate the high potential of mobile manipulators for relieving human workers from such repetitive and labor-intensive tasks. We believe that this extended functionality can contribute to increasing the usability of mobile manipulators in different application scenarios.

[ Paper ] via [ IIT ]

I don’t know why this dinosaur ice cream serving robot needs to blow smoke out of its nose, but I like it.

[ Connected Robotics ] via [ RobotStart ]

Guardian S remote visual inspection and surveillance robots make laying cable runs in confined or hard-to-reach spaces easy. With advanced maneuverability and the ability to climb vertical, ferrous surfaces, the robot reaches areas that are not always easily accessible.

[ Sarcos ]

Looks like the company that bought Anki is working on an add-on to let cars charge while they drive.

[ Digital Dream Labs ]

Chris Atkeson gives a brief talk for the CMU Robotics Institute orientation.

[ CMU RI ]

A UofT Robotics Seminar, featuring Russ Tedrake from MIT and TRI on “Feedback Control for Manipulation.”

Control theory has an answer for just about everything, but seems to fall short when it comes to closing a feedback loop using a camera, dealing with the dynamics of contact, and reasoning about robustness over the distribution of tasks one might find in the kitchen. Recent examples from RL and imitation learning demonstrate great promise, but don’t leverage the rigorous tools from systems theory. I’d like to discuss why, and describe some recent results of closing feedback loops from pixels for “category-level” robot manipulation.

[ UofT ]

#437709 iRobot Announces Major Software Update, ...

Since the release of the very first Roomba in 2002, iRobot’s long-term goal has been to deliver cleaner floors in a way that’s effortless and invisible. Which sounds pretty great, right? And arguably, iRobot has managed to do exactly this, with its most recent generation of robot vacuums that make their own maps and empty their own dustbins. For those of us who trust our robots, this is awesome, but iRobot has gradually been realizing that many Roomba users either don’t want this level of autonomy, or aren’t ready for it.

Today, iRobot is announcing a major new update to its app that represents a significant shift of its overall approach to home robot autonomy. Humans are being brought back into the loop through software that tries to learn when, where, and how you clean so that your Roomba can adapt itself to your life rather than the other way around.

To understand why this is such a shift for iRobot, let’s take a very brief look back at how the Roomba interface has evolved over the last couple of decades. The first generation of Roomba had three buttons on it that allowed (or required) the user to select whether the room being vacuumed was small or medium or large in size. iRobot ditched that system one generation later, replacing the room size buttons with one single “clean” button. Programmable scheduling meant that users no longer needed to push any buttons at all, and with Roombas able to find their way back to their docking stations, all you needed to do was empty the dustbin. And with the most recent few generations (the S and i series), the dustbin emptying is also done for you, reducing direct interaction with the robot to once a month or less.

Image: iRobot

iRobot CEO Colin Angle believes that working toward more intelligent human-robot collaboration is “the brave new frontier” of AI. “This whole journey has been earning the right to take this next step, because a robot can’t be responsive if it’s incompetent,” he says. “But thinking that autonomy was the destination was where I was just completely wrong.”

The point that the top-end Roombas are at now reflects a goal that iRobot has been working toward since 2002: With autonomy, scheduling, and the clean base to empty the bin, you can set up your Roomba to vacuum when you’re not home, giving you cleaner floors every single day without you even being aware that the Roomba is hard at work while you’re out. It’s not just hands-off, it’s brain-off. No noise, no fuss, just things being cleaner thanks to the efforts of a robot that does its best to be invisible to you. Personally, I’ve been completely sold on this idea for home robots, and iRobot CEO Colin Angle was as well.

“I probably told you that the perfect Roomba is the Roomba that you never see, you never touch, you just come home everyday and it’s done the right thing,” Angle told us. “But customers don’t want that—they want to be able to control what the robot does. We started to hear this a couple years ago, and it took a while before it sunk in, but it made sense.”

How? Angle compares it to having a human come into your house to clean, but you weren’t allowed to tell them where or when to do their job. Maybe after a while, you’ll build up the amount of trust necessary for that to work, but in the short term, it would likely be frustrating. And people get frustrated with their Roombas for this reason. “The desire to have more control over what the robot does kept coming up, and for me, it required a pretty big shift in my view of what intelligence we were trying to build. Autonomy is not intelligence. We need to do something more.”

That something more, Angle says, is a partnership as opposed to autonomy. It’s an acknowledgement that not everyone has the same level of trust in robots as the people who build them. It’s an understanding that people want to have a feeling of control over their homes, that they have set up the way that they want, and that they’ve been cleaning the way that they want, and a robot shouldn’t just come in and do its own thing.

“Until the robot proves that it knows enough about your home and about the way that you want your home cleaned,” Angle says, “you can’t move forward.” He adds that this is one of those things that seem obvious in retrospect, but even if they’d wanted to address the issue before, they didn’t have the technology to solve the problem. Now they do. “This whole journey has been earning the right to take this next step, because a robot can’t be responsive if it’s incompetent,” Angle says. “But thinking that autonomy was the destination was where I was just completely wrong.”

The previous iteration of the iRobot app (and Roombas themselves) are built around one big fat CLEAN button. The new approach instead tries to figure out in much more detail where the robot should clean, and when, using a mixture of autonomous technology and interaction with the user.

Where to Clean
Knowing where to clean depends on your Roomba having a detailed and accurate map of its environment. For several generations now, Roombas have been using visual simultaneous localization and mapping (VSLAM) to build persistent maps of your home. These maps have been used to tell the Roomba to clean in specific rooms, but that’s about it. With the new update, Roombas with cameras will be able to recognize some objects and features in your home, including chairs, tables, couches, and even countertops. The robots will use these features to identify where messes tend to happen so that they can focus on those areas—like around the dining room table or along the front of the couch.

We should take a minute here to clarify how the Roomba is using its camera. The original (primary?) purpose of the camera was for VSLAM, where the robot would take photos of your home, downsample them into QR-code-like patterns of light and dark, and then use those (with the assistance of other sensors) to navigate. Now the camera is also being used to take pictures of other stuff around your house to make that map more useful.
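Roughly what that downsampling step looks like (our illustration of the idea, not iRobot’s actual pipeline): shrink each frame to a coarse grid and threshold every cell to light or dark, producing a compact pattern that can be matched against patterns from earlier visits.

```python
import numpy as np

def to_pattern(gray, grid=(8, 8)):
    """Reduce a grayscale frame to a QR-code-like grid of light/dark cells."""
    gh, gw = grid
    h = (gray.shape[0] // gh) * gh  # crop to a multiple of the grid size
    w = (gray.shape[1] // gw) * gw
    cells = gray[:h, :w].reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    return cells > cells.mean()     # True = lighter than average

frame = np.random.rand(480, 640)    # stand-in for a camera frame
print(to_pattern(frame).astype(int))
```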

Photo: iRobot

The robots will now try to fit into the kinds of cleaning routines that many people already have established. For example, the app may suggest an “after dinner” routine that cleans just around the kitchen and dining room table.

This is done through machine learning using a library of images of common household objects from a floor perspective that iRobot had to develop from scratch. Angle clarified for us that this is all done via a neural net that runs on the robot, and that “no recognizable images are ever stored on the robot or kept, and no images ever leave the robot.” Worst case, if all the data iRobot has about your home gets somehow stolen, the hacker would only know that (for example) your dining room has a table in it and the approximate size and location of that table, because the map iRobot has of your place only stores symbolic representations rather than images.
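In other words, what persists is closer to a labeled floor plan than a photo album. A stored map entry might look something like this (a hypothetical structure for illustration; iRobot hasn’t published its format):

```python
symbolic_map = {
    "rooms": [
        {
            "name": "dining room",
            "objects": [
                {"label": "table",
                 "center_m": [2.1, 3.4],    # approximate location
                 "extent_m": [1.6, 0.9]}    # approximate footprint
            ],
            "dirt_hotspots": [[2.0, 3.0]],  # where messes tend to recur
        }
    ]
}
print(symbolic_map["rooms"][0]["objects"][0]["label"])  # no pixels anywhere
```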

Another useful new feature is intended to help manage the “evil Roomba places” (as Angle puts it) that every home has, the spots that cause Roombas to get stuck. If a place is evil enough that Roomba has to call you for help because it gave up completely, Roomba will now remember it, and suggest either that you make some changes or that it stop cleaning there, which seems reasonable.

When to Clean
It turns out that the primary cause of mission failure for Roombas is not that they get stuck or that they run out of battery—it’s user cancellation, usually because the robot is getting in the way or being noisy when you don’t want it to be. “If you kill a Roomba’s job because it annoys you,” points out Angle, “how is that robot being a good partner? I think it’s an epic fail.” Of course, it’s not the robot’s fault, because Roombas only clean when we tell them to, which Angle says is part of the problem. “People actually aren’t very good at making their own schedules—they tend to oversimplify, and not think through what their schedules are actually about, which leads to lots of [figurative] Roomba death.”

To help you figure out when the robot should actually be cleaning, the new app will look for patterns in when you ask the robot to clean, and then recommend a schedule based on those patterns. That might mean the robot cleans different areas at different times every day of the week. The app will also make event-based scheduling recommendations, integrated with other smart home devices. Would you prefer the Roomba to clean every time you leave the house? The app can integrate with your security system (or garage door, or any number of other things) and take care of that for you.
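Mechanically, that kind of suggestion can be as simple as bucketing your past manual clean requests by time slot and surfacing the ones that recur (our sketch of the idea; iRobot hasn’t described its actual method):

```python
from collections import Counter

# (weekday, hour) of each time the user manually started a clean
history = [("Mon", 18), ("Mon", 19), ("Tue", 18), ("Mon", 18),
           ("Sat", 10), ("Mon", 18), ("Sat", 10)]

counts = Counter(history)
suggested = [slot for slot, n in counts.items() if n >= 2]  # recurring slots
print("Suggested schedule:", suggested)  # [('Mon', 18), ('Sat', 10)]
```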

More generally, Roomba will now try to fit into the kinds of cleaning routines that many people already have established. For example, the app may suggest an “after dinner” routine that cleans just around the kitchen and dining room table. The app will also, to some extent, pay attention to the environment and season. It might suggest increasing your vacuuming frequency if pollen counts are especially high, or if it’s pet shedding season and you have a dog. Unfortunately, Roomba isn’t (yet?) capable of recognizing dogs on its own, so the app has to cheat a little bit by asking you some basic questions.

A Smarter App

Image: iRobot

The previous iteration of the iRobot app (and Roombas themselves) are built around one big fat CLEAN button. The new approach instead tries to figure out in much more detail where the robot should clean, and when, using a mixture of autonomous technology and interaction with the user.

The app update, which should be available starting today, is free. The scheduling and recommendations will work on every Roomba model, although for object recognition and anything related to mapping, you’ll need one of the more recent and fancier models with a camera. Future app updates will happen on a more aggressive schedule. Major app releases should happen every six months, with incremental updates happening even more frequently than that.

Angle also told us that overall, this change in direction also represents a substantial shift in resources for iRobot, and the company has pivoted two-thirds of its engineering organization to focus on software-based collaborative intelligence rather than hardware. “It’s not like we’re done doing hardware,” Angle assured us. “But we do think about hardware differently. We view our robots as platforms that have longer life cycles, and each platform will be able to support multiple generations of software. We’ve kind of decoupled robot intelligence from hardware, and that’s a change.”

Angle believes that working toward more intelligent collaboration between humans and robots is “the brave new frontier of artificial intelligence. I expect it to be the frontier for a reasonable amount of time to come,” he adds. “We have a lot of work to do to create the type of easy-to-use experience that consumer robots need.”