Tag Archives: Deep learning

#437905 New Deep Learning Method Helps Robots ...

One of the biggest things standing in the way of the robot revolution is robots’ inability to adapt. That may be about to change, though, thanks to a new approach that blends pre-learned skills on the fly to tackle new challenges.

Put a robot in a tightly-controlled environment and it can quickly surpass human performance at complex tasks, from building cars to playing table tennis. But throw these machines a curve ball and they’re in trouble—just check out this compilation of some of the world’s most advanced robots coming unstuck in the face of notoriously challenging obstacles like sand, steps, and doorways.

The reason robots tend to be so fragile is that the algorithms that control them are often manually designed. If they encounter a situation the designer didn’t think of, which is almost inevitable in the chaotic real world, then they simply don’t have the tools to react.

Rapid advances in AI have provided a potential workaround by letting robots learn how to carry out tasks instead of relying on hand-coded instructions. A particularly promising approach is deep reinforcement learning, where the robot interacts with its environment through a process of trial-and-error and is rewarded for carrying out the correct actions. Over many repetitions it can use this feedback to learn how to accomplish the task at hand.
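
The mechanics of that feedback loop are easy to sketch. Below is a minimal, self-contained Python illustration of trial-and-error learning driven by a reward signal; the dummy environment, the single-parameter policy, and the update rule are invented stand-ins for illustration, not anything from the research described here.

```python
import random

class DummyEnv:
    """Stand-in environment; a real robot or physics simulator would go here."""
    def reset(self):
        return 0.0                                # initial observation

    def step(self, action):
        reward = 1.0 if action == 1 else 0.0      # reward the "correct" action
        return 0.0, reward, True                  # next observation, reward, done

def policy(observation, p):
    """Trivial one-parameter stochastic policy; deep RL would use a neural network."""
    return 1 if random.random() < p else 0

env, p = DummyEnv(), 0.5
for episode in range(1000):                       # many repetitions of trial and error
    obs, done, total_reward = env.reset(), False, 0.0
    while not done:
        action = policy(obs, p)
        obs, reward, done = env.step(action)
        total_reward += reward
    # Crude update: nudge the policy toward behavior that earned reward.
    p = min(1.0, p + 0.001 * total_reward)

print(p)   # the policy has drifted toward the rewarded action
```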

But the approach requires huge amounts of data to solve even simple tasks. And most of the things we would want a robot to do are actually composed of many smaller tasks—for instance, delivering a parcel involves learning how to pick an object up, how to walk, how to navigate, and how to pass an object to someone else, among other things.

Training all these sub-tasks simultaneously is hugely complex and far beyond the capabilities of most current AI systems, so most experiments to date have focused on narrow skills. Some have tried to train AI on multiple skills separately and then use an overarching system to flip between these expert sub-systems, but these approaches still can’t adapt to completely new challenges.

Building off this research, though, scientists have now created a new AI system that can blend together expert sub-systems specialized for a specific task. In a paper in Science Robotics, they explain how this allows a four-legged robot to improvise new skills and adapt to unfamiliar challenges in real time.

The technique, dubbed multi-expert learning architecture (MELA), relies on a two-stage training approach. First the researchers used a computer simulation to train two neural networks to carry out two separate tasks: trotting and recovering from a fall.

They then used the models these two networks learned as seeds for eight other neural networks specialized for more specific motor skills, like rolling over or turning left or right. The eight “expert networks” were trained simultaneously along with a “gating network,” which learns how to combine these experts to solve challenges.

Because the gating network synthesizes the expert networks rather than simply switching between them, MELA is able to come up with blends of different experts that allow it to tackle problems none could solve alone.
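
As a rough illustration of what blending experts means in code, here is a minimal Python/NumPy sketch in which a gating function produces one weight per expert and fuses the experts into a single composite controller at each step. The sizes, the linear "experts," and the softmax gating are invented for illustration and are not the MELA networks themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_EXPERTS, STATE_DIM, ACTION_DIM = 8, 12, 4    # illustrative sizes only

# Each "expert" is reduced to a single weight matrix for brevity;
# in the paper the experts are full neural networks trained in simulation.
experts = [rng.standard_normal((STATE_DIM, ACTION_DIM)) for _ in range(NUM_EXPERTS)]
gating = rng.standard_normal((STATE_DIM, NUM_EXPERTS))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def act(state):
    """Fuse the experts into one composite controller instead of picking just one."""
    mix = softmax(state @ gating)                    # one blending weight per expert
    fused = sum(w * W for w, W in zip(mix, experts)) # weighted combination of experts
    return state @ fused                             # motor commands from the blend

print(act(rng.standard_normal(STATE_DIM)))
```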

The authors liken the approach to training people in how to play soccer. You start out by getting them to do drills on individual skills like dribbling, passing, or shooting. Once they’ve mastered those, they can then intelligently combine them to deal with more dynamic situations in a real game.

After training the algorithm in simulation, the researchers uploaded it to a four-legged robot and subjected it to a battery of tests, both indoors and outdoors. The robot was able to adapt quickly to tricky surfaces like gravel or pebbles, and could quickly recover from being repeatedly pushed over before continuing on its way.

There’s still some way to go before the approach could be adapted for real-world commercially useful robots. For a start, MELA currently isn’t able to integrate visual perception or a sense of touch; it simply relies on feedback from the robot’s joints to tell it what’s going on around it. The more tasks you ask the robot to master, the more complex and time-consuming the training will get.

Nonetheless, the new approach points towards a promising way to make multi-skilled robots become more than the sum of their parts. As much fun as it is, it seems like laughing at compilations of clumsy robots may soon be a thing of the past.

Image Credit: Yang et al., Science Robotics

Posted in Human Robots

#437892 This Week’s Awesome Tech Stories From ...

ENVIRONMENT
Human-Made Stuff Now Outweighs All Life on Earth
Stephanie Pappas | Scientific American
“Humanity has reached a new milestone in its dominance of the planet: human-made objects may now outweigh all of the living beings on Earth. Roads, houses, shopping malls, fishing vessels, printer paper, coffee mugs, smartphones and all the other infrastructure of daily life now weigh in at approximately 1.1 trillion metric tons—equal to the combined dry weight of all plants, animals, fungi, bacteria, archaea and protists on the planet.”

SPACE
So, It Turns Out SpaceX Is Pretty Good at Rocketing
Eric Berger | Ars Technica
“As the Sun sank toward the South Texas horizon, a fantastical-looking spaceship rose into the reddening sky. It was, in a word, epic. …This was one heck of a test-flight that addressed a number of unknowns about Starship, which is the upper stage of SpaceX’s new launch system and may one day land humans on the Moon, Mars, and beyond.”

ARTIFICIAL INTELLIGENCE
Tiny Four-Bit Computers Are All You Need to Train AI
Karen Hao | MIT Technology Review
“The work…could increase the speed and cut the energy costs needed to train deep learning by more than sevenfold. It could also make training powerful AI models possible on smartphones and other small devices, which would improve privacy by helping to keep personal data on a local device. And it would make the process more accessible to researchers outside big, resource-rich tech companies.”

ENERGY
Did QuantumScape Just Solve a 40-Year-Old Battery Problem?
Daniel Oberhaus | Wired
“[The properties of solid state batteries] would send…energy density through the roof, enable ultra-fast charging, and would eliminate the risk of battery fires. But for the past 40 years, no one has been able to make a solid-state battery that delivers on this promise—until earlier this year, when a secretive startup called QuantumScape claimed to have solved the problem. Now it has the data to prove it.”

ROBOTICS
Hyundai Buys Boston Dynamics for Nearly $1 Billion. Now What?
Evan Ackerman | IEEE Spectrum
“I hope that Boston Dynamics is unique enough that the kinds of rules that normally apply to robotics companies (or companies in general) can be set aside, at least somewhat, but I also worry that what made Boston Dynamics great was the explicit funding for the kinds of radical ideas that eventually resulted in robots like Atlas and Spot. Can Hyundai continue giving Boston Dynamics the support and freedom that they need to keep doing the kinds of things that have made them legendary? I certainly hope so.”

BIOTECH
CRISPR and Another Genetic Strategy Fix Cell Defects in Two Common Blood Disorders
Jocelyn Kaiser | Science
“It is a double milestone: new evidence that cures are possible for many people born with sickle cell disease and another serious blood disorder, beta-thalassemia, and a first for the genome editor CRISPR. Today, in The New England Journal of Medicine (NEJM) and tomorrow at the American Society of Hematology (ASH) meeting, teams report that two strategies for directly fixing malfunctioning blood cells have dramatically improved the health of a handful of people with these genetic diseases.”

ETHICS
The Dark Side of Big Tech’s Funding for AI Research
Tom Simonite | Wired
“Timnit Gebru’s exit from Google is a powerful reminder of how thoroughly companies dominate the field, with the biggest computers and the most resources. …[Meredith] Whittaker of AI Now says properly probing the societal effects of AI is fundamentally incompatible with corporate labs. ‘That kind of research that looks at the power and politics of AI is and must be inherently adversarial to the firms that are profiting from this technology.’”

Image credit: Karsten Winegeart / Unsplash

Posted in Human Robots

#437872 AlphaFold Proves That AI Can Crack ...

Any successful implementation of artificial intelligence hinges on asking the right questions in the right way. That’s what the British AI company DeepMind (a subsidiary of Alphabet) accomplished when it used its neural network to tackle one of biology’s grand challenges, the protein-folding problem. Its neural net, known as AlphaFold, was able to predict the 3D structures of proteins based on their amino acid sequences with unprecedented accuracy.

AlphaFold’s predictions at the 14th Critical Assessment of protein Structure Prediction (CASP14) were accurate to within an atom’s width for most of the proteins. The competition consisted of blindly predicting the structure of proteins that have only recently been experimentally determined—with some still awaiting determination.

Called the building blocks of life, proteins consist of 20 different amino acids in various combinations and sequences. A protein's biological function is tied to its 3D structure. Therefore, knowledge of the final folded shape is essential to understanding how a specific protein works—how it interacts with other biomolecules, how it may be controlled or modified, and so on. “Being able to predict structure from sequence is the first real step towards protein design,” says Janet M. Thornton, director emeritus of the European Bioinformatics Institute. It also has enormous benefits in understanding disease-causing pathogens. For instance, at the moment the structures of only about 18 of the 26 proteins in the SARS-CoV-2 virus are known.

Predicting a protein’s 3D structure is a computational nightmare. In 1969 Cyrus Levinthal estimated that there are 10^300 possible conformational combinations for a single protein, which would take longer than the age of the known universe to evaluate by brute force calculation. AlphaFold can do it in a few days.

As scientific breakthroughs go, AlphaFold’s discovery is right up there with the likes of James Watson and Francis Crick’s DNA double-helix model, or, more recently, Jennifer Doudna and Emmanuelle Charpentier’s CRISPR-Cas9 genome editing technique.

How did a team that just a few years ago was teaching an AI to master a 3,000-year-old game end up training one to answer a question plaguing biologists for five decades? That, says Briana Brownell, data scientist and founder of the AI company PureStrategy, is the beauty of artificial intelligence: The same kind of algorithm can be used for very different things.

“Whenever you have a problem that you want to solve with AI,” she says, “you need to figure out how to get the right data into the model—and then the right sort of output that you can translate back into the real world.”

DeepMind’s success, she says, wasn’t so much a function of picking the right neural nets but rather “how they set up the problem in a sophisticated enough way that the neural network-based modeling [could] actually answer the question.”

AlphaFold showed promise in 2018, when DeepMind introduced a previous iteration of its AI at CASP13, achieving the highest accuracy among all participants. The team had trained its neural network to model target shapes from scratch, without using previously solved proteins as templates.

For 2020 they deployed new deep learning architectures into the AI, using an attention-based model that was trained end-to-end. Attention in a deep learning network refers to a component that manages and quantifies the interdependence between the input and output elements, as well as between the input elements themselves.
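
The exact architecture isn't spelled out here, but the attention operation being described is easy to illustrate. Below is a generic scaled dot-product self-attention sketch in NumPy with toy dimensions; it is not AlphaFold's actual model.

```python
import numpy as np

def self_attention(Q, K, V):
    """Each output row is a weighted mix of V's rows; the weights quantify how
    strongly each element of the input depends on every other element."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise interdependence
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the sequence
    return weights @ V

rng = np.random.default_rng(0)
seq_len, dim = 5, 8                                   # toy sizes
x = rng.standard_normal((seq_len, dim))               # e.g. per-residue embeddings
print(self_attention(x, x, x).shape)                  # (5, 8)
```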

The system was trained on public datasets of the approximately 170,000 known experimental protein structures in addition to databases with protein sequences of unknown structures.

“If you look at the difference between their entry two years ago and this one, the structure of the AI system was different,” says Brownell. “This time, they’ve figured out how to translate the real world into data … [and] created an output that could be translated back into the real world.”

Like any AI system, AlphaFold may need to contend with biases in the training data. For instance, Brownell says, AlphaFold is using available information about protein structure that has been measured in other ways. However, there are also many proteins with as yet unknown 3D structures. Therefore, she says, a bias could conceivably creep in toward the kinds of proteins for which we have more structural data.

Thornton says it’s difficult to predict how long it will take for AlphaFold’s breakthrough to translate into real-world applications.

“We only have experimental structures for about 10 per cent of the 20,000 proteins [in] the human body,” she says. “A powerful AI model could unveil the structures of the other 90 per cent.”

Apart from increasing our understanding of human biology and health, she adds, “it is the first real step toward… building proteins that fulfill a specific function. From protein therapeutics to biofuels or enzymes that eat plastic, the possibilities are endless.”

Posted in Human Robots

#437809 Q&A: The Masterminds Behind ...

Illustration: iStockphoto

Getting a car to drive itself is undoubtedly the most ambitious commercial application of artificial intelligence (AI). The research project was kicked into life by the 2004 DARPA Grand Challenge and then taken up as a business proposition, first by Alphabet, and later by the big automakers.

The industry-wide effort vacuumed up many of the world’s best roboticists and set rival companies on a multibillion-dollar acquisitions spree. It also launched a cycle of hype that paraded ever more ambitious deadlines—the most famous of which, made by Alphabet’s Sergey Brin in 2012, was that full self-driving technology would be ready by 2017. Those deadlines have all been missed.

Much of the exhilaration was inspired by the seeming miracles that a new kind of AI—deep learning—was achieving in playing games, recognizing faces, and transcribing speech. Deep learning excels at tasks involving pattern recognition—a particular challenge for older, rule-based AI techniques. However, it now seems that deep learning will not soon master the other intellectual challenges of driving, such as anticipating what human beings might do.

Among the roboticists who have been involved from the start are Gill Pratt, the chief executive officer of Toyota Research Institute (TRI), formerly a program manager at the Defense Advanced Research Projects Agency (DARPA); and Wolfram Burgard, vice president of automated driving technology for TRI and president of the IEEE Robotics and Automation Society. The duo spoke with IEEE Spectrum’s Philip Ross at TRI’s offices in Palo Alto, Calif.

This interview has been condensed and edited for clarity.

IEEE Spectrum: How does AI handle the various parts of the self-driving problem?

Photo: Toyota

Gill Pratt

Gill Pratt: There are three different systems that you need in a self-driving car: It starts with perception, then goes to prediction, and then goes to planning.

The one that by far is the most problematic is prediction. It’s not prediction of other automated cars, because if all cars were automated, this problem would be much simpler. How do you predict what a human being is going to do? That’s difficult for deep learning to learn right now.
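
Pratt's decomposition maps onto a simple software pipeline. The sketch below only illustrates how the three stages feed one another, with perception stubbed out and a toy planner; it is not Toyota's system, and all names and thresholds are invented.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    position: tuple      # where perception says the object is now (x, y), in meters
    velocity: tuple      # estimated motion (vx, vy), in meters per second

def perceive(sensor_frame):
    """Stage 1: turn raw sensor data into tracked objects (stubbed out here)."""
    return [TrackedObject(position=(10.0, 2.0), velocity=(-2.0, 0.0))]

def predict(objects, horizon=3.0):
    """Stage 2, the hard part: guess where each object will be a few seconds from now."""
    return [(o.position[0] + o.velocity[0] * horizon,
             o.position[1] + o.velocity[1] * horizon) for o in objects]

def plan(predicted_positions):
    """Stage 3: choose an action that respects the predicted positions."""
    too_close = any(x < 5.0 for x, _ in predicted_positions)
    return "slow_down" if too_close else "proceed"

print(plan(predict(perceive(sensor_frame=None))))   # -> slow_down
```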

Spectrum: Can you offset the weakness in prediction with stupendous perception?

Photo: Toyota Research Institute

Wolfram Burgard

Wolfram Burgard: Yes, that is what car companies basically do. A camera provides semantics, lidar provides distance, radar provides velocities. But all this comes with problems, because sometimes you look at the world from different positions—that’s called parallax. Sometimes you don’t know which range estimate that pixel belongs to. That might make the decision complicated as to whether that is a person painted onto the side of a truck or whether this is an actual person.

With deep learning there is this promise that if you throw enough data at these networks, it’s going to work—finally. But it turns out that the amount of data that you need for self-driving cars is far larger than we expected.

Spectrum: When do deep learning’s limitations become apparent?

Pratt: The way to think about deep learning is that it’s really high-performance pattern matching. You have input and output as training pairs; you say this image should lead to that result; and you just do that again and again, for hundreds of thousands, millions of times.
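
That description of training pairs is, at bottom, ordinary supervised learning. Here is a toy, hedged illustration with a tiny logistic "network" standing in for a deep one; the data and update rule are invented for the example.

```python
import numpy as np

# Toy "training pairs": inputs x and the outputs we say they should lead to.
x = np.linspace(-1.0, 1.0, 200)
y = (x > 0.3).astype(float)                          # e.g. "this input should lead to 'stop'"

w, b, lr = 0.0, 0.0, 0.5
for step in range(100_000):                          # do it again and again
    pred = 1.0 / (1.0 + np.exp(-(w * x + b)))        # a tiny logistic "network"
    w -= lr * np.mean((pred - y) * x)                # nudge the parameters toward
    b -= lr * np.mean(pred - y)                      # matching the training pairs

print(w, b)                                          # the fitted pattern matcher
```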

Here’s the logical fallacy that I think most people have fallen prey to with deep learning. A lot of what we do with our brains can be thought of as pattern matching: “Oh, I see this stop sign, so I should stop.” But it doesn’t mean all of intelligence can be done through pattern matching.

“I asked myself, if all of those cars had automated drive, how good would they have to be to tolerate the number of crashes that would still occur?”
—Gill Pratt, Toyota Research Institute

For instance, when I’m driving and I see a mother holding the hand of a child on a corner and trying to cross the street, I am pretty sure she’s not going to cross at a red light and jaywalk. I know from my experience being a human being that mothers and children don’t act that way. On the other hand, say there are two teenagers—with blue hair, skateboards, and a disaffected look. Are they going to jaywalk? I look at that, you look at that, and instantly the probability in your mind that they’ll jaywalk is much higher than for the mother holding the hand of the child. It’s not that you’ve seen 100,000 cases of young kids—it’s that you understand what it is to be either a teenager or a mother holding a child’s hand.

You can try to fake that kind of intelligence. If you specifically train a neural network on data like that, you could pattern-match that. But you’d have to know to do it.

Spectrum: So you’re saying that when you substitute pattern recognition for reasoning, the marginal return on the investment falls off pretty fast?

Pratt: That’s absolutely right. Unfortunately, we don’t have the ability to make an AI that thinks yet, so we don’t know what to do. We keep trying to use the deep-learning hammer to hammer more nails—we say, well, let’s just pour more data in, and more data.

Spectrum: Couldn’t you train the deep-learning system to recognize teenagers and to assign the category a high propensity for jaywalking?

Burgard: People have been doing that. But it turns out that these heuristics you come up with are extremely hard to tweak. Also, sometimes the heuristics are contradictory, which makes it extremely hard to design these expert systems based on rules. This is where the strength of the deep-learning methods lies, because somehow they encode a way to see a pattern where, for example, here’s a feature and over there is another feature; it’s about the sheer number of parameters you have available.

Our separation of the components of a self-driving AI eases the development and even the learning of the AI systems. Some companies even think about using deep learning to do the job fully, from end to end, not having any structure at all—basically, directly mapping perceptions to actions.

Pratt: There are companies that have tried it; Nvidia certainly tried it. In general, it’s been found not to work very well. So people divide the problem into blocks, where we understand what each block does, and we try to make each block work well. Some of the blocks end up more like the expert system we talked about, where we actually code things, and other blocks end up more like machine learning.

Spectrum: So, what’s next—what new technique is in the offing?

Pratt: If I knew the answer, we’d do it. [Laughter]

Spectrum: You said that if all cars on the road were automated, the problem would be easy. Why not “geofence” the heck out of the self-driving problem, and have areas where only self-driving cars are allowed?

Pratt: That means putting in constraints on the operational design domain. This includes the geography—where the car should be automated; it includes the weather, it includes the level of traffic, it includes speed. If the car is going slow enough to avoid colliding without risking a rear-end collision, that makes the problem much easier. Street trolleys operate with traffic still in some parts of the world, and that seems to work out just fine. People learn that this vehicle may stop at unexpected times. My suspicion is, that is where we’ll see Level 4 autonomy in cities. It’s going to be in the lower speeds.

“We are now in the age of deep learning, and we don’t know what will come after.”
—Wolfram Burgard, Toyota Research Institute

That’s a sweet spot in the operational design domain, without a doubt. There’s another one at high speed on a highway, because access to highways is so limited. But unfortunately there is still the occasional debris that suddenly crosses the road, and the weather gets bad. The classic example is when somebody irresponsibly ties a mattress to the top of a car and it falls off; what are you going to do? And the answer is that terrible things happen—even for humans.

Spectrum: Learning by doing worked for the first cars, the first planes, the first steam boilers, and even the first nuclear reactors. We ran risks then; why not now?

Pratt: It has to do with the times. During the era where cars took off, all kinds of accidents happened, women died in childbirth, all sorts of diseases ran rampant; the expected characteristic of life was that bad things happened. Expectations have changed. Now the chance of dying in some freak accident is quite low because of all the learning that’s gone on, the OSHA [Occupational Safety and Health Administration] rules, UL code for electrical appliances, all the building standards, medicine.

Furthermore—and we think this is very important—we believe that empathy for a human being at the wheel is a significant factor in public acceptance when there is a crash. We don’t know this for sure—it’s a speculation on our part. I’ve driven, I’ve had close calls; that could have been me that made that mistake and had that wreck. I think people are more tolerant when somebody else makes mistakes, and there’s an awful crash. In the case of an automated car, we worry that that empathy won’t be there.

Photo: Toyota

Toyota is using this Platform 4 automated driving test vehicle, based on the Lexus LS, to develop Level-4 self-driving capabilities for its “Chauffeur” project.

Spectrum: Toyota is building a system called Guardian to back up the driver, and a more futuristic system called Chauffeur, to replace the driver. How can Chauffeur ever succeed? It has to be better than a human plus Guardian!

Pratt: In the discussions we’ve had with others in this field, we’ve talked about that a lot. What is the standard? Is it a person in a basic car? Or is it a person with a car that has active safety systems in it? And what will people think is good enough?

These systems will never be perfect—there will always be some accidents, and no matter how hard we try there will still be occasions where there will be some fatalities. At what threshold are people willing to say that’s okay?

Spectrum: You were among the first top researchers to warn against hyping self-driving technology. What did you see that so many other players did not?

Pratt: First, in my own case, during my time at DARPA I worked on robotics, not cars. So I was somewhat of an outsider. I was looking at it from a fresh perspective, and that helps a lot.

Second, [when I joined Toyota in 2015] I was joining a company that is very careful—even though we have made some giant leaps—with the Prius hybrid drive system as an example. Even so, in general, the philosophy at Toyota is kaizen—making the cars incrementally better every single day. That care meant that I was tasked with thinking very deeply about this thing before making prognostications.

And the final part: It was a new job for me. The first night after I signed the contract I felt this incredible responsibility. I couldn’t sleep that whole night, so I started to multiply out the numbers, all using a factor of 10. How many cars do we have on the road? Cars on average last 10 years, though ours last 20, but let’s call it 10. They travel on the order of 10,000 miles per year. Multiply all that out and you get 10 to the 10th miles per year for our fleet on Planet Earth, a really big number. I asked myself, if all of those cars had automated drive, how good would they have to be to tolerate the number of crashes that would still occur? And the answer was so incredibly good that I knew it would take a long time. That was five years ago.

Burgard: We are now in the age of deep learning, and we don’t know what will come after. We are still making progress with existing techniques, and they look very promising. But the gradient is not as steep as it was a few years ago.

Pratt: There isn’t anything that’s telling us that it can’t be done; I should be very clear on that. Just because we don’t know how to do it doesn’t mean it can’t be done.

Posted in Human Robots

#437783 Ex-Googler’s Startup Comes Out of ...

Over the last 10 years, the PR2 has helped roboticists make an enormous amount of progress in mobile manipulation over a relatively short time. I mean, it’s been a decade already, but still—robots are hard, and giving a bunch of smart people access to a capable platform where they didn’t have to worry about hardware and could instead focus on doing interesting and useful things helped to establish a precedent for robotics research going forward.

Unfortunately, not everyone can afford an enormous US $400,000 robot, and even if they could, PR2s are getting very close to the end of their lives. There are other mobile manipulators out there taking the place of the PR2, but so far, size and cost have largely restricted them to research labs. Lots of good research is being done, but it’s getting to the point where folks want to take the next step: making mobile manipulators real-world useful.

Today, a company called Hello Robot is announcing a new mobile manipulator called the Stretch RE1. With offices in the San Francisco Bay Area and in Atlanta, Ga., Hello Robot is led by Aaron Edsinger and Charlie Kemp, and by combining decades of experience in industry and academia they’ve managed to come up with a robot that’s small, lightweight, capable, and affordable, all at the same time. For now, it’s a research platform, but eventually, its creators hope that it will be able to come into our homes and take care of us when we need it to.

A fresh look at mobile manipulators
To understand the concept behind Stretch, it’s worth taking a brief look back at what Edsinger and Kemp have been up to for the past 10 years. Edsinger co-founded Meka Robotics in 2007, which built expensive, high performance humanoid arms, torsos, and heads for the research market. Meka was notable for being the first robotics company (as far as we know) to sell robot arms that used series elastic actuators, and the company worked extensively with Georgia Tech researchers. In 2011, Edsinger was one of the co-founders of Redwood Robotics (along with folks from SRI and Willow Garage), which was going to develop some kind of secret and amazing new robot arm before Google swallowed it in late 2013. At the same time, Google also acquired Meka and a bunch of other robotics companies, and Edsinger ended up at Google as one of the directors of its robotics program, until he left to co-found Hello Robot in 2017.

Meanwhile, since 2007 Kemp has been a robotics professor at Georgia Tech, where he runs the Healthcare Robotics Lab. Kemp’s lab was one of the 11 PR2 beta sites, giving him early experience with a ginormous mobile manipulator. Much of the research that Kemp has spent the last decade on involves robots providing assistance to untrained users, often through direct physical contact, and frequently either in their own homes or in a home environment. We should mention that the Georgia Tech PR2 is still going, most recently doing some clever material classification work in a paper for IROS later this year.

Photo: Hello Robot

Hello Robot co-founder and CEO Aaron Edsinger says that, although Stretch is currently a research platform, he hopes to see the robot deployed in home environments, adding that the “impact we want to have is through robots that are helpful to people in society.”

So with all that in mind, where’d Hello Robot come from? As it turns out, both Edsinger and Kemp were in Rodney Brooks’ group at MIT, so it’s perhaps not surprising that they share some of the same philosophies about what robots should be and what they should be used for. After collaborating on a variety of projects over the years, in 2017 Edsinger was thinking about his next step after Google when Kemp stopped by to show off some video of a new robot prototype that he’d been working on—the prototype for Stretch. “As soon as I saw it, I knew that was exactly the kind of thing I wanted to be working on,” Edsinger told us. “I’d become frustrated with the complexity of the robots being built to do manipulation in home environments and around people, and it solved a lot of problems in an elegant way.”

For Kemp, Stretch is an attempt to get everything he’s been teaching his robots out of his lab at Georgia Tech and into the world where it can actually be helpful to people. “Right from the beginning, we were trying to take our robots out to real homes and interact with real people,” says Kemp. Georgia Tech’s PR2, for example, worked extensively with Henry and Jane Evans, helping Henry (a quadriplegic) regain some of the bodily autonomy he had lost. With the assistance of the PR2, Henry was able to keep himself comfortable for hours without needing a human caregiver to be constantly with him. “I felt like I was making a commitment in some ways to some of the people I was working with,” Kemp told us. “But 10 years later, I was like, where are these things? I found that incredibly frustrating. Stretch is an effort to try to push things forward.”

A robot you can put in the backseat of a car
One way to put Stretch in context is to think of it almost as a reaction to the kitchen sink philosophy of the PR2. Where the PR2 was designed to be all the robot anyone could ever need (plus plenty of robot that nobody really needed) embodied in a piece of hardware that weighs 225 kilograms and cost nearly half a million dollars, Stretch is completely focused on being just the robot that is actually necessary in a form factor that’s both much smaller and affordable. The entire robot weighs a mere 23 kg in a footprint that’s just a 34 cm square. As you can see from the video, it’s small enough (and safe enough) that it can be moved by a child. The cost? At $17,950 apiece—or a bit less if you buy a bunch at once—Stretch costs a fraction of what other mobile manipulators sell for.

It might not seem like size or weight should be that big of an issue, but it very much is, explains Maya Cakmak, a robotics professor at the University of Washington, in Seattle. Cakmak worked with PR2 and Henry Evans when she was at Willow Garage, and currently has access to both a PR2 and a Fetch research robot. “When I think about my long term research vision, I want to deploy service robots in real homes,” Cakmak told us. Unfortunately, it’s the robots themselves that have been preventing her from doing this—both the Fetch and the PR2 are large enough that moving them anywhere requires a truck and a lift, which also limits the home that they can be used in. “For me, I felt immediately that Stretch is very different, and it makes a lot of sense,” she says. “It’s safe and lightweight, you can probably put it in the backseat of a car.” For Cakmak, Stretch’s size is the difference between being able to easily take a robot to the places she wants to do research in, and not. And cost is a factor as well, since a cheaper robot means more access for her students. “I got my refurbished PR2 for $180,000,” Cakmak says. “For that, with Stretch I could have 10!”

“I felt immediately that Stretch is very different. It’s safe and lightweight, you can probably put it in the backseat of a car. I got my refurbished PR2 for $180,000. For that, with Stretch I could have 10!”
—Maya Cakmak, University of Washington

Of course, a portable robot doesn’t do you any good if the robot itself isn’t sophisticated enough to do what you need it to do. Stretch is certainly a compromise in functionality in the interest of small size and low cost, but it’s a compromise that’s been carefully thought out, based on the experience that Edsinger has building robots and the experience that Kemp has operating robots in homes. For example, most mobile manipulators are essentially multi-degrees-of-freedom arms on mobile bases. Stretch instead leverages its wheeled base to move its arm in the horizontal plane, which (most of the time) works just as well as an extra DoF or two on the arm while saving substantially on weight and cost. Similarly, Stretch relies almost entirely on one sensor, an Intel RealSense D435i on a pan-tilt head that gives it a huge range of motion. The RealSense serves as a navigation camera, manipulation camera, a 3D mapping system, and more. It’s not going to be quite as good for a task that might involve fine manipulation, but most of the time it’s totally workable and you’re saving on cost and complexity.
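
One way to see the base-as-extra-degree-of-freedom idea is through simple kinematics: the gripper's position in the world can be composed from the base pose, the lift height, and the telescoping extension. The sketch below assumes the arm extends perpendicular to the base's heading and uses made-up numbers; it is an illustration of the design idea, not Hello Robot's actual kinematics code.

```python
import math

def gripper_position(base_x, base_y, base_theta, lift_z, arm_extension):
    """Compose the gripper's world position from the mobile base pose, the
    vertical lift, and the telescoping arm extension (meters and radians)."""
    gx = base_x + arm_extension * math.cos(base_theta + math.pi / 2)
    gy = base_y + arm_extension * math.sin(base_theta + math.pi / 2)
    return gx, gy, lift_z

# Driving or turning the base moves the gripper in the horizontal plane,
# doing the job an extra arm joint would otherwise do.
print(gripper_position(0.0, 0.0, 0.0, 0.8, 0.5))   # roughly (0.0, 0.5, 0.8)
```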

Stretch has been relentlessly optimized to be the absolutely minimum robot to do mobile manipulation in a home or workplace environment. In practice, this meant figuring out exactly what it was absolutely necessary for Stretch to be able to do. With an emphasis on manipulation, that meant defining the workspace of the robot, or what areas it’s able to usefully reach. “That was one thing we really had to push hard on,” says Edsinger. “Reachability.” He explains that reachability and a small mobile base tend not to go together, because robot arms (which tend to weigh a lot) can cause a small base to tip, especially if they’re moving while holding a payload. At the same time, Stretch needed to be able to access both countertops and the floor, while being able to reach out far enough to hand people things without having to be right next to them. To come up with something that could meet all those requirements, Edsinger and Kemp set out to reinvent the robot arm.

Stretch’s key innovation: a stretchable arm
The design they came up with is rather ingenious in its simplicity and how well it works. Edsinger explains that the arm consists of five telescoping links: one fixed and four moving. They are constructed of custom carbon fiber, and are driven by a single motor, which is attached to the robot’s vertical pole. The strong, lightweight structure allows the arm to extend over half a meter and hold up to 1.5 kg. Although the company has a patent pending for the design, Edsinger declined to say whether the links are driven by a belt, cables, or gears. “We don’t want to disclose too much of the secret sauce [with regard to] the drive mechanism.” He added that the arm was “one of the most significant engineering challenges on the robot in terms of getting the desired reach, compactness, precision, smoothness, force sensitivity, and low cost to all happily coexist.”

Photo: Hello Robot

Stretch’s arm consists of five telescoping links constructed of custom carbon fiber and driven by a single motor attached to the robot’s vertical pole, minimizing weight and inertia. The arm has a reach of over half a meter and can hold up to 1.5 kg.

Another interesting feature of Stretch is its interface with the world—its gripper. There are countless different gripper designs out there, each and every one of which is the best at gripping some particular subset of things. But making a generalized gripper for all of the stuff that you’d find in a home is exceptionally difficult. Ideally, you’d want some sort of massive experimental test program where thousands and thousands of people test out different gripper designs in their homes for long periods of time and then tell you which ones work best. Obviously, that’s impractical for a robotics startup, but Kemp realized that someone else was already running the study for him: Amazon.

“I had this idea that there are these assistive grabbers that people with disabilities use to grasp objects in the real world,” he told us. Kemp went on Amazon’s website and looked at the top 10 grabbers and the reviews from thousands of users. He then bought a bunch of different ones and started testing them. “This one [Stretch’s gripper], I almost didn’t order it, it was such a weird looking thing,” he says. “But it had great reviews on Amazon, and oh my gosh, it just blew away the other grabbers. And I was like, that’s it. It just works.”

Stretch’s teleoperated and autonomous capabilities
As with any robot intended to be useful outside of a structured environment, hardware is only part of the story, and arguably not even the most important part. In order for Stretch to be able to operate out from under the supervision of a skilled roboticist, it has to be either easy to control, or autonomous. Ideally, it’s both, and that’s what Hello Robot is working towards, although things didn’t start out that way, Kemp explains. “From a minimalist standpoint, we began with the notion that this would be a teleoperated robot. But in the end, you just don’t get the real power of the robot that way, because you’re tied to a person doing stuff. As much as we fought it, autonomy really is a big part of the future for this kind of system.”

Here’s a look at some of Stretch’s teleoperated capabilities. We’re told that Stretch is very easy to get going right out of the box, although this teleoperation video from Hello Robot looks like it’s got a skilled and experienced user in the loop:

For such a low-cost platform, the autonomy (even at this early stage) is particularly impressive:

Since it’s not entirely clear from the video exactly what’s autonomous, here’s a brief summary of a couple of the more complex behaviors that Kemp sent us:

Object grasping: Stretch uses its 3D camera to find the nearest flat surface using a virtual overhead view. It then segments significant blobs on top of the surface. It selects the largest blob in this virtual overhead view and fits an ellipse to it. It then generates a grasp plan that makes use of the center of the ellipse and the major and minor axes. Once it has a plan, Stretch orients its gripper, moves to the pre-grasp pose, moves to the grasp pose, closes its gripper based on the estimated object width, lifts up, and retracts. (A rough code sketch of this flow appears after this list.)
Mapping, navigating, and reaching to a 3D point: These demonstrations all use FUNMAP (Fast Unified Navigation, Manipulation and Planning). It’s all novel custom Python code. Even a single head scan performed by panning the 3D camera around can result in a very nice 3D representation of Stretch’s surroundings that includes the nearby floor. This is surprisingly unusual for robots, which often have their cameras too low to see many interesting things in a human environment. While mapping, Stretch selects where to scan next in a non-trivial way that considers factors such as the quality of previous observations, expected new observations, and navigation distance. The plan that Stretch uses to reach the target 3D point has been optimized for navigation and manipulation. For example, it finds a final robot pose that provides a large manipulation workspace for Stretch, which must consider nearby obstacles, including obstacles on the ground.
Object handover: This is a simple demonstration of object handovers. Stretch performs Cartesian motions to move its gripper to a body-relative position using a good motion heuristic, which is to extend the arm as the last step. These simple motions work well due to the design of Stretch. It still surprises me how well it moves the object to comfortable places near my body, and how unobtrusive it is. The goal point is specified relative to a 3D frame attached to the person’s mouth estimated using deep learning models (shown in the RViz visualization video). Specifically, Stretch targets handoff at a 3D point that is 20 cm below the estimated position of the mouth and 25 cm away along the direction of reaching.
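
The object-grasping behavior above follows a clear sequence: overhead view, blob selection, ellipse fit, grasp plan. Here is a minimal Python sketch of that flow, with segmentation replaced by a toy blob and the ellipse fit approximated by PCA; none of this is Hello Robot's actual FUNMAP code, and all names are invented.

```python
import numpy as np

def fit_ellipse(blob_pixels):
    """Approximate a pixel blob by its centroid and principal axes (via PCA)."""
    pts = np.asarray(blob_pixels, dtype=float)
    center = pts.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov((pts - center).T))
    major, minor = eigvecs[:, 1], eigvecs[:, 0]                 # axis directions
    return center, major, minor, 2.0 * np.sqrt(eigvals[::-1])   # axis lengths

def plan_grasp(overhead_view_blobs):
    """Pick the largest blob in the virtual overhead view and plan a grasp
    around the ellipse fitted to it."""
    largest = max(overhead_view_blobs, key=len)
    center, major, minor, (major_len, minor_len) = fit_ellipse(largest)
    return {
        "pre_grasp_xy": center,        # move above the ellipse center
        "approach_axis": minor,        # close the gripper across the minor axis
        "estimated_width": minor_len,  # sets how far the gripper closes
    }

# Toy blob standing in for a segmented object on a flat surface.
blob = [(x, y) for x in range(40, 60) for y in range(45, 55)]
print(plan_grasp([blob]))
```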

Many of these autonomous capabilities come directly from Kemp’s lab, and the demo code is available for anyone to use. (Hello Robot says all of Stretch’s software is open source.)

Photo: Hello Robot

Hello Robot co-founder and CEO Aaron Edsinger says Stretch is designed to work with people in homes and workplaces and can be teleoperated to do a variety of tasks, including picking up toys, removing laundry from a dryer, and playing games with kids.

As of right now, Stretch is very much a research platform. You’re going to see it in research labs doing research things, and hopefully in homes and commercial spaces as well, but still under the supervision of professional roboticists. As you may have guessed, though, Hello Robot’s vision is a bit broader than that. “The impact we want to have is through robots that are helpful to people in society,” Edsinger says. “We think primarily in the home context, but it could be in healthcare, or in other places. But we really want to have our robots be impactful, and useful. To us, useful is exciting.” Adds Kemp: “I have a personal bias, but we’d really like this technology to benefit older adults and caregivers. Rather than creating a specialized assistive device, we want to eventually create an inexpensive consumer device for everyone that does lots of things.”

Neither Edsinger nor Kemp would say much more on this for now, and they were very explicit about why—they’re being deliberately cautious about raising expectations, having seen what’s happened to some other robotics companies over the past few years. Without VC funding (Hello Robot is currently bootstrapping itself into existence), Stretch is being sold entirely on its own merits. So far, it seems to be working. Stretch robots are already in a half dozen research labs, and we expect that with today’s announcement, we’ll start seeing them much more frequently.

This article appears in the October 2020 print issue as “A Robot That Keeps It Simple.”

Posted in Human Robots