Tag Archives: how

#439208 How a slender, snake-like robot could ...

You might call it “zoobotics.” Jessica Burgner-Kahrs, the director of the Continuum Robotics Lab at U of T Mississauga, and her team are building very slender, flexible and extensible robots, a few millimeters in diameter, for use in surgery and industry. Unlike humanoid robots, so-called continuum robots feature a long, limbless body—not unlike a snake's—that allows them to access difficult-to-reach places. Continue reading

Posted in Human Robots

#439200 How Disney Imagineering Crammed a ...

From what I’ve seen of humanoid robotics, there’s a fairly substantial divide between what folks in the research space traditionally call robotics, and something like animatronics, which tends to be much more character-driven.

There’s plenty of technology embodied in animatronic robotics, but usually under some fairly significant constraints—like, they’re not autonomously interactive, or they’re stapled to the floor and tethered for power, things like that. And there are reasons for doing it this way: namely, dynamic untethered humanoid robots are already super hard, so why would anyone stress themselves out even more by trying to make them into an interactive character at the same time? That would be crazy!

At Walt Disney Imagineering, which is apparently full of crazy people, they’ve spent the last three years working on Project Kiwi: a dynamic untethered humanoid robot that’s an interactive character at the same time. We asked them (among other things) just how they managed to stuff all of the stuff they needed to stuff into that costume, and how they expect to enable children (of all ages) to interact with the robot safely.

Project Kiwi is an untethered bipedal humanoid robot that Disney Imagineering designed not just to walk without falling over, but to walk without falling over with some character. At about 0.75 meters tall, Kiwi is a bit bigger than a NAO and a bit smaller than an iCub, and it’s just about completely self-contained, with the tether you see in the video being used for control rather than for power. Kiwi can manage 45 minutes of operating time, which is pretty impressive considering its size and the fact that it incorporates a staggering 50 degrees of freedom, a requirement for lifelike motion.

This version of the robot is just a prototype, and it sounds like there’s plenty to do in terms of hardware optimization to improve efficiency and add sensing and interactivity. The most surprising thing to me is that this is not a stage robot: Disney does plan to have some future version of Kiwi wandering around and interacting directly with park guests, and I’m sure you can imagine how that’s likely to go. Interaction at this level, where there’s a substantial risk of small children tackling your robot with a vicious high-speed hug, could be a uniquely Disney problem for a robot with this level of sophistication. And it’s one of the reasons they needed to build their own robot—when Universal Studios decided to try out a Steampunk Spot, for example, they had to put a fence plus a row of potted plants between it and any potential hugs, because Spot is very much not a hug-safe robot.

So how the heck do you design a humanoid robot from scratch with personality and safe human interaction in mind? We asked Scott LaValley, Project Kiwi lead, who came to Disney Imagineering by way of Boston Dynamics and some of our favorite robots ever (including RHex, PETMAN, and Atlas), to explain how they pulled it off.

IEEE Spectrum: What are some of the constraints of Disney’s use case that meant you had to develop your own platform from the ground up?

Scott LaValley: First and foremost, we had to consider the packaging constraints. Our robot was always intended to serve as a bipedal character platform capable of taking on the role of a variety of our small-size characters. While we can sometimes take artistic liberties, for the most part, the electromechanical design had to fit within a minimal character profile to allow the robot to be fully themed with shells, skin, and costuming. When determining the scope of the project, a high-performance biped that matched our size constraints just did not exist.

Equally important was the ability to move with style and personality, or the “emotion of motion.” To really capture a specific character performance, a robotic platform must be capable of motions that range from fast and expressive to extremely slow and nuanced. In our case, this required developing custom high-speed actuators with the necessary torque density to be packaged into the mechanical structure. Each actuator is also equipped with a mechanical clutch and inline torque sensor to support low-stiffness control for compliant interactions and reduced vibration.
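LaValley doesn’t go into implementation details, but the low-stiffness control he describes boils down to a familiar idea: each joint behaves like a soft spring and damper pulling toward the animated pose, with a hard cap on how much torque it will ever apply. Below is a minimal Python sketch of that pattern, purely for illustration; the gains, torque limit, and class name are assumptions, not Kiwi’s actual controller.

```python
# Illustrative sketch of joint-level "low-stiffness" (impedance) control.
# Gains and limits are invented for this example, not Kiwi's real values.

class CompliantJoint:
    def __init__(self, stiffness=8.0, damping=0.4, torque_limit=4.0):
        self.k = stiffness            # N*m/rad, kept low so the limb gives way when grabbed
        self.d = damping              # N*m*s/rad
        self.tau_max = torque_limit   # hard ceiling on commanded torque

    def command(self, q_des, qd_des, q, qd):
        """Spring-damper law pulling the joint toward the animated trajectory (q_des, qd_des)."""
        tau = self.k * (q_des - q) + self.d * (qd_des - qd)
        return max(-self.tau_max, min(self.tau_max, tau))   # torque-limit the request

# Example: the shoulder is 0.6 rad behind its animated target (perhaps a guest is holding
# the arm), so the commanded torque saturates at tau_max instead of fighting harder.
shoulder = CompliantJoint()
print(shoulder.command(q_des=0.8, qd_des=0.0, q=0.2, qd=0.0))   # 4.0 (clamped from 4.8)
```

The low stiffness is what lets a grabbed limb give way instead of fighting back, and the torque clamp bounds how hard the actuator can ever push.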

Designing custom hardware also allowed us to include additional joints that are uncommon in humanoid robots. For example, the clavicle and shoulder alone include five degrees of freedom to support a shrug function and an extended configuration space for more natural gestures. We were also able to integrate onboard computing to support interactive behaviors.

What compromises were required to make sure that your robot was not only functional, but also capable of becoming an expressive character?

As mentioned previously, we face serious challenges in terms of packaging and component selection due to the small size and character profile. This has led to a few compromises on the design side. For example, we currently rely on rigid-flex circuit boards to fit our electronics onto the available surface area of our parts without additional cables or connectors. Unfortunately, these boards are harder to design and manufacture than standard rigid boards, increasing complexity, cost, and build time. We might also consider increasing the size of the hip and knee actuators if they no longer needed to fit within a themed costume.

Designing a reliable walking robot is in itself a significant challenge, but adding style and personality to each motion is a new layer of complexity. From a software perspective, we spend a significant amount of time developing motion planning and animation tools that allow animators to author stylized gaits, gestures, and expressions for physical characters. Unfortunately, unlike on-screen characters, we do not have the option to bend the laws of physics and must validate each motion through simulation. As a result, we are currently limited to stylized walking and dancing on mostly flat ground, but we hope to be skipping up stairs in the future!

Of course, there is always more that can be done to better match the performance you would expect from a character. We are excited about some things we have in the pipeline, including a next generation lower body and an improved locomotion planner.

How are you going to make this robot safe for guests to be around?

First let us say, we take safety extremely seriously, and it is a top priority for any Disney experience. Ultimately, we do intend to allow interactions with guests of all ages, but it will take a measured process to get there. Proper safety evaluation is a big part of productizing any Research & Development project, and we plan to conduct playtests with our Imagineers, cast members and guests along the way. Their feedback will help determine exactly what an experience with a robotic character will look like once implemented.

From a design standpoint, we believe that small characters are the safest type of biped for human-robot interaction due to their reduced weight and low center of mass. We are also employing compliant control strategies to ensure that the robot’s actuators are torque-limited and backdrivable. Perception and behavior design may also play a key role, but in the end, we will rely on proper show design to permit a safe level of interaction as the technology evolves.
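Disney hasn’t published its safety stack, but one common pattern for compliant, torque-sensing joints is a watchdog that compares the torque the controller commands with the torque the inline sensor actually measures, and yields when the two disagree too much (say, because a guest grabbed an arm). The sketch below illustrates only that general pattern; the threshold, function names, and back-off behavior are assumptions, not Kiwi’s actual safety system.

```python
# Illustrative contact-watchdog pattern for torque-sensing joints.
# Threshold and yielding behavior are invented for this example.

EXPECTED_TORQUE_MARGIN = 1.5   # N*m of allowed mismatch before we call it "contact"

def check_contact(tau_commanded, tau_measured, margin=EXPECTED_TORQUE_MARGIN):
    """Return True when measured joint torque deviates enough to imply an external force."""
    return abs(tau_measured - tau_commanded) > margin

def safety_step(joint_commands, joint_measurements):
    """Scale the whole performance down whenever any joint reports unexpected contact."""
    if any(check_contact(c, m) for c, m in zip(joint_commands, joint_measurements)):
        return [0.2 * c for c in joint_commands]   # yield: drop to a gentle fraction of commanded torque
    return joint_commands

# Example: joint 3 is being pushed on, so the command vector is attenuated.
print(safety_step([1.0, -0.5, 2.0], [1.1, -0.4, 4.2]))
```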

What do you think other roboticists working on legged systems could learn from Project Kiwi?

We are often inspired by other roboticists working on legged systems ourselves but would be happy to share some lessons learned. Remember that robotics is fundamentally interdisciplinary, and a good team typically consists of a mix of hardware and software engineers in close collaboration. In our experience, however, artists and animators play an equally valuable role in bringing a new vision to life. We often pull in ideas from the character animation and game development world, and while robotic characters are far more constrained than their virtual counterparts, we are solving many of the same problems. Another tip is to leverage motion studies (through animation, motion capture, and/or simulation tools) early in the design process to generate performance-driven requirements for any new robot.

Now that Project Kiwi has de-stealthed, I hope the Disney Imagineering folks will be able to be a little more open with all of the sweet goo inside of the fuzzy skin of this metaphor that has stopped making sense. Meeting a new humanoid robot is always exciting, and the approach here (with its technical capability combined with an emphasis on character and interaction) is totally unique. And if they need anyone to test Kiwi’s huggability, I volunteer! You know, for science. Continue reading

Posted in Human Robots

#439147 Robots Versus Toasters: How The Power of ...

Kate Darling is an expert on human-robot interaction, robot ethics, intellectual property, and all sorts of other things at the MIT Media Lab. She’s written several excellent articles for us in the past, and we’re delighted to be able to share this excerpt from her new book, which comes out today. Entitled The New Breed: What Our History with Animals Reveals about Our Future with Robots, Kate’s book is an exploration of how animals can help us understand our robot relationships, and how far that comparison can really be extended. It’s solidly based on well-cited research, including many HRI studies that we’ve written about in the past, but Kate brings everything together and tells us what it all could mean as robots continue to integrate themselves into our lives.

The following excerpt is The Power of Movement, a section from the chapter Robots Versus Toasters, which features one of the saddest robot videos I’ve ever seen, even after nearly a decade. Enjoy!

When the first black-and-white motion pictures came to the screen, an 1896 film showing in a Paris cinema is said to have caused a stampede: the first-time moviegoers, watching a giant train barrel toward them, jumped out of their seats and ran away from the screen in panic. According to film scholar Martin Loiperdinger, this story is no more than an urban legend. But this new media format, “moving pictures,” proved to be both immersive and compelling, and was here to stay. Thanks to a baked-in ability to interpret motion, we’re fascinated even by very simple animation because it tells stories we intuitively understand.

In a seminal study from the 1940s, psychologists Fritz Heider and Marianne Simmel showed participants a black-and-white movie of simple, geometrical shapes moving around on a screen. When instructed to describe what they were seeing, nearly every single one of their participants interpreted the shapes to be moving around with agency and purpose. They described the behavior of the triangles and circle the way we describe people’s behavior, by assuming intent and motives. Many of them went so far as to create a complex narrative around the moving shapes. According to one participant: “A man has planned to meet a girl and the girl comes along with another man. [ . . . ] The girl gets worried and races from one corner to the other in the far part of the room. [ . . . ] The girl gets out of the room in a sudden dash just as man number two gets the door open. The two chase around the outside of the room together, followed by man number one. But they finally elude him and get away. The first man goes back and tries to open his door, but he is so blinded by rage and frustration that he can not open it.”

What brought the shapes to life for Heider and Simmel’s participants was solely their movement. We can interpret certain movement in other entities as “worried,” “frustrated,” or “blinded by rage,” even when the “other” is a simple black triangle moving across a white background. A number of studies document how much information we can extract from very basic cues, getting us to assign emotions and gender identity to things as simple as moving points of light. And while we might not run away from a train on a screen, we’re still able to interpret the movement and may even get a little thrill from watching the train in a more modern 3D screening. (There are certainly some embarrassing videos of people—maybe even of me—when we first played games wearing virtual reality headsets.)

Many scientists believe that autonomous movement activates our “life detector.” Because we’ve evolved needing to quickly identify natural predators, our brains are on constant lookout for moving agents. In fact, our perception is so attuned to movement that we separate things into objects and agents, even if we’re looking at a still image. Researchers Joshua New, Leda Cosmides, and John Tooby showed people photos of a variety of scenes, like a nature landscape, a city scene, or an office desk. Then, they switched in an identical image with one addition; for example, a bird, a coffee mug, an elephant, a silo, or a vehicle. They measured how quickly the participants could identify the new appearance. People were substantially quicker and more accurate at detecting the animals compared to all of the other categories, including larger objects and vehicles.

The researchers also found evidence that animal detection activated an entirely different region of people’s brains. Research like this suggests that a specific part of our brain is constantly monitoring for lifelike animal movement. This study in particular also suggests that our ability to separate animals and objects is more likely to be driven by deep ancestral priorities than our own life experiences. Even though we have been living with cars for our whole lives, and they are now more dangerous to us than bears or tigers, we’re still much quicker to detect the presence of an animal.

The biological hardwiring that detects and interprets life in autonomous agent movement is even stronger when it has a body and is in the room with us. John Harris and Ehud Sharlin at the University of Calgary tested this projection with a moving stick. They took a long piece of wood, about the size of a twirler’s baton, and attached one end to a base with motors and eight degrees of freedom. This allowed the researchers to control the stick remotely and wave it around: fast, slow, doing figure eights, etc. They asked the experiment participants to spend some time alone in a room with the moving stick. Then, they had the participants describe their experience.

Only two of the thirty participants described the stick’s movement in technical terms. The others told the researchers that the stick was bowing or otherwise greeting them, claimed it was aggressive and trying to attack them, described it as pensive, “hiding something,” or even “purring happily.” At least ten people said the stick was “dancing.” One woman told the stick to stop pointing at her.

If people can imbue a moving stick with agency, what happens when they meet R2-D2? Given our social tendencies and ingrained responses to lifelike movement in our physical space, it’s fairly unsurprising that people perceive robots as being alive. Robots are physical objects in our space that often move in a way that seems (to our lizard brains) to have agency. A lot of the time, we don’t perceive robots as objects—to us, they are agents. And, while we may enjoy the concept of pet rocks, we love to anthropomorphize agent behavior even more.

We already have a slew of interesting research in this area. For example, people think a robot that’s present in a room with them is more enjoyable than the same robot on a screen and will follow its gaze, mimic its behavior, and be more willing to take the physical robot’s advice. We speak more to embodied robots, smile more, and are more likely to want to interact with them again. People are more willing to obey orders from a physical robot than a computer. When left alone in a room and given the opportunity to cheat on a game, people cheat less when a robot is with them. And children learn more from working with a robot compared to the same character on a screen. We are better at recognizing a robot’s emotional cues and empathize more with physical robots. When researchers told children to put a robot in a closet (while the robot protested and said it was afraid of the dark), many of the kids were hesitant.

Even adults will hesitate to switch off or hit a robot, especially when they perceive it as intelligent. People are polite to robots and try to help them. People greet robots even if no greeting is required and are friendlier if a robot greets them first. People reciprocate when robots help them. And, like the socially inept [software office assistant] Clippy, when people don’t like a robot, they will call it names. What’s noteworthy in the context of our human comparison is that the robots don’t need to look anything like humans for this to happen. In fact, even very simple robots, when they move around with “purpose,” elicit an inordinate amount of projection from the humans they encounter. Take robot vacuum cleaners. By 2004, a million of them had been deployed and were sweeping through people’s homes, vacuuming dirt, entertaining cats, and occasionally getting stuck in shag rugs. The first versions of the disc-shaped devices had sensors to detect things like steep drop-offs, but for the most part they just bumbled around randomly, changing direction whenever they hit a wall or a chair.

iRobot, the company that makes the most popular version (the Roomba), soon noticed that their customers would send their vacuum cleaners in for repair with names (Dustin Bieber being one of my favorites). Some Roomba owners would talk about their robot as though it were a pet. People who sent in malfunctioning devices would complain about the company’s generous policy to offer them a brand-new replacement, demanding that they instead fix “Meryl Sweep” and send her back. The fact that the Roombas roamed around on their own lent them a social presence that people’s traditional, handheld vacuum cleaners lacked. People decorated them, talked to them, and felt bad for them when they got tangled in the curtains.

Tech journalists reported on the Roomba’s effect, calling robovacs “the new pet craze.” A 2007 study found that many people had a social relationship with their Roombas and would describe them in terms that evoked people or animals. Today, over 80 percent of Roombas have names. I don’t have access to naming statistics for the handheld Dyson vacuum cleaner, but I’m pretty sure the number is lower.

Robots are entering our lives in many shapes and forms, and even some of the most simple or mechanical robots can prompt a visceral response. And the design of robots isn’t likely to shift away from evoking our biological reactions—especially because some robots are designed to mimic lifelike movement on purpose.

Excerpted from THE NEW BREED: What Our History with Animals Reveals about Our Future with Robots by Kate Darling. Published by Henry Holt and Company. Copyright © 2021 by Kate Darling. All rights reserved.

Kate’s book is available today from Annie Bloom’s Books in SW Portland, Oregon. It’s also available from Powell’s Books, and if you don’t have the good fortune of living in Portland, you can find it in both print and digital formats pretty much everywhere else books are sold.

As for Robovie, the claustrophobic robot that kept getting shoved in a closet, we recently checked in with Peter Kahn, the researcher who created the experiment nearly a decade ago, to make sure that the poor robot ended up okay. “Robovie is doing well,” Kahn told us. “He visited my lab on 2-3 other occasions and participated in other experiments. Now he’s back in Japan with the person who helped make him, and who cares a lot about him.” That person is Takayuki Kanda at ATR, who we’re happy to report is still working with Robovie in the context of human-robot interaction. Thanks Robovie! Continue reading

Posted in Human Robots

#439110 Robotic Exoskeletons Could One Day Walk ...

Engineers, using artificial intelligence and wearable cameras, now aim to help robotic exoskeletons walk by themselves.

Increasingly, researchers around the world are developing lower-body exoskeletons to help people walk. These are essentially walking robots users can strap to their legs to help them move.

One problem with such exoskeletons: They often depend on manual controls to switch from one mode of locomotion to another, such as from sitting to standing, or standing to walking, or walking on the ground to walking up or down stairs. Relying on joysticks or smartphone apps every time you want to change how you move can prove awkward and mentally taxing, says Brokoslaw Laschowski, a robotics researcher at the University of Waterloo in Canada.

Scientists are working on automated ways to help exoskeletons recognize when to switch locomotion modes — for instance, using sensors attached to legs that can detect bioelectric signals sent from your brain to your muscles telling them to move. However, this approach comes with a number of challenges, such as how skin conductivity can change as a person’s skin gets sweatier or dries off.

Now several research groups are experimenting with a new approach: fitting exoskeleton users with wearable cameras to provide the machines with vision data that will let them operate autonomously. Artificial intelligence (AI) software can analyze this data to recognize stairs, doors, and other features of the surrounding environment and calculate how best to respond.

Laschowski leads the ExoNet project, the first open-source database of high-resolution wearable camera images of human locomotion scenarios. It holds more than 5.6 million images of indoor and outdoor real-world walking environments. The team used this data to train deep-learning algorithms; their convolutional neural networks can already automatically recognize different walking environments with 73 percent accuracy “despite the large variance in different surfaces and objects sensed by the wearable camera,” Laschowski notes.
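To make the idea concrete, here is a minimal sketch of a wearable-camera environment classifier, assuming PyTorch. The three class names and the tiny from-scratch network are illustrative stand-ins, not the ExoNet team’s actual label set or architecture.

```python
# Illustrative walking-environment classifier for wearable-camera frames.
# Classes, architecture, and sizes are invented for this sketch.

import torch
import torch.nn as nn

CLASSES = ["level_ground", "incline_stairs", "decline_stairs"]  # hypothetical label set

class EnvClassifier(nn.Module):
    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = EnvClassifier()
frame = torch.rand(1, 3, 224, 224)           # one RGB frame from the wearable camera
probs = model(frame).softmax(dim=1)
print(CLASSES[int(probs.argmax())])           # predicted locomotion environment
```

In a deployed system, the predicted label would feed the exoskeleton’s controller so it can switch locomotion modes without the user touching a joystick.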

According to Laschowski, a potential limitation of their work is their reliance on conventional 2-D images, whereas depth cameras could also capture potentially useful distance data. He and his collaborators ultimately chose not to rely on depth cameras for a number of reasons, including the fact that the accuracy of depth measurements typically degrades in outdoor lighting and with increasing distance, he says.

In similar work, researchers in North Carolina had volunteers with cameras either mounted on their eyeglasses or strapped onto their knees walk through a variety of indoor and outdoor settings to capture the kind of image data exoskeletons might use to see the world around them. The aim? “To automate motion,” says Edgar Lobaton, an electrical engineering researcher at North Carolina State University. He says they are focusing on how AI software might reduce uncertainty due to factors such as motion blur or overexposed images “to ensure safe operation. We want to ensure that we can really rely on the vision and AI portion before integrating it into the hardware.”

In the future, Laschowski and his colleagues will focus on improving the accuracy of their environmental analysis software with low computational and memory storage requirements, which are important for onboard, real-time operations on robotic exoskeletons. Lobaton and his team also seek to account for uncertainty introduced into their visual systems by movement.

Ultimately, the ExoNet researchers want to explore how AI software can transmit commands to exoskeletons so they can perform tasks such as climbing stairs or avoiding obstacles based on a system’s analysis of a user's current movements and the upcoming terrain. With autonomous cars as inspiration, they are seeking to develop autonomous exoskeletons that can handle the walking task without human input, Laschowski says.

However, Laschowski adds, “User safety is of the utmost importance, especially considering that we're working with individuals with mobility impairments,” resulting perhaps from advanced age or physical disabilities. “The exoskeleton user will always have the ability to override the system should the classification algorithm or controller make a wrong decision.” Continue reading

Posted in Human Robots

#439105 This Robot Taught Itself to Walk in a ...

Recently, in a Berkeley lab, a robot called Cassie taught itself to walk, a little like a toddler might. Through trial and error, it learned to move in a simulated world. Then its handlers sent it strolling through a minefield of real-world tests to see how it’d fare.

And, as it turns out, it fared pretty damn well. With no further fine-tuning, the robot—which is basically just a pair of legs—was able to walk in all directions, squat down while walking, right itself when pushed off balance, and adjust to different kinds of surfaces.

It’s the first time a machine learning approach known as reinforcement learning has been so successfully applied in two-legged robots.

This likely isn’t the first robot video you’ve seen, nor the most polished.

For years, the internet has been enthralled by videos of robots doing far more than walking and regaining their balance. All that is table stakes these days. Boston Dynamics, the heavyweight champ of robot videos, regularly releases mind-blowing footage of robots doing parkour, back flips, and complex dance routines. At times, it can seem the world of iRobot is just around the corner.

This sense of awe is well-earned. Boston Dynamics is one of the world’s top makers of advanced robots.

But they still have to meticulously hand program and choreograph the movements of the robots in their videos. This is a powerful approach, and the Boston Dynamics team has done incredible things with it.

In real-world situations, however, robots need to be robust and resilient. They need to regularly deal with the unexpected, and no amount of choreography will do. Which is how, it’s hoped, machine learning can help.

Reinforcement learning has been most famously exploited by Alphabet’s DeepMind to train algorithms that thrash humans at some of the most difficult games. Simplistically, it’s modeled on the way we learn. Touch the stove, get burned, don’t touch the damn thing again; say please, get a jelly bean, politely ask for another.
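To make that loop concrete, here is a toy Python version of reinforcement learning at its most stripped-down: a one-state, two-action problem in which touching the stove is punished and asking politely is rewarded. Every number in it is invented for illustration, and real systems like DeepMind’s replace the lookup table with large neural networks.

```python
# Toy reward-driven learning loop, invented for illustration only.

import random

ACTIONS = ["touch_stove", "ask_politely"]
REWARD = {"touch_stove": -1.0, "ask_politely": +1.0}
q = {a: 0.0 for a in ACTIONS}   # the agent's current estimate of each action's value
alpha, epsilon = 0.1, 0.2       # learning rate and exploration rate

for step in range(500):
    # Explore occasionally; otherwise pick the best-looking action so far.
    a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
    r = REWARD[a]
    q[a] += alpha * (r - q[a])  # nudge the estimate toward the observed reward

print(q)  # after training, "ask_politely" dominates and the agent stops touching the stove
```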

In Cassie’s case, the Berkeley team used reinforcement learning to train an algorithm to walk in a simulation. It’s not the first AI to learn to walk in this manner. But skills learned in simulation don’t always translate to the real world.

Subtle differences between the two can (literally) trip up a fledgling robot as it tries out its sim skills for the first time.

To overcome this challenge, the researchers used two simulations instead of one. The first simulation, an open source training environment called MuJoCo, was where the algorithm drew upon a large library of possible movements and, through trial and error, learned to apply them. The second simulation, called Matlab SimMechanics, served as a low-stakes testing ground that more precisely matched real-world conditions.
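To illustrate that workflow (and only the workflow, not the Berkeley team’s actual algorithms), here is a toy Python sketch: a controller is tuned in a cheap “training” simulation, then gated on a second “validation” simulation with deliberately different physics before it would ever touch hardware. The balance dynamics, the crude gain search standing in for reinforcement learning, and the 80 percent pass bar are all illustrative assumptions.

```python
# Toy two-simulator workflow: tune in one set of physics, validate in another.
# Dynamics, parameters, and thresholds are invented for this sketch.

import random

def run_episode(policy_gain, mass, friction, steps=200):
    """Toy 1-D balance task: return True if the tilt angle stays small for the whole episode."""
    angle, velocity = random.uniform(-0.05, 0.05), 0.0
    for _ in range(steps):
        torque = -policy_gain * angle - 0.1 * velocity            # simple proportional-derivative policy
        accel = (9.81 * angle + torque / mass) - friction * velocity
        velocity += 0.01 * accel
        angle += 0.01 * velocity
        if abs(angle) > 0.5:
            return False
    return True

# "Train" (here: a crude gain search) in the cheap simulator...
train_physics = dict(mass=1.0, friction=0.02)
best_gain = max(range(5, 60, 5),
                key=lambda g: (sum(run_episode(g, **train_physics) for _ in range(50)), g))
# ...preferring the best success rate and, on ties, the stiffest gain.

# Then validate in a second simulator whose physics are deliberately a bit different.
valid_physics = dict(mass=1.2, friction=0.05)
success = sum(run_episode(best_gain, **valid_physics) for _ in range(100)) / 100
print(f"gain={best_gain}, validation success rate={success:.0%}")
if success > 0.8:
    print("Policy cleared the higher-fidelity check; ready to try on the real robot.")
```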

Once the algorithm was good enough, it graduated to Cassie.

And amazingly, it didn’t need further polishing. Said another way, when it was born into the physical world, it already knew how to walk just fine. It was also quite robust: the researchers write that two motors in Cassie’s knee malfunctioned during the experiment, but the robot was able to adjust and keep on trucking.

Other labs have been hard at work applying machine learning to robotics.

Last year Google used reinforcement learning to train a (simpler) four-legged robot. And OpenAI has used it with robotic arms. Boston Dynamics, too, will likely explore ways to augment their robots with machine learning. New approaches—like this one aimed at training multi-skilled robots or this one offering continuous learning beyond training—may also move the dial. It’s early yet, however, and there’s no telling when machine learning will exceed more traditional methods.

And in the meantime, Boston Dynamics bots are testing the commercial waters.

Still, robotics researchers outside the Berkeley team think the approach is promising. Edward Johns, head of Imperial College London’s Robot Learning Lab, told MIT Technology Review, “This is one of the most successful examples I have seen.”

The Berkeley team hopes to build on that success by trying out “more dynamic and agile behaviors.” So, might a self-taught parkour-Cassie be headed our way? We’ll see.

Image Credit: University of California Berkeley Hybrid Robotics via YouTube Continue reading

Posted in Human Robots