Category Archives: Human Robots

Everything about Humanoid Robots and Androids

#439153 OTTO Motors’ Biggest AMR Gets ...

Over the last few weeks, we’ve posted several articles about the next generation of warehouse manipulation robots designed to handle the non-stop stream of boxes that forms the foundation of modern e-commerce. But once these robots take boxes out of the back of a trailer or off of a pallet, there are yet more robots ready to autonomously continue the flow through a warehouse or distribution center. One of the beefiest of these autonomous mobile robots is the OTTO 1500, which is called the OTTO 1500 because (you guessed it) it can handle 1500 kg of cargo. Plus another 400 kg of cargo, for a total of 1900 kg of cargo. Yeah, I don’t get it either. Anyway, it’s undergone a major update, which is a good excuse for us to ask OTTO CTO Ryan Gariepy some questions about it.

The earlier version, also named OTTO 1500, has over a million hours of real-world operation, which is impressive. Even more impressive is being able to move that much stuff that quickly without being a huge safety hazard in warehouse environments full of unpredictable humans. Although that might become less of a problem over time, as other robots take over some of the tasks that humans have been doing. OTTO Motors and Clearpath Robotics have an ongoing partnership with Boston Dynamics, and we fully expect to see these AMRs hauling boxes for Stretch in the near future.

For a bit more, we spoke with OTTO CTO Ryan Gariepy via email.

IEEE Spectrum: What are the major differences between today’s OTTO 1500 and the one introduced six years ago, and why did you decide to make those changes?

Ryan Gariepy: Six years isn’t a long shelf life for an industrial product, but it’s a lifetime in the software world. We took the original OTTO 1500 and stripped it down to the chassis and drivetrain, and rebuilt it with more modern components (embedded controller, state-of-the-art sensors, next-generation lithium batteries, and more). But the biggest difference is in how we’ve integrated our autonomous software and our industrial safety systems. Our systems are safe throughout the entirety of the vehicle dynamics envelope, from straight-line motion to aggressive turning at speed in tight spaces. It corners at 2 m/s and has 60 percent more throughput. No “simple rectangular” footprints here! On top of this, the entire customization, development, and validation process is done in a way which respects that our integration partners need to be able to take advantage of these capabilities themselves without needing to become experts in vehicle dynamics.
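To put that cornering claim in rough physical terms, the steady-turn lateral acceleration is just v²/r. The sketch below computes it for a few turn radii; the radii are illustrative assumptions on our part, not OTTO specifications.

```python
# Back-of-the-envelope check: lateral acceleration for a vehicle
# cornering at a constant speed. Turn radii here are assumptions
# for illustration, not published OTTO 1500 figures.
def lateral_acceleration(speed_m_s: float, radius_m: float) -> float:
    """Centripetal acceleration a = v^2 / r for a steady turn."""
    return speed_m_s ** 2 / radius_m

for radius in (1.0, 2.0, 4.0):
    a = lateral_acceleration(2.0, radius)
    print(f"r = {radius} m: a = {a:.1f} m/s^2 ({a / 9.81:.2f} g)")
```

At a hypothetical 1 m radius, 2 m/s already means 4 m/s² of lateral acceleration on a multi-ton loaded vehicle, which hints at why the safety envelope can’t be a “simple rectangle.”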

As for “why now,” we’ve always known that an ecosystem of new sensors and controllers was going to emerge as the world caught on to the potential of heavy-load AMRs. We wanted to give the industry some time to settle out—making sure we had reliable and low-cost 3D sensors, for example, or industrial-grade fanless computers which can still mount a reasonable GPU, or modular battery systems built in view of new certification requirements. And, possibly most importantly, partners who see the promise of the market enough to accommodate our feedback in their product roadmaps.

How has the reception differed between the original introduction of the OTTO 1500 and the new version?

That’s like asking the difference between the public reception to the introduction of the first iPod in 2001 and the first iPhone in 2007. When we introduced our first AMR, very few people had even heard of them, let alone purchased one before. We spent a great deal of time educating the market on the basic functionality of an AMR: what it is and how it works kind of stuff. Today’s buyers are way more sophisticated and experienced, and approach automation from a more strategic perspective. What was once a tactical purchase to plug a hole is now part of a larger automation initiative. And while the next generation of AMRs closely resemble the original models from the outside, the software functionality and integration capabilities are night and day.

What’s the most valuable lesson you’ve learned?

We knew that our customers needed incredible uptime: 365 days, 24/7 for 10 years is the typical expectation. Some of our competitors have AMRs working in facilities where they can go offline for a few minutes or a few hours without any significant repercussions to the workflow. That’s not the case with our customers, where any stoppage at any point means everything shuts down. And, of course, Murphy’s law all but guarantees that it shuts down at 4:00 a.m. on Saturday, Japan Standard Time. So the humbling lesson wasn’t learning that our customers wanted maintenance service levels with virtually no downtime; the humbling part was the degree of difficulty in building out a service organization as rapidly as we rolled out customer deployments. Every customer in a new geography needed local service infrastructure as well. Finally, service doesn’t mean anything without spare parts availability, which brings with it customs and shipping challenges. And, of course, as a Canadian company, we needed to build all of that international service and logistics infrastructure right from the beginning. Fortunately, the groundwork we’d laid with Clearpath Robotics served as a good foundation for this.

How were you able to develop a new product with COVID restrictions in place?

We knew we couldn’t take an entire OTTO 1500 and ship it to every engineer’s home that needed to work on one, so we came up with the next best thing. We call it a ‘wall-bot’ and it’s basically a deconstructed 1500 that our engineers can roll into their garage. We were pleasantly surprised with how effective this was, though it might be the heaviest dev kit in the robot world.

Also, don’t forget that much of robotics is software driven. Our software development life cycle had focused heavily on Gazebo-based simulation for years, since it’s infeasible to give every in-office developer a multi-ton loaded robot to play with, and we already had a redundant VPN setup for the office. Finally, we’ve always been a remote-work-friendly culture, ever since we started adopting telepresence robots and default-on videoconferencing in the pre-OTTO days. In retrospect, the largest area of improvement for us going forward is how quickly we can get people good home office setups amid a pandemic. Continue reading

Posted in Human Robots

#439151 Biohybrid soft robot with ...

A team of researchers working at the Barcelona Institute of Science and Technology has developed a skeletal-muscle-based biohybrid soft robot that can swim faster than other skeletal-muscle-based biobots. In their paper published in the journal Science Robotics, the group describes building and testing their soft robot. Continue reading

Posted in Human Robots

#439147 Robots Versus Toasters: How The Power of ...

Kate Darling is an expert on human-robot interaction, robot ethics, intellectual property, and all sorts of other things at the MIT Media Lab. She’s written several excellent articles for us in the past, and we’re delighted to be able to share this excerpt from her new book, which comes out today. Entitled The New Breed: What Our History with Animals Reveals about Our Future with Robots, Kate’s book is an exploration of how animals can help us understand our robot relationships, and how far that comparison can really be extended. It’s solidly based on well-cited research, including many HRI studies that we’ve written about in the past, but Kate brings everything together and tells us what it all could mean as robots continue to integrate themselves into our lives.

The following excerpt is The Power of Movement, a section from the chapter Robots Versus Toasters, which features one of the saddest robot videos I’ve ever seen, even after nearly a decade. Enjoy!

When the first black-and-white motion pictures came to the screen, an 1896 film showing in a Paris cinema is said to have caused a stampede: the first-time moviegoers, watching a giant train barrel toward them, jumped out of their seats and ran away from the screen in panic. According to film scholar Martin Loiperdinger, this story is no more than an urban legend. But this new media format, “moving pictures,” proved to be both immersive and compelling, and was here to stay. Thanks to a baked-in ability to interpret motion, we’re fascinated even by very simple animation because it tells stories we intuitively understand.

In a seminal study from the 1940s, psychologists Fritz Heider and Marianne Simmel showed participants a black-and-white movie of simple, geometrical shapes moving around on a screen. When instructed to describe what they were seeing, nearly every single one of their participants interpreted the shapes to be moving around with agency and purpose. They described the behavior of the triangles and circle the way we describe people’s behavior, by assuming intent and motives. Many of them went so far as to create a complex narrative around the moving shapes. According to one participant: “A man has planned to meet a girl and the girl comes along with another man. [ . . . ] The girl gets worried and races from one corner to the other in the far part of the room. [ . . . ] The girl gets out of the room in a sudden dash just as man number two gets the door open. The two chase around the outside of the room together, followed by man number one. But they finally elude him and get away. The first man goes back and tries to open his door, but he is so blinded by rage and frustration that he can not open it.”

What brought the shapes to life for Heider and Simmel’s participants was solely their movement. We can interpret certain movement in other entities as “worried,” “frustrated,” or “blinded by rage,” even when the “other” is a simple black triangle moving across a white background. A number of studies document how much information we can extract from very basic cues, getting us to assign emotions and gender identity to things as simple as moving points of light. And while we might not run away from a train on a screen, we’re still able to interpret the movement and may even get a little thrill from watching the train in a more modern 3D screening. (There are certainly some embarrassing videos of people—maybe even of me—when we first played games wearing virtual reality headsets.)

Many scientists believe that autonomous movement activates our “life detector.” Because we’ve evolved needing to quickly identify natural predators, our brains are on constant lookout for moving agents. In fact, our perception is so attuned to movement that we separate things into objects and agents, even if we’re looking at a still image. Researchers Joshua New, Leda Cosmides, and John Tooby showed people photos of a variety of scenes, like a nature landscape, a city scene, or an office desk. Then, they switched in an identical image with one addition; for example, a bird, a coffee mug, an elephant, a silo, or a vehicle. They measured how quickly the participants could identify the new appearance. People were substantially quicker and more accurate at detecting the animals compared to all of the other categories, including larger objects and vehicles.

The researchers also found evidence that animal detection activated an entirely different region of people’s brains. Research like this suggests that a specific part of our brain is constantly monitoring for lifelike animal movement. This study in particular also suggests that our ability to separate animals and objects is more likely to be driven by deep ancestral priorities than our own life experiences. Even though we have been living with cars for our whole lives, and they are now more dangerous to us than bears or tigers, we’re still much quicker to detect the presence of an animal.

The biological hardwiring that detects and interprets life in autonomous agent movement is even stronger when it has a body and is in the room with us. John Harris and Ehud Sharlin at the University of Calgary tested this projection with a moving stick. They took a long piece of wood, about the size of a twirler’s baton, and attached one end to a base with motors and eight degrees of freedom. This allowed the researchers to control the stick remotely and wave it around: fast, slow, doing figure eights, etc. They asked the experiment participants to spend some time alone in a room with the moving stick. Then, they had the participants describe their experience.

Only two of the thirty participants described the stick’s movement in technical terms. The others told the researchers that the stick was bowing or otherwise greeting them, claimed it was aggressive and trying to attack them, described it as pensive, “hiding something,” or even “purring happily.” At least ten people said the stick was “dancing.” One woman told the stick to stop pointing at her.

If people can imbue a moving stick with agency, what happens when they meet R2-D2? Given our social tendencies and ingrained responses to lifelike movement in our physical space, it’s fairly unsurprising that people perceive robots as being alive. Robots are physical objects in our space that often move in a way that seems (to our lizard brains) to have agency. A lot of the time, we don’t perceive robots as objects—to us, they are agents. And, while we may enjoy the concept of pet rocks, we love to anthropomorphize agent behavior even more.

We already have a slew of interesting research in this area. For example, people think a robot that’s present in a room with them is more enjoyable than the same robot on a screen and will follow its gaze, mimic its behavior, and be more willing to take the physical robot’s advice. We speak more to embodied robots, smile more, and are more likely to want to interact with them again. People are more willing to obey orders from a physical robot than a computer. When left alone in a room and given the opportunity to cheat on a game, people cheat less when a robot is with them. And children learn more from working with a robot compared to the same character on a screen. We are better at recognizing a robot’s emotional cues and empathize more with physical robots. When researchers told children to put a robot in a closet (while the robot protested and said it was afraid of the dark), many of the kids were hesitant.

Even adults will hesitate to switch off or hit a robot, especially when they perceive it as intelligent. People are polite to robots and try to help them. People greet robots even if no greeting is required and are friendlier if a robot greets them first. People reciprocate when robots help them. And, like the socially inept [software office assistant] Clippy, when people don’t like a robot, they will call it names. What’s noteworthy in the context of our human comparison is that the robots don’t need to look anything like humans for this to happen. In fact, even very simple robots, when they move around with “purpose,” elicit an inordinate amount of projection from the humans they encounter. Take robot vacuum cleaners. By 2004, a million of them had been deployed and were sweeping through people’s homes, vacuuming dirt, entertaining cats, and occasionally getting stuck in shag rugs. The first versions of the disc-shaped devices had sensors to detect things like steep drop-offs, but for the most part they just bumbled around randomly, changing direction whenever they hit a wall or a chair.
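The “bumble around randomly” behavior described above can be sketched as a simple bump-and-turn loop: drive straight until the bumper fires, then pick a random new heading. This is a hypothetical illustration of that behavior, not iRobot’s actual navigation code.

```python
import random

# Hypothetical sketch of early robot-vacuum navigation: go straight
# until an obstacle is hit, then turn away by a random angle.
def bump_and_turn_step(heading_deg: float, bumper_hit: bool,
                       rng: random.Random) -> float:
    """Return the new heading: unchanged if clear, else a random turn."""
    if bumper_hit:
        # Turn by 90-270 degrees so the robot doesn't drive straight
        # back into whatever it just bumped.
        heading_deg = (heading_deg + rng.uniform(90.0, 270.0)) % 360.0
    return heading_deg

rng = random.Random(42)
heading = 0.0
for hit in (False, True, False, True):
    heading = bump_and_turn_step(heading, hit, rng)
```

Despite having no map and no goal, this kind of purposeful-looking wandering is exactly the autonomous movement that gets a disc-shaped appliance treated like an agent.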

iRobot, the company that makes the most popular version (the Roomba), soon noticed that their customers would send their vacuum cleaners in for repair with names (Dustin Bieber being one of my favorites). Some Roomba owners would talk about their robot as though it were a pet. People who sent in malfunctioning devices would complain about the company’s generous policy to offer them a brand-new replacement, demanding that they instead fix “Meryl Sweep” and send her back. The fact that the Roombas roamed around on their own lent them a social presence that people’s traditional, handheld vacuum cleaners lacked. People decorated them, talked to them, and felt bad for them when they got tangled in the curtains.

Tech journalists reported on the Roomba’s effect, calling robovacs “the new pet craze.” A 2007 study found that many people had a social relationship with their Roombas and would describe them in terms that evoked people or animals. Today, over 80 percent of Roombas have names. I don’t have access to naming statistics for the handheld Dyson vacuum cleaner, but I’m pretty sure the number is lower.

Robots are entering our lives in many shapes and forms, and even some of the most simple or mechanical robots can prompt a visceral response. And the design of robots isn’t likely to shift away from evoking our biological reactions—especially because some robots are designed to mimic lifelike movement on purpose.

Excerpted from THE NEW BREED: What Our History with Animals Reveals about Our Future with Robots by Kate Darling. Published by Henry Holt and Company. Copyright © 2021 by Kate Darling. All rights reserved.

Kate’s book is available today from Annie Bloom’s Books in SW Portland, Oregon. It’s also available from Powell’s Books, and if you don’t have the good fortune of living in Portland, you can find it in both print and digital formats pretty much everywhere else books are sold.

As for Robovie, the claustrophobic robot that kept getting shoved in a closet, we recently checked in with Peter Kahn, the researcher who created the experiment nearly a decade ago, to make sure that the poor robot ended up okay. “Robovie is doing well,” Kahn told us. “He visited my lab on 2-3 other occasions and participated in other experiments. Now he’s back in Japan with the person who helped make him, and who cares a lot about him.” That person is Takayuki Kanda at ATR, who we’re happy to report is still working with Robovie in the context of human-robot interaction. Thanks Robovie! Continue reading

Posted in Human Robots

#439142 Scientists Grew Human Cells in Monkey ...

Few things in science freak people out more than human-animal hybrids. Named chimeras, after the mythical Greek creature that’s an amalgam of different beasts, these part-human, part-animal embryos have come onto the scene to transform our understanding of what makes us “human.”

If theoretically grown to term, chimeras would be an endless resource for replacement human organs. They’re a window into the very early stages of human development, allowing scientists to probe the mystery of the first dozen days after sperm-meets-egg. They could help map out how our brains build their early architecture, potentially solving the age-old question of why our neural networks are so powerful—and how their wiring could go wrong.

The trouble with all of this? The embryos are part human. The idea of human hearts or livers growing inside an animal may be icky, but tolerable, to some. Human neurons crafting a brain inside a hybrid embryo—potentially leading to consciousness—is a horror scenario. For years, scientists have flirted with ethical boundaries by mixing human cells with those of rats and pigs, which are relatively far from us in evolutionary terms, to reduce the chance of a mentally “humanized” chimera.

This week, scientists crossed a line.

In a study led by Dr. Juan Carlos Izpisua Belmonte, a prominent stem cell biologist at the Salk Institute for Biological Studies, the team reported the first vetted case of a human-monkey hybrid embryo.

Reflexive shudder aside, the study is a technological tour-de-force. The scientists were able to watch the hybrid embryo develop for 20 days outside the womb, far longer than any previous attempts. Putting the timeline into context, it’s about 20 percent of a monkey’s gestation period.

Although only 3 out of over 100 attempts survived past that point, the viable embryos contained a shockingly high amount of human cells—about one-third of the entire cell population. If able to further develop, those human contributions could, in theory, substantially form the biological architecture of the body, and perhaps the mind, of a human-monkey fetus.

I can’t stress this enough: the technology isn’t there yet to bring Planet of the Apes to life. Strict regulations also prohibit growing chimera embryos past the first few weeks. It’s telling that Izpisua Belmonte collaborated with Chinese labs, which have far fewer ethical regulations than the US.

But the line’s been crossed, and there’s no going back. Here’s what they did, why they did it, and reasons to justify—or limit—similar tests going forward.

What They Did
The way the team made the human-monkey embryo is similar to previous attempts at half-human chimeras.

Here’s how it goes. They used de-programmed, or “reverted,” human stem cells, called induced pluripotent stem cells (iPSCs). These cells often start from skin cells, and are chemically treated to revert to the stem cell stage, gaining back the superpower to grow into almost any type of cell: heart, lung, brain…you get the idea. The next step is preparing the monkey component, a fertilized and healthy monkey egg that develops for six days in a Petri dish. By this point, the embryo is ready for implantation into the uterus, which kicks off the whole development process.

This is where the chimera jab comes in. Using a tiny needle, the team injected each embryo with 25 human cells, and babied them for another day. “Until recently the experiment would have ended there,” wrote Drs. Hank Greely and Nita Farahany, two prominent bioethicists who wrote an accompanying expert take, but were not involved in the study.

But the team took it way further. Using a biological trick, the team got the embryos to attach to the Petri dish as they would to a womb. The human cells survived after the artificial “implantation,” and—surprisingly—tended to physically group together, away from monkey cells.

The weird segregation led the team to further explore why human cells don’t play nice with those of another species. Using a big data approach, the team scouted how genes in human cells talked to their monkey hosts. What’s surprising, the team said, is that adding human cells into the monkey embryos fundamentally changed both. Rather than each behaving as they would have in their normal environment, the two species of cells influenced each other, even when physically separated. The human cells, for example, tweaked the biochemical messengers that monkey cells—and the “goop” surrounding those cells—use to talk to one another.

In other words, in contrast to oil and water, human and monkey cells seemed to communicate and change the other’s biology without needing too much outside whisking. Human iPSCs began to behave more like monkey cells, whereas monkey embryos became slightly more human.

Ok, But Why?
The main reason the team went for a monkey hybrid, rather than the “safer” pig or rat alternative, was our similarity to monkeys. As the authors argue, being genetically “closer” in evolutionary terms makes it easier to form chimeras. In turn, the resulting embryos also make it possible to study early human development and build human tissues and organs for replacement.

“Historically, the generation of human-animal chimeras has suffered from low efficiency,” said Izpisua Belmonte. “Generation of a chimera between human and non-human primate, a species more closely related to humans along the evolutionary timeline than all previously used species, will allow us to gain better insight into whether there are evolutionarily imposed barriers to chimera generation and if there are any means by which we can overcome them.”

A Controversial Future
That argument isn’t convincing to some.

In terms of organ replacement, monkeys are very expensive (and cognitively advanced) donors compared to pigs, the latter of which have been the primary research host for growing human organs. While difficult to genetically engineer to fit human needs, pigs are more socially acceptable as organ “donors”—many of us don’t bat an eye at eating ham or bacon—whereas the concept of extracting humanoid tissue from monkeys is extremely uncomfortable.

A human-monkey hybrid could be especially helpful for studying neurodevelopment, but that directly butts heads with the “human cells in animal brains” problem. Even when such an embryo is not brought to term, it’s hard to imagine anyone who’s ready to study the brain of a potentially viable animal fetus with human cells wired into its neural networks.

There’s also the “sledgehammer” aspect of the study that makes scientists cringe. “Direct transplantation of cells into particular regions, or organs [of an animal], allows researchers to predict where and how the cells might integrate,” said Greely and Farahany. This means they might be able to predict if the injected human cells end up in a “boring” area, like the gallbladder, or a more “sensitive” area, like the brain. But with the current technique, we’re unsure where the human cells could eventually migrate to and grow.

Yet despite the ick factor, human-monkey embryos circumvent the ethical quandaries around using aborted tissue for research. These hybrid embryos may present the closest models to early human development that we can get without dipping into the abortion debate.

In their commentary, Greely and Farahany laid out four main aspects to consider before moving ahead with the controversial field. First and foremost is animal welfare, which is “especially true for non-human primates,” as they’re mentally close to us. There’s also the need for consent from the human donors whose cells form the basis of the injected iPSCs, as some may be uncomfortable with the endeavor itself. Like organ donors, people need to be fully informed.

Third and fourth, public discourse is absolutely needed, as people may strongly disapprove of the idea of mixing human tissue or organs with animals. For now, the human-monkey embryos have a short life. But as technology gets better, and based on previous similar experiments with other chimeras, the next step in this venture is to transplant the embryo into a living animal host’s uterus, which could nurture it to grow further.

For now, that’s a red line for human-monkey embryos, and the technology isn’t there yet. But if the surprise of CRISPR babies has taught us anything, it’s that as a society we need to discourage, yet prepare for, a lone wolf who’s willing to step over the line—that is, bringing a part-human, part-animal embryo to term.

“We must begin to think about that possibility,” said Greely and Farahany. With the study, we know that “those future experiments are now at least plausible.”

Image Credit: A human-monkey chimera embryo, photo by Weizhi Ji, Kunming University of Science and Technology Continue reading

Posted in Human Robots

#439141 Protected: 3 Ways to Utilize Artificial ...

There is no excerpt because this is a protected post.

The post Protected: 3 Ways to Utilize Artificial Intelligence for Vehicles appeared first on TFOT. Continue reading

Posted in Human Robots