Tag Archives: it

#439527 It’s (Still) Really Hard for Robots to ...

Every time we think that we’re getting a little bit closer to a household robot, new research comes out showing just how far we have to go. Certainly, we’ve seen lots of progress in specific areas like grasping and semantic understanding and whatnot, but putting it all together into a hardware platform that can actually get stuff done autonomously still seems quite a way off.

In a paper presented at ICRA 2021 this month, researchers from the University of Bremen conducted a “Robot Household Marathon Experiment,” where a PR2 robot was tasked with first setting a table for a simple breakfast and then cleaning up afterwards in order to “investigate and evaluate the scalability and the robustness aspects of mobile manipulation.” While this sort of thing kinda seems like something robots should have figured out, it may not surprise you to learn that it’s actually still a significant challenge.

PR2’s job here is to prepare breakfast by bringing a bowl, a spoon, a cup, a milk box, and a box of cereal to a dining table. After breakfast, the PR2 then has to place washable objects into the dishwasher, put the cereal box back into its storage location, and toss the milk box into the trash. The objects vary in shape and appearance, and the robot is only given symbolic descriptions of object locations (in the fridge, on the counter). It’s a very realistic but also very challenging scenario, which probably explains why it takes the poor PR2 90 minutes to complete it.
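To make the setup a little more concrete, here’s a rough sketch (in Python, purely for illustration) of what a symbolic task description along these lines might look like. The action and location names are made up; the actual system uses its own plan representation, not anything like this.

```python
# Hypothetical symbolic task description. Locations are semantic labels, not
# coordinates: the robot has to resolve "fridge" to an actual pose, open the
# door, find the object, and plan a grasp on its own at run time.
# All names here are illustrative, not the ones used in the paper.
SET_TABLE = [
    {"action": "transport", "object": "bowl",       "source": "counter", "destination": "dining_table"},
    {"action": "transport", "object": "spoon",      "source": "drawer",  "destination": "dining_table"},
    {"action": "transport", "object": "cup",        "source": "counter", "destination": "dining_table"},
    {"action": "transport", "object": "milk_box",   "source": "fridge",  "destination": "dining_table"},
    {"action": "transport", "object": "cereal_box", "source": "shelf",   "destination": "dining_table"},
]

CLEAN_UP = [
    {"action": "transport", "object": "bowl",       "source": "dining_table", "destination": "dishwasher"},
    {"action": "transport", "object": "spoon",      "source": "dining_table", "destination": "dishwasher"},
    {"action": "transport", "object": "cup",        "source": "dining_table", "destination": "dishwasher"},
    {"action": "transport", "object": "cereal_box", "source": "dining_table", "destination": "shelf"},
    {"action": "transport", "object": "milk_box",   "source": "dining_table", "destination": "trash"},
]
```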

First off, kudos to that PR2 for still doing solid robotics research, right? And this research is definitely solid—the fact that all of this stuff works as well as it does (perception, motion planning, grasping, high-level strategizing) is incredibly impressive. Remember, this is 90 minutes of full autonomy doing tasks that are relatively complex in an environment that’s only semi-structured and somewhat, but not overly, robot-optimized. In fact, over five trials, the robot succeeded in the table-setting task all five times. It wasn’t flawless, and the PR2 did have particular trouble grasping tricky objects like the spoon, but the framework that the researchers developed was able to recover from every single failure by tweaking parameters and retrying the failed action. Arguably, failing a lot but also being able to recover a lot is even more useful than not failing at all, if you think long term.
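That tweak-and-retry loop is the core idea worth appreciating here. Below is a bare-bones sketch of what it might look like in code; the real framework’s failure handling is far more sophisticated, and everything named here is hypothetical.

```python
import random

def execute_with_recovery(action, params, max_retries=5):
    """Retry a failed manipulation action with perturbed parameters.

    A minimal illustration of the retry-and-reparameterize idea described
    above, not the paper's actual failure-handling framework. `action` is
    assumed to return a dict with a boolean "success" key.
    """
    for attempt in range(1, max_retries + 1):
        outcome = action(**params)  # e.g., one grasp attempt
        if outcome.get("success"):
            return outcome
        # Nudge continuous parameters (say, a grasp pose offset) and retry.
        params = {
            key: value + random.gauss(0.0, 0.01) if isinstance(value, float) else value
            for key, value in params.items()
        }
    raise RuntimeError(f"Action failed after {max_retries} attempts")
```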

The cleanup task was more difficult for the PR2, and it suffered unrecoverable failures during two of the five trials. The paper describes what happened:

Cleaning the table was more challenging than table setting, due to the use of the dishwasher and the difficulty of sideways grasping objects located far away from the edge of the table. In two out of the five runs we encountered an unrecoverable failure. In one of the runs, due to the instability of the grasping trajectory and the robot not tracking it perfectly, the fingers of the robot ended up pushing the milk away during grasping, which resulted in a very unstable grasp. As a result, the box fell to the ground in the carrying phase. Although during the table setting the robot was able to pick up a toppled over cup and successfully bring it to the table, picking up the milk box from the ground was impossible for the PR2. The other unrecoverable failure was the dishwasher grid getting stuck in PR2’s finger. Another major failure happened when placing the cereal box into its vertical drawer, which was difficult because the robot had to reach very high and approach its joint limits. When the gripper opened, the box fell on a side in the shelf, which resulted in it being crushed when the drawer was closed.

Failure cases including unstably grasping the milk, getting stuck in the dishwasher, and crushing the cereal.
Photos: EASE

While we’re focusing a little bit on the failures here, that’s really just to illustrate the exceptionally challenging edge cases that the robot encountered. Again, I want to emphasize that while the PR2 was not successful all the time, its performance over 90 minutes of fully autonomous operation is still very impressive. And I really appreciate that the researchers committed to an experiment like this, putting their robot into a practical(ish) environment doing practical(ish) tasks under full autonomy over a long(ish) period of time. We often see incremental research headed in this general direction, but it’ll take a lot more work like this before robots are useful enough in the real world to reliably handle those critical breakfast tasks.

“The Robot Household Marathon Experiment,” by Gayane Kazhoyan, Simon Stelter, Franklin Kenghagho Kenfack, Sebastian Koralewski, and Michael Beetz from the CRC EASE at the Institute for Artificial Intelligence, University of Bremen, Germany, was presented at ICRA 2021.

Posted in Human Robots

#437386 Scary A.I. more intelligent than you

GPT-3 (Generative Pre-trained Transformer 3) is an artificial intelligence language generator that uses deep learning to produce human-like text. Its output is of such high quality that it can be very difficult to distinguish from text written by a human. Many scientists, researchers and engineers (including Stephen …

Posted in Human Robots

#439142 Scientists Grew Human Cells in Monkey ...

Few things in science freak people out more than human-animal hybrids. Named chimeras, after the mythical Greek creature that’s an amalgam of different beasts, these part-human, part-animal embryos have come onto the scene to transform our understanding of what makes us “human.”

If theoretically grown to term, chimeras would be an endless resource for replacement human organs. They’re a window into the very early stages of human development, allowing scientists to probe the mystery of the first dozen days after sperm-meets-egg. They could help map out how our brains build their early architecture, potentially solving the age-old question of why our neural networks are so powerful—and how their wiring could go wrong.

The trouble with all of this? The embryos are part human. The idea of human hearts or livers growing inside an animal may be icky, but tolerable, to some. Human neurons crafting a brain inside a hybrid embryo—potentially leading to consciousness—is a horror scenario. For years, scientists have flirted with ethical boundaries by mixing human cells with those of rats and pigs, which are relatively far from us in evolutionary terms, to reduce the chance of a mentally “humanized” chimera.

This week, scientists crossed a line.

In a study led by Dr. Juan Carlos Izpisua Belmonte, a prominent stem cell biologist at the Salk Institute for Biological Studies, the team reported the first vetted case of a human-monkey hybrid embryo.

Reflexive shudder aside, the study is a technological tour-de-force. The scientists were able to watch the hybrid embryo develop for 20 days outside the womb, far longer than any previous attempts. Putting the timeline into context, it’s about 20 percent of a monkey’s gestation period.

Although only 3 out of over 100 attempts survived past that point, the viable embryos contained a shockingly high amount of human cells—about one-third of the entire cell population. If able to further develop, those human contributions could, in theory, substantially form the biological architecture of the body, and perhaps the mind, of a human-monkey fetus.

I can’t stress this enough: the technology isn’t there yet to bring Planet of the Apes to life. Strict regulations also prohibit growing chimera embryos past the first few weeks. It’s telling that Izpisua Belmonte collaborated with Chinese labs, which have far fewer ethical regulations than the US.

But the line’s been crossed, and there’s no going back. Here’s what they did, why they did it, and reasons to justify—or limit—similar tests going forward.

What They Did
The way the team made the human-monkey embryo is similar to previous attempts at half-human chimeras.

Here’s how it goes. They used de-programmed, or “reverted,” human stem cells, called induced pluripotent stem cells (iPSCs). These cells often start from skin cells, and are chemically treated to revert to the stem cell stage, gaining back the superpower to grow into almost any type of cell: heart, lung, brain…you get the idea. The next step is preparing the monkey component, a fertilized and healthy monkey egg that develops for six days in a Petri dish. By this point, the embryo is ready for implantation into the uterus, which kicks off the whole development process.

This is where the chimera jab comes in. Using a tiny needle, the team injected each embryo with 25 human cells, and babied them for another day. “Until recently the experiment would have ended there,” wrote Drs. Hank Greely and Nita Farahany, two prominent bioethicists who wrote an accompanying expert take, but were not involved in the study.

But the team took it way further. Using a biological trick, the embryos attached to the Petri dish as they would to a womb. The human cells survived after the artificial “implantation,” and—surprisingly—tended to physically group together, away from monkey cells.

The weird segregation led the team to further explore why human cells don’t play nice with those of another species. Using a big data approach, the team scouted how genes in human cells talked to their monkey hosts. What’s surprising, the team said, is that adding human cells into the monkey embryos fundamentally changed both. Rather than each behaving as they would have in their normal environment, the two species of cells influenced each other, even when physically separated. The human cells, for example, tweaked the biochemical messengers that monkey cells—and the “goop” surrounding those cells—use to talk to one another.

In other words, in contrast to oil and water, human and monkey cells seemed to communicate and change the other’s biology without needing too much outside whisking. Human iPSCs began to behave more like monkey cells, whereas monkey embryos became slightly more human.
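For the curious, here’s a heavily simplified sketch of what a “big data” comparison like this can look like in practice, using the open-source scanpy library for single-cell analysis. The file name, condition labels, and method choice are all assumptions for illustration; the study’s actual transcriptomic pipeline was considerably more involved.

```python
import scanpy as sc

# Hypothetical single-cell RNA-seq data: human cells from chimeric embryos
# plus human cells grown alone as controls, tagged with a "condition" label.
adata = sc.read_h5ad("chimera_vs_control_human_cells.h5ad")

# Standard normalization before comparing expression across conditions.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Which genes do human cells turn up or down when surrounded by monkey cells?
sc.tl.rank_genes_groups(
    adata,
    groupby="condition",
    groups=["in_chimera"],
    reference="control",
    method="wilcoxon",
)
print(sc.get.rank_genes_groups_df(adata, group="in_chimera").head(20))
```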

Ok, But Why?
The main reason the team went for a monkey hybrid, rather than the “safer” pig or rat alternative, was our similarity to monkeys. As the authors argue, being genetically “closer” in evolutionary terms makes it easier to form chimeras. In turn, the resulting embryos also make it possible to study early human development and build human tissues and organs for replacement.

“Historically, the generation of human-animal chimeras has suffered from low efficiency,” said Izpisua Belmonte. “Generation of a chimera between human and non-human primate, a species more closely related to humans along the evolutionary timeline than all previously used species, will allow us to gain better insight into whether there are evolutionarily imposed barriers to chimera generation and if there are any means by which we can overcome them.”

A Controversial Future
That argument isn’t convincing to some.

In terms of organ replacement, monkeys are very expensive (and cognitively advanced) donors compared to pigs, the latter of which have been the primary research host for growing human organs. While difficult to genetically engineer to fit human needs, pigs are more socially acceptable as organ “donors”—many of us don’t bat an eye at eating ham or bacon—whereas the concept of extracting humanoid tissue from monkeys is extremely uncomfortable.

A human-monkey hybrid could be especially helpful for studying neurodevelopment, but that directly butts heads with the “human cells in animal brains” problem. Even when such an embryo is not brought to term, it’s hard to imagine anyone who’s ready to study the brain of a potentially viable animal fetus with human cells wired into its neural networks.

There’s also the “sledgehammer” aspect of the study that makes scientists cringe. “Direct transplantation of cells into particular regions, or organs [of an animal], allows researchers to predict where and how the cells might integrate,” said Greely and Farahany. This means they might be able to predict if the injected human cells end up in a “boring” area, like the gallbladder, or a more “sensitive” area, like the brain. But with the current technique, we’re unsure where the human cells could eventually migrate to and grow.

Yet despite the ick factor, human-monkey embryos circumvent the ethical quandaries around using aborted tissue for research. These hybrid embryos may present the closest models to early human development that we can get without dipping into the abortion debate.

In their commentary, Greely and Farahany laid out four main aspects to consider before moving ahead with the controversial field. First and foremost is animal welfare, which is “especially true for non-human primates,” as they’re mentally close to us. There’s also the need for consent from human donors, which form the basis of the injected iPSCs, as some may be uncomfortable with the endeavor itself. Like organ donors, people need to be fully informed.

Third and fourth, public discourse is absolutely needed, as people may strongly disapprove of the idea of mixing human tissue or organs with animals. For now, the human-monkey embryos have a short life. But as technology gets better, and based on previous similar experiments with other chimeras, the next step in this venture is to transplant the embryo into a living animal host’s uterus, which could nurture it to grow further.

For now, that’s a red line for human-monkey embryos, and the technology isn’t there yet. But if the surprise of CRISPR babies has taught us anything, it’s that as a society we need to discourage, yet prepare for, a lone wolf who’s willing to step over the line—that is, bringing a part-human, part-animal embryo to term.

“We must begin to think about that possibility,” said Greely and Farahany. With the study, we know that “those future experiments are now at least plausible.”

Image Credit: A human-monkey chimera embryo, photo by Weizhi Ji, Kunming University of Science and Technology

Posted in Human Robots

#439110 Robotic Exoskeletons Could One Day Walk ...

Engineers, using artificial intelligence and wearable cameras, now aim to help robotic exoskeletons walk by themselves.

Increasingly, researchers around the world are developing lower-body exoskeletons to help people walk. These are essentially walking robots users can strap to their legs to help them move.

One problem with such exoskeletons: They often depend on manual controls to switch from one mode of locomotion to another, such as from sitting to standing, or standing to walking, or walking on the ground to walking up or down stairs. Relying on joysticks or smartphone apps every time you want to change the way you move can prove awkward and mentally taxing, says Brokoslaw Laschowski, a robotics researcher at the University of Waterloo in Canada.

Scientists are working on automated ways to help exoskeletons recognize when to switch locomotion modes — for instance, using sensors attached to legs that can detect bioelectric signals sent from your brain to your muscles telling them to move. However, this approach comes with a number of challenges, such as how skin conductivity can change as a person’s skin gets sweatier or dries off.

Now several research groups are experimenting with a new approach: fitting exoskeleton users with wearable cameras to provide the machines with vision data that will let them operate autonomously. Artificial intelligence (AI) software can analyze this data to recognize stairs, doors, and other features of the surrounding environment and calculate how best to respond.

Laschowski leads the ExoNet project, the first open-source database of high-resolution wearable camera images of human locomotion scenarios. It holds more than 5.6 million images of indoor and outdoor real-world walking environments. The team used this data to train deep-learning algorithms; their convolutional neural networks can already automatically recognize different walking environments with 73 percent accuracy “despite the large variance in different surfaces and objects sensed by the wearable camera,” Laschowski notes.
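For a sense of how a classifier like this gets trained, here’s a minimal transfer-learning sketch in PyTorch over camera frames sorted into one folder per walking environment. The directory layout, class names, and hyperparameters are illustrative assumptions, not the ExoNet team’s actual setup.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical layout: exonet_frames/train/<class_name>/*.jpg, with classes
# like "level_ground" or "incline_stairs" (illustrative, not ExoNet's labels).
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("exonet_frames/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and retrain the final layer
# to predict walking-environment classes.
model = models.mobilenet_v2(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Linear(model.last_channel, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```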

According to Laschowski, a potential limitation of their work is its reliance on conventional 2-D images, whereas depth cameras could also capture potentially useful distance data. He and his collaborators ultimately chose not to rely on depth cameras for a number of reasons, including the fact that the accuracy of depth measurements typically degrades in outdoor lighting and with increasing distance.

In similar work, researchers in North Carolina had volunteers with cameras either mounted on their eyeglasses or strapped onto their knees walk through a variety of indoor and outdoor settings to capture the kind of image data exoskeletons might use to see the world around them. The aim? “To automate motion,” says Edgar Lobaton, an electrical engineering researcher at North Carolina State University. He says they are focusing on how AI software might reduce uncertainty due to factors such as motion blur or overexposed images “to ensure safe operation. We want to ensure that we can really rely on the vision and AI portion before integrating it into the hardware.”

In the future, Laschowski and his colleagues will focus on improving the accuracy of their environmental analysis software while keeping its computational and memory requirements low, which is important for onboard, real-time operation on robotic exoskeletons. Lobaton and his team also seek to account for uncertainty introduced into their visual systems by movements.

Ultimately, the ExoNet researchers want to explore how AI software can transmit commands to exoskeletons so they can perform tasks such as climbing stairs or avoiding obstacles based on a system’s analysis of a user's current movements and the upcoming terrain. With autonomous cars as inspiration, they are seeking to develop autonomous exoskeletons that can handle the walking task without human input, Laschowski says.
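A toy version of that decision layer might look like the sketch below: map each predicted terrain class to a locomotion mode, switch only when the classifier is confident, and let the user override everything. The class names, confidence threshold, and interface are hypothetical.

```python
from enum import Enum, auto
from typing import Optional

class Mode(Enum):
    STAND = auto()
    LEVEL_WALK = auto()
    STAIR_ASCENT = auto()
    STAIR_DESCENT = auto()

# Hypothetical mapping from vision classes to locomotion modes.
CLASS_TO_MODE = {
    "level_ground": Mode.LEVEL_WALK,
    "incline_stairs": Mode.STAIR_ASCENT,
    "decline_stairs": Mode.STAIR_DESCENT,
}

def next_mode(current: Mode, predicted_class: str, confidence: float,
              user_override: Optional[Mode] = None) -> Mode:
    # A manual command always wins, per the safety requirement the
    # researchers describe below.
    if user_override is not None:
        return user_override
    # Hold the current mode unless the classifier is confident about a switch.
    if confidence < 0.9:
        return current
    return CLASS_TO_MODE.get(predicted_class, current)
```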

However, Laschowski adds, “User safety is of the utmost importance, especially considering that we're working with individuals with mobility impairments,” resulting perhaps from advanced age or physical disabilities. “The exoskeleton user will always have the ability to override the system should the classification algorithm or controller make a wrong decision.”

Posted in Human Robots