Tag Archives: human
#437386 Scary A.I. more intelligent than you
GPT-3 (Generative Pre-trained Transformer 3) is an artificial intelligence language generator that uses deep learning to produce human-like text. Its output is often so convincing that it is very difficult to distinguish from a human’s. Many scientists, researchers and engineers (including Stephen …
#439147 Robots Versus Toasters: How The Power of ...
Kate Darling is an expert on human robot interaction, robot ethics, intellectual property, and all sorts of other things at the MIT Media Lab. She’s written several excellent articles for us in the past, and we’re delighted to be able to share this excerpt from her new book, which comes out today. Entitled The New Breed: What Our History with Animals Reveals about Our Future with Robots, Kate’s book is an exploration of how animals can help us understand our robot relationships, and how far that comparison can really be extended. It’s solidly based on well-cited research, including many HRI studies that we’ve written about in the past, but Kate brings everything together and tells us what it all could mean as robots continue to integrate themselves into our lives.
The following excerpt is The Power of Movement, a section from the chapter Robots Versus Toasters, which features one of the saddest robot videos I’ve ever seen, even after nearly a decade. Enjoy!
When the first black-and-white motion pictures came to the screen, an 1896 film showing in a Paris cinema is said to have caused a stampede: the first-time moviegoers, watching a giant train barrel toward them, jumped out of their seats and ran away from the screen in panic. According to film scholar Martin Loiperdinger, this story is no more than an urban legend. But this new media format, “moving pictures,” proved to be both immersive and compelling, and was here to stay. Thanks to a baked-in ability to interpret motion, we’re fascinated even by very simple animation because it tells stories we intuitively understand.
In a seminal study from the 1940s, psychologists Fritz Heider and Marianne Simmel showed participants a black-and-white movie of simple, geometrical shapes moving around on a screen. When instructed to describe what they were seeing, nearly every single one of their participants interpreted the shapes to be moving around with agency and purpose. They described the behavior of the triangles and circle the way we describe people’s behavior, by assuming intent and motives. Many of them went so far as to create a complex narrative around the moving shapes. According to one participant: “A man has planned to meet a girl and the girl comes along with another man. [ . . . ] The girl gets worried and races from one corner to the other in the far part of the room. [ . . . ] The girl gets out of the room in a sudden dash just as man number two gets the door open. The two chase around the outside of the room together, followed by man number one. But they finally elude him and get away. The first man goes back and tries to open his door, but he is so blinded by rage and frustration that he can not open it.”
What brought the shapes to life for Heider and Simmel’s participants was solely their movement. We can interpret certain movement in other entities as “worried,” “frustrated,” or “blinded by rage,” even when the “other” is a simple black triangle moving across a white background. A number of studies document how much information we can extract from very basic cues, getting us to assign emotions and gender identity to things as simple as moving points of light. And while we might not run away from a train on a screen, we’re still able to interpret the movement and may even get a little thrill from watching the train in a more modern 3D screening. (There are certainly some embarrassing videos of people—maybe even of me—when we first played games wearing virtual reality headsets.)
Many scientists believe that autonomous movement activates our “life detector.” Because we’ve evolved needing to quickly identify natural predators, our brains are on constant lookout for moving agents. In fact, our perception is so attuned to movement that we separate things into objects and agents, even if we’re looking at a still image. Researchers Joshua New, Leda Cosmides, and John Tooby showed people photos of a variety of scenes, like a nature landscape, a city scene, or an office desk. Then, they switched in an identical image with one addition; for example, a bird, a coffee mug, an elephant, a silo, or a vehicle. They measured how quickly the participants could identify the new appearance. People were substantially quicker and more accurate at detecting the animals compared to all of the other categories, including larger objects and vehicles.
The researchers also found evidence that animal detection activated an entirely different region of people’s brains. Research like this suggests that a specific part of our brain is constantly monitoring for lifelike animal movement. This study in particular also suggests that our ability to separate animals and objects is more likely to be driven by deep ancestral priorities than our own life experiences. Even though we have been living with cars for our whole lives, and they are now more dangerous to us than bears or tigers, we’re still much quicker to detect the presence of an animal.
The biological hardwiring that detects and interprets life in autonomous agent movement is even stronger when it has a body and is in the room with us. John Harris and Ehud Sharlin at the University of Calgary tested this projection with a moving stick. They took a long piece of wood, about the size of a twirler’s baton, and attached one end to a base with motors and eight degrees of freedom. This allowed the researchers to control the stick remotely and wave it around: fast, slow, doing figure eights, etc. They asked the experiment participants to spend some time alone in a room with the moving stick. Then, they had the participants describe their experience.
Only two of the thirty participants described the stick’s movement in technical terms. The others told the researchers that the stick was bowing or otherwise greeting them, claimed it was aggressive and trying to attack them, described it as pensive, “hiding something,” or even “purring happily.” At least ten people said the stick was “dancing.” One woman told the stick to stop pointing at her.
If people can imbue a moving stick with agency, what happens when they meet R2-D2? Given our social tendencies and ingrained responses to lifelike movement in our physical space, it’s fairly unsurprising that people perceive robots as being alive. Robots are physical objects in our space that often move in a way that seems (to our lizard brains) to have agency. A lot of the time, we don’t perceive robots as objects—to us, they are agents. And, while we may enjoy the concept of pet rocks, we love to anthropomorphize agent behavior even more.
We already have a slew of interesting research in this area. For example, people think a robot that’s present in a room with them is more enjoyable than the same robot on a screen and will follow its gaze, mimic its behavior, and be more willing to take the physical robot’s advice. We speak more to embodied robots, smile more, and are more likely to want to interact with them again. People are more willing to obey orders from a physical robot than a computer. When left alone in a room and given the opportunity to cheat on a game, people cheat less when a robot is with them. And children learn more from working with a robot compared to the same character on a screen. We are better at recognizing a robot’s emotional cues and empathize more with physical robots. When researchers told children to put a robot in a closet (while the robot protested and said it was afraid of the dark), many of the kids were hesitant.
Even adults will hesitate to switch off or hit a robot, especially when they perceive it as intelligent. People are polite to robots and try to help them. People greet robots even if no greeting is required and are friendlier if a robot greets them first. People reciprocate when robots help them. And, like the socially inept [software office assistant] Clippy, when people don’t like a robot, they will call it names. What’s noteworthy in the context of our human comparison is that the robots don’t need to look anything like humans for this to happen. In fact, even very simple robots, when they move around with “purpose,” elicit an inordinate amount of projection from the humans they encounter. Take robot vacuum cleaners. By 2004, a million of them had been deployed and were sweeping through people’s homes, vacuuming dirt, entertaining cats, and occasionally getting stuck in shag rugs. The first versions of the disc-shaped devices had sensors to detect things like steep drop-offs, but for the most part they just bumbled around randomly, changing direction whenever they hit a wall or a chair.
iRobot, the company that makes the most popular version (the Roomba), soon noticed that their customers would send their vacuum cleaners in for repair with names (Dustin Bieber being one of my favorites). Some Roomba owners would talk about their robot as though it were a pet. People who sent in malfunctioning devices would complain about the company’s generous policy of offering them a brand-new replacement, demanding that the company instead fix “Meryl Sweep” and send her back. The fact that the Roombas roamed around on their own lent them a social presence that people’s traditional, handheld vacuum cleaners lacked. People decorated them, talked to them, and felt bad for them when they got tangled in the curtains.
Tech journalists reported on the Roomba’s effect, calling robovacs “the new pet craze.” A 2007 study found that many people had a social relationship with their Roombas and would describe them in terms that evoked people or animals. Today, over 80 percent of Roombas have names. I don’t have access to naming statistics for the handheld Dyson vacuum cleaner, but I’m pretty sure the number is lower.
Robots are entering our lives in many shapes and forms, and even some of the simplest, most mechanical robots can prompt a visceral response. And robot design isn’t likely to shift away from evoking our biological reactions, especially because some robots are deliberately designed to mimic lifelike movement.
Excerpted from THE NEW BREED: What Our History with Animals Reveals about Our Future with Robots by Kate Darling. Published by Henry Holt and Company. Copyright © 2021 by Kate Darling. All rights reserved.
Kate’s book is available today from Annie Bloom’s Books in SW Portland, Oregon. It’s also available from Powell’s Books, and if you don’t have the good fortune of living in Portland, you can find it in both print and digital formats pretty much everywhere else books are sold.
As for Robovie, the claustrophobic robot that kept getting shoved in a closet, we recently checked in with Peter Kahn, the researcher who created the experiment nearly a decade ago, to make sure that the poor robot ended up okay. “Robovie is doing well,” Kahn told us. “He visited my lab on 2-3 other occasions and participated in other experiments. Now he’s back in Japan with the person who helped make him, and who cares a lot about him.” That person is Takayuki Kanda at ATR, who we’re happy to report is still working with Robovie in the context of human-robot interaction. Thanks Robovie!
#439142 Scientists Grew Human Cells in Monkey ...
Few things in science freak people out more than human-animal hybrids. Named chimeras, after the mythical Greek creature that’s an amalgam of different beasts, these part-human, part-animal embryos have come onto the scene to transform our understanding of what makes us “human.”
If theoretically grown to term, chimeras would be an endless resource for replacement human organs. They’re a window into the very early stages of human development, allowing scientists to probe the mystery of the first dozen days after sperm-meets-egg. They could help map out how our brains build their early architecture, potentially solving the age-old question of why our neural networks are so powerful—and how their wiring could go wrong.
The trouble with all of this? The embryos are part human. The idea of human hearts or livers growing inside an animal may be icky, but tolerable, to some. Human neurons crafting a brain inside a hybrid embryo—potentially leading to consciousness—is a horror scenario. For years, scientists have flirted with ethical boundaries by mixing human cells with those of rats and pigs, which are relatively far from us in evolutionary terms, to reduce the chance of a mentally “humanized” chimera.
This week, scientists crossed a line.
In a study led by Dr. Juan Carlos Izpisua Belmonte, a prominent stem cell biologist at the Salk Institute for Biological Studies, the team reported the first vetted case of a human-monkey hybrid embryo.
Reflexive shudder aside, the study is a technological tour-de-force. The scientists were able to watch the hybrid embryo develop for 20 days outside the womb, far longer than any previous attempts. Putting the timeline into context, it’s about 20 percent of a monkey’s gestation period.
Although only 3 out of over 100 attempts survived past that point, the viable embryos contained a shockingly high amount of human cells—about one-third of the entire cell population. If able to further develop, those human contributions could, in theory, substantially form the biological architecture of the body, and perhaps the mind, of a human-monkey fetus.
I can’t stress this enough: the technology isn’t there yet to bring Planet of the Apes to life. Strict regulations also prohibit growing chimera embryos past the first few weeks. It’s telling that Izpisua Belmonte collaborated with Chinese labs, which have far fewer ethical regulations than the US.
But the line’s been crossed, and there’s no going back. Here’s what they did, why they did it, and reasons to justify—or limit—similar tests going forward.
What They Did
The way the team made the human-monkey embryo is similar to previous attempts at half-human chimeras.
Here’s how it goes. They used de-programmed, or “reverted,” human stem cells, called induced pluripotent stem cells (iPSCs). These cells often start from skin cells, and are chemically treated to revert to the stem cell stage, gaining back the superpower to grow into almost any type of cell: heart, lung, brain…you get the idea. The next step is preparing the monkey component, a fertilized and healthy monkey egg that develops for six days in a Petri dish. By this point, the embryo is ready for implantation into the uterus, which kicks off the whole development process.
This is where the chimera jab comes in. Using a tiny needle, the team injected each embryo with 25 human cells, and babied them for another day. “Until recently the experiment would have ended there,” wrote Drs. Hank Greely and Nita Farahany, two prominent bioethicists who wrote an accompanying expert take, but were not involved in the study.
But the team took it way further. Using a biological trick, the embryos attached to the Petri dish as they would to a womb. The human cells survived after the artificial “implantation,” and—surprisingly—tended to physically group together, away from monkey cells.
The weird segregation led the team to further explore why human cells don’t play nice with those of another species. Using a big data approach, the team scouted how genes in human cells talked to their monkey hosts. What’s surprising, the team said, is that adding human cells into the monkey embryos fundamentally changed both. Rather than each behaving as they would have in their normal environment, the two species of cells influenced each other, even when physically separated. The human cells, for example, tweaked the biochemical messengers that monkey cells—and the “goop” surrounding those cells—use to talk to one another.
In other words, in contrast to oil and water, human and monkey cells seemed to communicate and change the other’s biology without needing too much outside whisking. Human iPSCs began to behave more like monkey cells, whereas monkey embryos became slightly more human.
Ok, But Why?
The main reason the team went for a monkey hybrid, rather than the “safer” pig or rat alternative, is our similarity to monkeys. As the authors argue, being genetically “closer” in evolutionary terms makes it easier to form chimeras. In turn, the resulting embryos also make it possible to study early human development and to build human tissues and organs for replacement.
“Historically, the generation of human-animal chimeras has suffered from low efficiency,” said Izpisua Belmonte. “Generation of a chimera between human and non-human primate, a species more closely related to humans along the evolutionary timeline than all previously used species, will allow us to gain better insight into whether there are evolutionarily imposed barriers to chimera generation and if there are any means by which we can overcome them.”
A Controversial Future
That argument isn’t convincing to some.
In terms of organ replacement, monkeys are very expensive (and cognitively advanced) donors compared to pigs, which have been the primary research hosts for growing human organs. Though pigs are difficult to genetically engineer to fit human needs, they are more socially acceptable as organ “donors”—many of us don’t bat an eye at eating ham or bacon—whereas the concept of extracting humanoid tissue from monkeys is extremely uncomfortable.
A human-monkey hybrid could be especially helpful for studying neurodevelopment, but that directly butts heads with the “human cells in animal brains” problem. Even when such an embryo is not brought to term, it’s hard to imagine anyone who’s ready to study the brain of a potentially viable animal fetus with human cells wired into its neural networks.
There’s also the “sledgehammer” aspect of the study that makes scientists cringe. “Direct transplantation of cells into particular regions, or organs [of an animal], allows researchers to predict where and how the cells might integrate,” said Greely and Farahany. This means they might be able to predict if the injected human cells end up in a “boring” area, like the gallbladder, or a more “sensitive” area, like the brain. But with the current technique, we’re unsure where the human cells could eventually migrate to and grow.
Yet despite the ick factor, human-monkey embryos circumvent the ethical quandaries around using aborted tissue for research. These hybrid embryos may present the closest models to early human development that we can get without dipping into the abortion debate.
In their commentary, Greely and Farahany laid out four main aspects to consider before moving ahead with the controversial field. First and foremost is animal welfare, which is “especially true for non-human primates,” as they’re mentally close to us. There’s also the need for consent from human donors, which form the basis of the injected iPSCs, as some may be uncomfortable with the endeavor itself. Like organ donors, people need to be fully informed.
Third and fourth, public discourse is absolutely needed, as people may strongly disapprove of the idea of mixing human tissue or organs with animals. For now, the human-monkey embryos have a short life. But as technology gets better, and based on previous similar experiments with other chimeras, the next step in this venture is to transplant the embryo into a living animal host’s uterus, which could nurture it to grow further.
For now, that’s a red line for human-monkey embryos, and the technology isn’t there yet. But if the surprise of CRISPR babies has taught us anything, it’s that as a society we need to discourage, yet prepare for, a lone wolf who’s willing to step over the line—that is, bringing a part-human, part-animal embryo to term.
“We must begin to think about that possibility,” said Greely and Farahany. With the study, we know that “those future experiments are now at least plausible.”
Image Credit: A human-monkey chimera embryo, photo by Weizhi Ji, Kunming University of Science and Technology
#439119 No Human Can Match This High-Speed ...
Today at ProMat, a company called Pickle Robots is announcing Dill, a robot that can unload boxes from the back of a trailer at places like ecommerce fulfillment warehouses at very high speeds. With a peak box unloading rate of 1800 boxes per hour and a payload of up to 25 kg, Dill can substantially outperform even an expert human, and it can keep going pretty much forever as long as you have it plugged into the wall.
Pickle Robots says that Dill’s approach to the box unloading task is unique in a couple of ways. First, it can handle messy trailers filled with a jumble of boxes of different shapes, colors, sizes, and weights. And second, from the get-go it’s intended to work under human supervision, relying on people to step in and handle edge cases.
Pickle’s “Dill” robot is based around a Kuka arm with up to 30 kg of payload. It uses two Intel L515s (Lidar-based RGB-D cameras) for box detection. The system is mounted on a wheeled base, and after getting positioned at the back of a trailer by a human operator, it’ll crawl forward by itself as it picks its way into the trailer. We’re told that the rate at which the robot can shift boxes averages 1600 per hour, with a peak speed closer to 1800 boxes per hour. A single human in top form can move about 800 boxes per hour, so Dill is very, very fast. In the video, you can see the robot slow down on some packages, and Pickle CEO Andrew Meyer says that’s because “we probably have a tenuous grasp on that package. As we continue to improve the gripper, we will be able to keep the speed up on more cycles.”
While the video shows Dill operating at speed autonomously, the company says it’s designed to function under human supervision. From the press release: “To maintain these speeds, Dill needs people to supervise the operation and lend an occasional helping hand, stepping in every so often to pick up any dropped packages and handle irregular items.” Typically, Meyer says, that means one person for every five robots, depending on the use case, though even a single robot needs someone keeping an eye on it. The supervisor isn’t occupied full-time and can do other work while the robot runs, but the longer a human takes to respond to an issue, the lower the robot’s effective speed. On particularly complex loads, the company says, a human will need to help the robot about once every five minutes. Even in situations with lots of hard-to-handle boxes and relatively low efficiency, Meyer says users can expect speeds exceeding 1,000 boxes per hour.
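The supervision economics Meyer describes can be sketched with a simple duty-cycle model. The 1,600 boxes-per-hour rate and the roughly five-minute assist interval come from the article; the 30-second assist duration and the model itself are illustrative assumptions, not figures from Pickle.

```python
# Rough effective-throughput model for a supervised unloading robot.
# Rates and intervals from the article; assist duration is a guess.

def effective_rate(autonomous_rate_per_hr, assist_interval_s, assist_duration_s):
    """Average boxes/hour when the robot periodically pauses for a human assist.

    autonomous_rate_per_hr: boxes/hour while running unassisted
    assist_interval_s: seconds of autonomous work between assists
    assist_duration_s: seconds the robot waits for the human each time
    """
    duty_cycle = assist_interval_s / (assist_interval_s + assist_duration_s)
    return autonomous_rate_per_hr * duty_cycle

# 1600 boxes/hr autonomously, an assist every ~5 minutes, 30 s per assist
print(effective_rate(1600, assist_interval_s=300, assist_duration_s=30))
```

Even with these pessimistic assumptions the effective rate stays well above 1,400 boxes per hour, consistent with the 1,000-plus floor Meyer quotes for hard loads.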
Photo: Pickle Robots
Pickle Robots’ gripper, which includes a high contact area suction system and a retractable plate to help the robot quickly flip boxes.
From Pickle Robots’ video, it’s fairly obvious that the comparison that Pickle wants you to make is to Boston Dynamics’ Stretch robot, which has a peak box moving rate of 800 boxes per hour. Yes, Pickle’s robot is twice as fast. But it’s also a unitasker, designed to unload boxes from trucks, and that’s it. Focusing on a very specific problem is a good approach for robots, because then you can design a robot that does an excellent job of solving that problem, which is what Pickle has done. Boston Dynamics has chosen a different route with Stretch, which is to build a robot that has the potential to do many other warehouse tasks, although not nearly as optimally.
The other big difference between Boston Dynamics and Pickle is, of course, that Boston Dynamics is focusing on autonomy. Meanwhile, Pickle, Meyer says in a press release, “resisted the fool’s errand of trying to create a system that could work entirely unsupervised.” Personally, I disagree that trying to create a system that could work entirely unsupervised is a fool’s errand. Approaching practical commercial robotics (in any context) from a perspective of requiring complete unsupervised autonomy is generally not practical right now outside of highly structured environments. But many companies do have goals that include unsupervised operation while still acknowledging that occasionally their robots will need a human to step in and help. In fact, these companies are (generally) doing exactly what Pickle is doing in practice: they’re deploying robots with the goal of fully unsupervised autonomy, while keeping humans available as they work their way towards that goal. The difference, perhaps, is philosophical—some companies see unsupervised operation as the future of robotics in these specific contexts, while Pickle does not. We asked Meyer about why this is. He replied:
Some problems are hardware-related and not likely to yield an automated solution anytime soon. For example, the gripper is physically incapable of grasping some objects, like car tires, no matter what intelligence the robot has. A part might start to wear out, like a spring on the gripper, and the gripper can behave unpredictably. Things can be too heavy. A sensor might get knocked out of place, dust might get on the camera lens. Or an already damaged package falls apart when you pick it up, and dumps its contents on the ground.
Other problems can go away over time as the algorithms learn and the engineers innovate in small ways. For example, learning not to pick packages that will cause a bunch more to fall down, learning to approach boxes in the corner from the side, or—and this was a real issue in production for a couple days—learning to avoid picking directly on labels where they might peel off from suction.
Machine learning algorithms, on both the perception and action sides of the story, are critical ingredients for making any of this work. However, even with them your engineering team still has to do a lot of problem solving wherever the AI is struggling. At some point you run out of engineering resources to solve all these problems in the long tail. When we talk about problems that require AI algorithms as capable as people are, we mean ones where the target on the reliability curve (99.99999% in the case of self driving, for example) is out of reach in this way. I think the big lesson from self-driving cars is that chasing that long tail of edge cases is really, really hard. We realized that in the loading dock, you can still deliver tremendous value to the customer even if you assume you can only handle 98% of the cases.
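Meyer's 98 percent figure translates directly into supervisor workload. The numbers below come from the article; the back-of-envelope calculation itself is my sketch, not something Pickle has published.

```python
# Convert a case-coverage rate into expected human assists per hour.
# 1,600 boxes/hour and 98% coverage are figures from the article.

def interventions_per_hour(boxes_per_hour, coverage_pct):
    """Expected human assists per hour, given the percent of cases
    the robot handles autonomously."""
    return boxes_per_hour * (100 - coverage_pct) / 100

assists = interventions_per_hour(1600, 98)
print(assists, "assists per hour, i.e. one every", 3600 / assists, "seconds")
```

At full speed, 98 percent coverage still implies an assist roughly every two minutes, which suggests the one-person-per-five-robots ratio assumes most loads are considerably easier than this worst case.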
These long-tail problems are everywhere in robotics, but again, some people believe that reliability sufficient for unsupervised operation (at least in some specific contexts) is achievable in the near term, while others don’t. In Pickle’s case, emphasizing human supervision means the company may be able to deploy faster, more reliably, at lower cost, and with higher performance; we’ll just have to see how long it takes for other companies to come through with robots that can do the same tasks without human supervision.
Photo: Pickle Robots
Pickle robots is also working on other high speed package sorting systems.
We asked Meyer how much Dill costs, and to our surprise, he gave us a candid answer: Depending on the configuration, the system can cost anywhere from $50-100k to deploy and about that same amount per year to operate. Meyer points out that you can’t really compare the robot to a human (or humans) simply on speed, since with the robot, you don’t have to worry about injuries or improper sorting of packages or training or turnover. While Pickle is currently working on several other configurations of robots for package handling, this particular truck unloading configuration will be shipping to customers next year.
#439110 Robotic Exoskeletons Could One Day Walk ...
Engineers, using artificial intelligence and wearable cameras, now aim to help robotic exoskeletons walk by themselves.
Increasingly, researchers around the world are developing lower-body exoskeletons to help people walk. These are essentially walking robots users can strap to their legs to help them move.
One problem with such exoskeletons: They often depend on manual controls to switch from one mode of locomotion to another, such as from sitting to standing, or standing to walking, or walking on the ground to walking up or down stairs. Relying on joysticks or smartphone apps every time you want to switch the way you want to move can prove awkward and mentally taxing, says Brokoslaw Laschowski, a robotics researcher at the University of Waterloo in Canada.
Scientists are working on automated ways to help exoskeletons recognize when to switch locomotion modes — for instance, using sensors attached to legs that can detect bioelectric signals sent from your brain to your muscles telling them to move. However, this approach comes with a number of challenges, such as how skin conductivity can change as a person’s skin gets sweatier or dries off.
Now several research groups are experimenting with a new approach: fitting exoskeleton users with wearable cameras to provide the machines with vision data that will let them operate autonomously. Artificial intelligence (AI) software can analyze this data to recognize stairs, doors, and other features of the surrounding environment and calculate how best to respond.
Laschowski leads the ExoNet project, the first open-source database of high-resolution wearable camera images of human locomotion scenarios. It holds more than 5.6 million images of indoor and outdoor real-world walking environments. The team used this data to train deep-learning algorithms; their convolutional neural networks can already automatically recognize different walking environments with 73 percent accuracy “despite the large variance in different surfaces and objects sensed by the wearable camera,” Laschowski notes.
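The article doesn't describe ExoNet's actual network, so as a rough illustration of the kind of pipeline a convolutional classifier for walking environments implements, here is a toy convolution-plus-readout model. The class names are hypothetical placeholders and all weights are random; a real system would train these weights on labeled images.

```python
import numpy as np

# Toy sketch of a convolutional environment classifier: one 3x3 filter,
# global max-pooling, and a linear readout over hypothetical terrain classes.
# Not the ExoNet architecture; weights are untrained random placeholders.

CLASSES = ["level_ground", "incline_stairs", "decline_stairs", "door"]

rng = np.random.default_rng(0)
kernel = rng.normal(size=(3, 3))            # one 3x3 convolution filter
readout = rng.normal(size=(len(CLASSES),))  # per-class weight on pooled feature

def conv2d(img, k):
    """Valid (no-padding) 2-D cross-correlation of img with kernel k."""
    h, w = img.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def classify(img):
    feat = conv2d(img, kernel).max()        # global max-pool to one feature
    scores = readout * feat                 # linear readout per class
    return CLASSES[int(np.argmax(scores))]

frame = rng.random((32, 32))                # stand-in for a wearable-camera frame
print(classify(frame))
```

A trained network would stack many such filters and learn them from the 5.6 million labeled ExoNet frames, but the detect-pool-score structure is the same.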
According to Laschowski, a potential limitation of their work is its reliance on conventional 2-D images, whereas depth cameras could also capture potentially useful distance data. He and his collaborators ultimately chose not to rely on depth cameras for a number of reasons, including the fact that the accuracy of depth measurements typically degrades in outdoor lighting and with increasing distance, he says.
In similar work, researchers in North Carolina had volunteers with cameras either mounted on their eyeglasses or strapped onto their knees walk through a variety of indoor and outdoor settings to capture the kind of image data exoskeletons might use to see the world around them. The aim? “To automate motion,” says Edgar Lobaton, an electrical engineering researcher at North Carolina State University. He says they are focusing on how AI software might reduce uncertainty due to factors such as motion blur or overexposed images “to ensure safe operation. We want to ensure that we can really rely on the vision and AI portion before integrating it into the hardware.”
In the future, Laschowski and his colleagues will focus on improving the accuracy of their environmental analysis software while keeping its computational and memory requirements low, which is important for onboard, real-time operation on robotic exoskeletons. Lobaton and his team also seek to account for uncertainty introduced into their visual systems by movement.
Ultimately, the ExoNet researchers want to explore how AI software can transmit commands to exoskeletons so they can perform tasks such as climbing stairs or avoiding obstacles based on a system’s analysis of a user's current movements and the upcoming terrain. With autonomous cars as inspiration, they are seeking to develop autonomous exoskeletons that can handle the walking task without human input, Laschowski says.
However, Laschowski adds, “User safety is of the utmost importance, especially considering that we're working with individuals with mobility impairments,” resulting perhaps from advanced age or physical disabilities.
“The exoskeleton user will always have the ability to override the system should the classification algorithm or controller make a wrong decision.”
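The override behavior Laschowski describes can be sketched as a small decision layer: the terrain classifier proposes a locomotion mode, but the user's explicit command always wins, and a low-confidence prediction leaves the current mode unchanged. The mode names, class labels, and confidence threshold here are illustrative assumptions, not details from the ExoNet project.

```python
# Sketch of a mode-switching decision layer with a user override.
# All names and the threshold are illustrative, not from ExoNet.

TERRAIN_TO_MODE = {
    "level_ground": "walk",
    "incline_stairs": "stair_ascent",
    "decline_stairs": "stair_descent",
}

def next_mode(current_mode, terrain, confidence, user_override=None,
              threshold=0.9):
    """Return the exoskeleton's next locomotion mode.

    The user's explicit choice always wins; otherwise switch only when
    the classifier is confident, else stay in the current mode.
    """
    if user_override is not None:
        return user_override
    if confidence >= threshold:
        return TERRAIN_TO_MODE.get(terrain, current_mode)
    return current_mode

print(next_mode("walk", "incline_stairs", 0.95))           # confident: switch
print(next_mode("walk", "incline_stairs", 0.60))           # unsure: hold mode
print(next_mode("walk", "incline_stairs", 0.95, "stand"))  # user always wins
```

Holding the current mode on low-confidence predictions is one simple way to bias the system toward safety, which matters given the 73 percent accuracy reported so far.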