Tag Archives: movement
#439384 Using optogenetics to control movement ...
A team of researchers from the University of Toronto and the Lunenfeld-Tanenbaum Research Institute has developed a technique for controlling the movements of a live nematode using laser light. In their paper published in the journal Science Robotics, the group describes their technique. Adriana San-Miguel of North Carolina State University has published a Focus piece in the same journal issue outlining the work done by the team. Continue reading
#439147 Robots Versus Toasters: How The Power of ...
Kate Darling is an expert on human-robot interaction, robot ethics, intellectual property, and all sorts of other things at the MIT Media Lab. She’s written several excellent articles for us in the past, and we’re delighted to be able to share this excerpt from her new book, which comes out today. Entitled The New Breed: What Our History with Animals Reveals about Our Future with Robots, Kate’s book is an exploration of how animals can help us understand our robot relationships, and how far that comparison can really be extended. It’s solidly based on well-cited research, including many HRI studies that we’ve written about in the past, but Kate brings everything together and tells us what it all could mean as robots continue to integrate themselves into our lives.
The following excerpt is The Power of Movement, a section from the chapter Robots Versus Toasters, which features one of the saddest robot videos I’ve ever seen, even after nearly a decade. Enjoy!
When the first black-and-white motion pictures came to the screen, an 1896 film showing in a Paris cinema is said to have caused a stampede: the first-time moviegoers, watching a giant train barrel toward them, jumped out of their seats and ran away from the screen in panic. According to film scholar Martin Loiperdinger, this story is no more than an urban legend. But this new media format, “moving pictures,” proved to be both immersive and compelling, and was here to stay. Thanks to a baked-in ability to interpret motion, we’re fascinated even by very simple animation because it tells stories we intuitively understand.
In a seminal study from the 1940s, psychologists Fritz Heider and Marianne Simmel showed participants a black-and-white movie of simple, geometrical shapes moving around on a screen. When instructed to describe what they were seeing, nearly every single one of their participants interpreted the shapes to be moving around with agency and purpose. They described the behavior of the triangles and circle the way we describe people’s behavior, by assuming intent and motives. Many of them went so far as to create a complex narrative around the moving shapes. According to one participant: “A man has planned to meet a girl and the girl comes along with another man. [ . . . ] The girl gets worried and races from one corner to the other in the far part of the room. [ . . . ] The girl gets out of the room in a sudden dash just as man number two gets the door open. The two chase around the outside of the room together, followed by man number one. But they finally elude him and get away. The first man goes back and tries to open his door, but he is so blinded by rage and frustration that he can not open it.”
What brought the shapes to life for Heider and Simmel’s participants was solely their movement. We can interpret certain movement in other entities as “worried,” “frustrated,” or “blinded by rage,” even when the “other” is a simple black triangle moving across a white background. A number of studies document how much information we can extract from very basic cues, getting us to assign emotions and gender identity to things as simple as moving points of light. And while we might not run away from a train on a screen, we’re still able to interpret the movement and may even get a little thrill from watching the train in a more modern 3D screening. (There are certainly some embarrassing videos of people—maybe even of me—when we first played games wearing virtual reality headsets.)
Many scientists believe that autonomous movement activates our “life detector.” Because we’ve evolved needing to quickly identify natural predators, our brains are on constant lookout for moving agents. In fact, our perception is so attuned to movement that we separate things into objects and agents, even if we’re looking at a still image. Researchers Joshua New, Leda Cosmides, and John Tooby showed people photos of a variety of scenes, like a nature landscape, a city scene, or an office desk. Then, they switched in an identical image with one addition: for example, a bird, a coffee mug, an elephant, a silo, or a vehicle. They measured how quickly participants could spot the new element. People were substantially quicker and more accurate at detecting the animals than anything in the other categories, including larger objects and vehicles.
The researchers also found evidence that animal detection activated an entirely different region of people’s brains. Research like this suggests that a specific part of our brain is constantly monitoring for lifelike animal movement. This study in particular also suggests that our ability to separate animals from objects is driven more by deep ancestral priorities than by our own life experiences. Even though we have been living with cars for our whole lives, and they are now more dangerous to us than bears or tigers, we’re still much quicker to detect the presence of an animal.
The biological hardwiring that detects and interprets life in autonomous agent movement is even stronger when it has a body and is in the room with us. John Harris and Ehud Sharlin at the University of Calgary tested this projection with a moving stick. They took a long piece of wood, about the size of a twirler’s baton, and attached one end to a base with motors and eight degrees of freedom. This allowed the researchers to control the stick remotely and wave it around: fast, slow, doing figure eights, etc. They asked the experiment participants to spend some time alone in a room with the moving stick. Then, they had the participants describe their experience.
Only two of the thirty participants described the stick’s movement in technical terms. The others told the researchers that the stick was bowing or otherwise greeting them, claimed it was aggressive and trying to attack them, described it as pensive, “hiding something,” or even “purring happily.” At least ten people said the stick was “dancing.” One woman told the stick to stop pointing at her.
If people can imbue a moving stick with agency, what happens when they meet R2-D2? Given our social tendencies and ingrained responses to lifelike movement in our physical space, it’s fairly unsurprising that people perceive robots as being alive. Robots are physical objects in our space that often move in a way that seems (to our lizard brains) to have agency. A lot of the time, we don’t perceive robots as objects—to us, they are agents. And, while we may enjoy the concept of pet rocks, we love to anthropomorphize agent behavior even more.
We already have a slew of interesting research in this area. For example, people think a robot that’s present in a room with them is more enjoyable than the same robot on a screen and will follow its gaze, mimic its behavior, and be more willing to take the physical robot’s advice. We speak more to embodied robots, smile more, and are more likely to want to interact with them again. People are more willing to obey orders from a physical robot than a computer. When left alone in a room and given the opportunity to cheat on a game, people cheat less when a robot is with them. And children learn more from working with a robot compared to the same character on a screen. We are better at recognizing a robot’s emotional cues and empathize more with physical robots. When researchers told children to put a robot in a closet (while the robot protested and said it was afraid of the dark), many of the kids were hesitant.
Even adults will hesitate to switch off or hit a robot, especially when they perceive it as intelligent. People are polite to robots and try to help them. People greet robots even if no greeting is required and are friendlier if a robot greets them first. People reciprocate when robots help them. And, like the socially inept [software office assistant] Clippy, when people don’t like a robot, they will call it names. What’s noteworthy in the context of our human comparison is that the robots don’t need to look anything like humans for this to happen. In fact, even very simple robots, when they move around with “purpose,” elicit an inordinate amount of projection from the humans they encounter. Take robot vacuum cleaners. By 2004, a million of them had been deployed and were sweeping through people’s homes, vacuuming dirt, entertaining cats, and occasionally getting stuck in shag rugs. The first versions of the disc-shaped devices had sensors to detect things like steep drop-offs, but for the most part they just bumbled around randomly, changing direction whenever they hit a wall or a chair.
iRobot, the company that makes the most popular version (the Roomba), soon noticed that their customers would send their vacuum cleaners in for repair with names (Dustin Bieber being one of my favorites). Some Roomba owners would talk about their robot as though it were a pet. People who sent in malfunctioning devices would complain about the company’s generous policy of offering them a brand-new replacement, demanding that the company instead fix “Meryl Sweep” and send her back. The fact that the Roombas roamed around on their own lent them a social presence that people’s traditional, handheld vacuum cleaners lacked. People decorated them, talked to them, and felt bad for them when they got tangled in the curtains.
Tech journalists reported on the Roomba’s effect, calling robovacs “the new pet craze.” A 2007 study found that many people had a social relationship with their Roombas and would describe them in terms that evoked people or animals. Today, over 80 percent of Roombas have names. I don’t have access to naming statistics for the handheld Dyson vacuum cleaner, but I’m pretty sure the number is lower.
Robots are entering our lives in many shapes and forms, and even some of the most simple or mechanical robots can prompt a visceral response. And the design of robots isn’t likely to shift away from evoking our biological reactions—especially because some robots are designed to mimic lifelike movement on purpose.
Excerpted from THE NEW BREED: What Our History with Animals Reveals about Our Future with Robots by Kate Darling. Published by Henry Holt and Company. Copyright © 2021 by Kate Darling. All rights reserved.
Kate’s book is available today from Annie Bloom’s Books in SW Portland, Oregon. It’s also available from Powell’s Books, and if you don’t have the good fortune of living in Portland, you can find it in both print and digital formats pretty much everywhere else books are sold.
As for Robovie, the claustrophobic robot that kept getting shoved in a closet, we recently checked in with Peter Kahn, the researcher who created the experiment nearly a decade ago, to make sure that the poor robot ended up okay. “Robovie is doing well,” Kahn told us. “He visited my lab on 2-3 other occasions and participated in other experiments. Now he’s back in Japan with the person who helped make him, and who cares a lot about him.” That person is Takayuki Kanda at ATR, who we’re happy to report is still working with Robovie in the context of human-robot interaction. Thanks Robovie! Continue reading
#439042 How Scientists Used Ultrasound to Read ...
Thanks to neural implants, mind reading is no longer science fiction.
As I’m writing this sentence, a tiny chip with arrays of electrodes could sit on my brain, listening in on the crackling of my neurons firing as my hands dance across the keyboard. Sophisticated algorithms could then decode these electrical signals in real time. My brain’s inner language to plan and move my fingers could then be used to guide a robotic hand to do the same. Mind-to-machine control, voilà!
Yet as the name implies, even the most advanced neural implant has a problem: it’s an implant. For electrodes to reliably read the brain’s electrical chatter, they need to pierce through its protective membrane and into brain tissue. Danger of infection aside, damage accumulates around the electrodes over time, distorting their signals or even rendering them unusable.
Now, researchers from Caltech have paved a way to read the brain without any physical contact. Key to their device is a relatively new superstar in neuroscience: functional ultrasound, which uses sound waves to capture activity in the brain.
In monkeys, the technology could reliably predict their eye movement and hand gestures after just a single trial—without the usual lengthy training process needed to decode a movement. If it translates to humans, the new mind-reading tech represents a triple triumph: it requires minimal surgery and minimal learning, but yields maximal resolution for brain decoding. For people who are paralyzed, it could be a paradigm shift in how they control their prosthetics.
“We pushed the limits of ultrasound neuroimaging and were thrilled that it could predict movement,” said study author Dr. Sumner Norman.
To Dr. Krishna Shenoy at Stanford, who was not involved in the work, the study will finally put ultrasound “on the map as a brain-machine interface technique. Adding to this toolkit is spectacular,” he said.
Breaking the Sound Barrier
Using sound to decode brain activity might seem preposterous, but ultrasound has had quite the run in medicine. You’ve probably heard of its most common use: imaging a fetus during pregnancy. The technique uses a transducer, which emits ultrasound pulses into the body and finds boundaries in tissue structure by analyzing the sound waves that bounce back.
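To make the pulse-echo principle concrete, here is a minimal sketch (not from the study; it assumes the standard soft-tissue speed of sound, about 1,540 meters per second) of how an echo's delay maps to the depth of the boundary that produced it:

```python
# Pulse-echo ranging: a transducer fires a pulse, then times the echo.
# The boundary depth is half the round-trip distance traveled by the sound.

SPEED_OF_SOUND_TISSUE_M_S = 1540.0  # standard approximation for soft tissue

def echo_depth_mm(echo_delay_us: float) -> float:
    """Convert an echo delay in microseconds to boundary depth in millimeters."""
    round_trip_m = SPEED_OF_SOUND_TISSUE_M_S * (echo_delay_us * 1e-6)
    return round_trip_m / 2.0 * 1000.0  # halve for one-way depth, convert to mm

# An echo arriving 65 microseconds after the pulse implies a boundary
# roughly 50 mm deep.
print(f"{echo_depth_mm(65.0):.1f} mm")  # ~50.1 mm
```

A full ultrasound image is built from many such measurements swept across the tissue, with the strength of each echo encoding how distinct the boundary is.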
Roughly a decade ago, neuroscientists realized they could adapt the tech for brain scanning. Rather than directly measuring the brain’s electrical chatter, it looks at a proxy—blood flow. When certain brain regions or circuits are active, they demand much more energy, which is delivered by increased blood flow. In this way, functional ultrasound works similarly to functional MRI, but at a far higher resolution—roughly ten times higher, the authors said. Plus, people don’t have to lie perfectly still inside an expensive, claustrophobic magnet.
“A key question in this work was: If we have a technique like functional ultrasound that gives us high-resolution images of the brain’s blood flow dynamics in space and over time, is there enough information from that imaging to decode something useful about behavior?” said study author Dr. Mikhail Shapiro.
There are plenty of reasons for doubt. As the new kid on the block, functional ultrasound has some known drawbacks. A major one: it gives a far less direct signal than electrodes. Previous studies show that, with multiple measurements, it can provide a rough picture of brain activity. But is that enough detail to guide a robotic prosthesis?
One-Trial Wonder
The new study put functional ultrasound to the ultimate test: could it reliably detect movement intention in monkeys? Because their brains are the most similar to ours, rhesus macaque monkeys are often the critical step before a brain-machine interface technology is adapted for humans.
The team first inserted small ultrasound transducers into the skulls of two rhesus monkeys. While it sounds intense, the surgery doesn’t penetrate the brain or its protective membrane; it’s only on the skull. Compared to electrodes, this means the brain itself isn’t physically harmed.
The device is linked to a computer, which controls the direction of sound waves and captures signals from the brain. For this study, the team aimed the pulses at the posterior parietal cortex, a part of the “motor” aspect of the brain, which plans movement. If right now you’re thinking about scrolling down this page, that’s the brain region already activated, before your fingers actually perform the movement.
Then came the tests. The first looked at eye movements—something pretty necessary before planning actual body movements without tripping all over the place. Here, the monkeys learned to focus on a central dot on a computer screen. A second dot, either to the left or the right, then flashed. The monkeys’ task was to flick their gaze to the new dot. It’s something that seems easy for us, but requires sophisticated brain computation.
The second task was more straightforward. Rather than just moving their eyes to the second target dot, the monkeys learned to grab and manipulate a joystick to move a cursor to that target.
Using brain imaging to decode the mind and control movement. Image Credit: S. Norman, Caltech
As the monkeys learned, so did the device. Ultrasound data capturing brain activity was fed into a sophisticated machine learning algorithm to guess the monkeys’ intentions. Here’s the kicker: once trained, using data from just a single trial, the algorithm was able to correctly predict the monkeys’ actual eye movement—whether left or right—with roughly 78 percent accuracy. The accuracy for correctly maneuvering the joystick was even higher, at nearly 90 percent.
That’s crazy accurate, and very much needed for a mind-controlled prosthetic. If you’re using a mind-controlled cursor or limb, the last thing you’d want is to have to imagine the movement multiple times before you actually click the web button, grab the door handle, or move your robotic leg.
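The article doesn't name the decoding algorithm, so here is a purely illustrative sketch: a common recipe for this kind of single-trial, two-class decoding is to flatten each ultrasound clip into a feature vector, compress it, and fit a linear classifier. Everything below (the array shapes, the PCA-plus-LDA pipeline, the variable names) is an assumption for illustration, not the study's actual method:

```python
# Hypothetical single-trial decoder for two movement directions (left vs. right).
# Shapes are illustrative: each trial is a functional-ultrasound clip flattened
# into one feature vector (pixels x frames).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_features = 200, 5000
X = rng.normal(size=(n_trials, n_features))  # stand-in for real fUS trial data
y = rng.integers(0, 2, size=n_trials)        # 0 = left, 1 = right

decoder = make_pipeline(
    StandardScaler(),              # normalize each feature across trials
    PCA(n_components=20),          # compress thousands of correlated pixels
    LinearDiscriminantAnalysis(),  # draw a linear left/right boundary
)
decoder.fit(X[:150], y[:150])

# Once trained, a single held-out trial is enough to produce a prediction.
print(decoder.predict(X[150:151]))  # e.g., [0] -> "left"
```

With the random stand-in data above, accuracy would hover at chance; the point of the sketch is the plumbing (train once, then classify each new trial from a single measurement), not the numbers.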
Even more impressive is the resolution. Sound waves might seem like a blunt instrument, but with focused ultrasound, it’s possible to measure brain activity at a resolution of 100 microns—roughly the span of 10 neurons in the brain.
A Cyborg Future?
Before you start worrying about scientists blasting your brain with sound waves to hack your mind, rest assured: the new tech still requires skull surgery, meaning that a small chunk of skull needs to be removed. The brain itself, however, is spared. This means that, compared to electrodes, ultrasound could cause less damage and offer a potentially far longer-lasting mind-reading window than anything currently possible.
There are downsides. Functional ultrasound is far younger than electrode-based neural implants, and can’t yet reliably decode 360-degree movement or fine finger movements. For now, the tech also requires a wire linking the device to a computer, an obvious barrier to widespread adoption. Add to that an inherent limitation: because it tracks blood flow rather than electrical activity, functional ultrasound lags behind electrical recordings by roughly two seconds.
All that aside, however, the tech is just tiptoeing into a future where minds and machines seamlessly connect. Ultrasound can penetrate the skull, though not yet at the resolution needed for imaging and decoding brain activity. The team is already working with human volunteers with traumatic brain injuries, who had to have a piece of their skulls removed, to see how well ultrasound works for reading their minds.
“What’s most exciting is that functional ultrasound is a young technique with huge potential. This is just our first step in bringing high performance, less invasive brain-machine interface to more people,” said Norman.
Image Credit: Free-Photos / Pixabay Continue reading