Category Archives: Human Robots

Everything about Humanoid Robots and Androids

#439849 Boots Full of Nickels Help Mini Cheetah ...

As quadrupedal robots learn to do more and more dynamic tasks, they're likely to spend more and more time not on their feet. Not falling over, necessarily (although that's inevitable of course, because they're legged robots after all)—but just being in flight in one way or another. The most risky of flight phases would be a fall from a substantial height, because it's almost certain to break your very expensive robot and any payload it might have.
Falls being bad is not a problem unique to robots, and it's not surprising that quadrupeds in nature have already solved it. Or at least, it's already been solved by cats, which are able to reliably land on their feet to mitigate fall damage. To teach quadrupedal robots this trick, roboticists from the University of Notre Dame have been teaching a Mini Cheetah quadruped some mid-air self-righting skills, with the aid of boots full of nickels.

If this research looks a little bit familiar, it's because we recently covered some work from ETH Zurich that looked at using legs to reorient their SpaceBok quadruped in microgravity. This work with Mini Cheetah has to contend with Earth gravity, however, which puts some fairly severe time constraints on the whole reorientation thing, with the penalty for failure being a smashed-up robot rather than just a weird bounce. When we asked the ETH Zurich researchers what might improve the performance of SpaceBok, they told us that “heavy shoes would definitely help,” and it looks like the folks from Notre Dame had the same idea, which they were able to implement on Mini Cheetah.

Mini Cheetah's legs (like the legs of many robots) were specifically designed to be lightweight, because they have to move quickly, and you want to minimize the mass that moves back and forth with every step to make the robot as efficient as possible. But for a robot to reorient itself in midair, it's got to start swinging as much mass around as it can. Each of Mini Cheetah's legs has been modified with a 3D-printed boot packed with two rolls of American nickels, adding about 500g to each foot—enough to move the robot around the way it needs to. The nickel boots matter because the only way Mini Cheetah can change its orientation while falling is by flailing its legs around: when its legs move one way, its body will move the other way, and the heavier the legs are, the more force they can exert on the body.
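The nickel boots exploit conservation of angular momentum. As a rough planar sketch of the physics (our back-of-the-envelope model, not an equation from the paper): with no external torque during free fall, whatever angular momentum the legs pick up has to be balanced by the body rotating the other way.

```latex
% Planar toy model of free-fall reorientation (illustrative, not from the paper)
I_{\mathrm{body}}\,\omega_{\mathrm{body}} + I_{\mathrm{legs}}\,\omega_{\mathrm{legs}} = L \approx 0
\quad\Longrightarrow\quad
\omega_{\mathrm{body}} = -\,\frac{I_{\mathrm{legs}}}{I_{\mathrm{body}}}\,\omega_{\mathrm{legs}}
```

Packing rolls of nickels into the boots increases the legs' moment of inertia, so the same leg swing buys a larger counter-rotation of the body before the robot hits the ground.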
As with everything in robotics, getting the hardware to do what you want it to do is only half the battle. Or sometimes much, much less than half the battle. The challenge with Mini Cheetah flipping itself over is that it has a very, very small amount of time to figure out how to do it properly. It has to detect that it's falling, figure out what orientation it's in, make a plan for getting itself feet-down, and then execute that plan successfully. The robot doesn't have enough time to put a whole heck of a lot of thought into things as it starts to plummet, so the researchers came up with what they call a “reflex” approach. Vince Kurtz, first author on the paper describing this technique, explains how it works:
While trajectory optimization algorithms keep getting better and better, they still aren't quite fast enough to find a solution from scratch in the fraction of a second between when the robot detects a fall and when it needs to start a recovery motion. We got around this by dropping the robot a bunch of times in simulation, where we can take as much time as we need to find a solution, and training a neural network to imitate the trajectory optimizer. The trained neural network maps initial orientations to trajectories that land the robot on its feet. We call this the “reflex” approach, since the neural network has basically learned an automatic response that can be executed when the robot detects that it's falling.
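The paper's actual pipeline isn't reproduced here, but the imitate-the-optimizer idea Kurtz describes can be sketched quickly. In this toy Python version, every name, dimension, and the stand-in "optimizer" are our own assumptions for illustration: a slow solver generates recovery trajectories offline for many simulated fall orientations, and a small network learns to map orientation to trajectory so that, online, a single forward pass is all the robot needs.

```python
# Toy sketch of the "reflex" idea: run a (slow) trajectory optimizer offline
# on many simulated falls, then train a small network to imitate it so the
# robot can look up a recovery trajectory instantly. The "optimizer" below
# is a stand-in, not the one used in the paper.
import numpy as np
import torch
import torch.nn as nn

N_JOINTS, HORIZON = 8, 50          # assumed sizes, for illustration only

def trajectory_optimizer(orientation):
    """Stand-in for an offline trajectory optimizer: given the body's initial
    orientation (a roll angle here), return a joint trajectory that would land
    the robot feet-down. Replace with a real solver."""
    t = np.linspace(0.0, 1.0, HORIZON)
    return np.outer(np.sin(np.pi * t), orientation * np.ones(N_JOINTS))

# 1) Offline: build a dataset of (initial orientation -> recovery trajectory).
orientations = np.random.uniform(-np.pi, np.pi, size=(2000, 1))
trajectories = np.stack([trajectory_optimizer(o[0]) for o in orientations])

X = torch.tensor(orientations, dtype=torch.float32)
Y = torch.tensor(trajectories.reshape(len(orientations), -1), dtype=torch.float32)

# 2) Train a small MLP to imitate the optimizer (behavior cloning).
reflex = nn.Sequential(
    nn.Linear(1, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_JOINTS * HORIZON),
)
opt = torch.optim.Adam(reflex.parameters(), lr=1e-3)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(reflex(X), Y)
    loss.backward()
    opt.step()

# 3) Online: when a fall is detected, one forward pass yields the recovery plan.
fall_orientation = torch.tensor([[1.2]])   # e.g. roughly 69 degrees of roll
recovery = reflex(fall_orientation).reshape(HORIZON, N_JOINTS).detach().numpy()
```

The appeal of the approach is that all of the expensive optimization happens before deployment; at fall time the "reflex" is just one network evaluation.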
This technique works quite well, but there are a few constraints, most of which wouldn't seem so bad if we weren't comparing quadrupedal robots to quadrupedal animals. Cats are just, like, super competent at what they do, says Kurtz, and being able to mimic their ability to rapidly twist themselves into a favorable landing configuration from any starting orientation is just going to be really hard for a robot to pull off:

The more I do robotics research the more I appreciate how amazing nature is, and this project is a great example of that. Cats can do a full 180° rotation when dropped from about shoulder height. Our robot ran up against torque limits when rotating 90° from about 10ft off the ground. Using the full 3D motion would be a big improvement (rotating sideways should be easier because the robot's moment of inertia is smaller in that direction), though I'd be surprised if that alone got us to cat-level performance.
The biggest challenge that I see in going from 2D to 3D is self-collisions. Keeping the robot from hitting itself seems like it should be simple, but self-collisions turn out to impose rather nasty non-convex constraints that make it numerically difficult (though not impossible) for trajectory optimization algorithms to find high-quality solutions.

Lastly, we asked Kurtz to talk a bit about whether it's worth exploring flexible actuated spines for quadrupedal robots. We know that such spines offer many advantages (a distant relative of Mini Cheetah had one, for example), but that they're also quite complex. So is it worth it?
This is an interesting question. Certainly in the case of the falling cat problem a flexible spine would help, both in terms of having a naturally flexible mass distribution and in terms of controller design, since we might be able to directly imitate the “bend-and-twist” motion of cats. Similarly, a flexible spine might help for tasks with large flight phases, like the jumping in space problems discussed in the ETH paper.
With that being said, mid-air reorientation is not the primary task of most quadruped robots, and it's not obvious to me that a flexible spine would help much for walking, running, or scrambling over uneven terrain. Also, existing hardware platforms with rigid backs like the Mini Cheetah are quite capable, and I think we still haven't unlocked the full potential of these robots. Control algorithms are still the primary limiting factor for today's legged robots, and adding a flexible spine would probably make for even more difficult control problems.

Mini Cheetah, the Falling Cat: A Case Study in Machine Learning and Trajectory Optimization for Robot Acrobatics, by Vince Kurtz, He Li, Patrick M. Wensing, and Hai Lin from the University of Notre Dame, is available on arXiv. Continue reading

Posted in Human Robots

#439847 Tiny hand-shaped gripper can grasp and ...

A team of researchers affiliated with a host of institutions in the Republic of Korea has developed a tiny, soft robotic hand that can grasp small objects and measure their temperature. They have published their results in the journal Science Robotics. Continue reading

Posted in Human Robots

#439842 AI-Powered Brain Implant Eases Severe ...

Sarah hadn’t laughed in five years.

At 36 years old, the avid home cook has struggled with depression since early childhood. She tried the whole range of antidepressant medications and therapy for decades. Nothing worked. One night, five years ago, driving home from work, she had one thought in her mind: this is it. I’m done.

Luckily she made it home safe. And soon she was offered an intriguing new possibility to tackle her symptoms—a little chip, implanted into her brain, that captures the unique neural signals encoding her depression. Once the implant detects those signals, it zaps them away with a brief electrical jolt, like adding noise to an enemy’s digital transmissions to scramble their original message. When that message triggers depression, hijacking neural communications is exactly what we want to do.

Flash forward several years, and Sarah has her depression under control for the first time in her life. Her suicidal thoughts evaporated. After quitting her tech job due to her condition, she’s now back on her feet, enrolled in data analytics classes and taking care of her elderly mother. “For the first time,” she said, “I’m finally laughing.”

Sarah’s recovery is just one case. But it signifies a new era for the technology underlying her stunning improvement. It’s one of the first cases in which a personalized “brain pacemaker” can stealthily tap into, decipher, and alter a person’s mood and introspection based on their own unique electrical brain signatures. And while those implants have achieved stunning medical miracles in other areas—such as allowing people with paralysis to walk again—Sarah’s recovery is some of the strongest evidence yet that a computer chip, in a brain, powered by AI, can fundamentally alter our perception of life. It’s the closest to reading and repairing a troubled mind that we’ve ever gotten.

“We haven’t been able to do this kind of personalized therapy previously in psychiatry,” said study lead Dr. Katherine Scangos at UCSF. “This success in itself is an incredible advancement in our knowledge of the brain function that underlies mental illness.”

Brain Pacemaker
The key to Sarah’s recovery is a brain-machine interface.

Roughly the size of a matchbox, the implant sits inside the brain, silently listening to and decoding its electrical signals. Using those signals, it’s possible to control other parts of the brain or body. Brain implants have given people with lower body paralysis the ability to walk again. They’ve allowed amputees to control robotic hands with just a thought. They’ve opened up a world of sensations, integrating feedback from cyborg-like artificial limbs that transmit signals directly into the brain.

But Sarah’s implant is different.

Sensation and movement are generally controlled by relatively well-defined circuits in the outermost layer of the brain: the cortex. Emotion and mood are also products of our brain’s electrical signals, but they tend to stem from deeper neural networks hidden at the center of the brain. One way to tap into those circuits is called deep brain stimulation (DBS), a method pioneered in the ’80s that’s been used to treat severe Parkinson’s disease and epilepsy, particularly for cases that don’t usually respond to medication.

Sarah’s neural implant takes this route: it listens in on the chatter between neurons deep within the brain to decode mood.

But where is mood in the brain? One particular problem, the authors explained, is that unlike movement, there is no “depression brain region.” Rather, emotions are regulated by intricate, intertwining networks across multiple brain regions. Adding to that complexity is the fact that we're all neural snowflakes—each of us has uniquely personalized brain network connections.

In other words, zapping my circuit to reduce depression might not work for you. DBS, for example, has previously been studied for treating depression. But despite decades of research, it's not federally approved, due to inconsistent results. The culprit? The electrical stimulation patterns used in those trials were constant and engineered to be one-size-fits-all. Have you ever tried buying socks or PJs at a department store, seen the tag that says “one size,” and found they don't fit? Yeah. DBS has brought about remarkable improvements for some people with depression—ill-fitting socks are better than none in a pinch. But with increasingly sophisticated neuroengineering methods, we can do better.

The solution? Let’s make altering your brain more personal.

Unconscious Reprieve
That’s the route Sarah’s psychologist and UCSF neurosurgeon Dr. Edward Chang and colleagues took in the new study.

The first step in detecting depression-related activity in the brain was to be able to listen in. The team implanted 10 electrodes in Sarah’s brain, targeting multiple regions encoding emotion-related circuits. They then recorded electrical signals from these regions over the course of 10 days, while Sarah journaled about how she felt each day—happy or low. In the background, the team peeked into her brain activity patterns, a symphony of electrical signals in multiple frequencies, like overlapping waves on the ocean.

One particular brain wave emerged. It stemmed from the amygdala, a region normally involved in fear, lust, and other powerful emotions. Software-based mapping pinpointed the node as a powerful guide to Sarah’s mental state.

In contrast, another area tucked deep inside the brain, the ventral capsule/ventral striatum (VC/VS), emerged as a place to stimulate with little bouts of electricity to disrupt patterns leading to feelings of depression.

The team next implanted an FDA-approved neural pacemaker into the right brain lobe, with two sensing leads to capture activity from the amygdala and two stimulating wires to zap the VC/VS. The implant was previously used in epilepsy treatments and continuously senses neural activity. It’s both off-the-shelf and programmable, in that the authors could instruct it to detect “pre-specified patterns of activation” related to Sarah’s depressive episodes, and deliver short bursts of electrical stimulation only then. Just randomly stimulating the amygdala could “actually cause more stress and more depression symptoms,” said Dr. Chang in a press conference.
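The study doesn't publish its detection algorithm, but the closed-loop logic described above (sense the amygdala, check for a pre-specified pattern, stimulate the VC/VS only then) can be loosely illustrated. In this toy Python sketch the sampling rate, frequency band, and threshold are all invented for illustration and are not the device's actual parameters.

```python
# Toy illustration of closed-loop "detect, then stimulate" logic.
# All signals, bands, and thresholds here are made up for illustration;
# this is not the device's actual algorithm.
import numpy as np

FS = 250                      # assumed sampling rate, Hz
BAND = (13.0, 30.0)           # assumed biomarker band, Hz
THRESHOLD = 2.0               # assumed detection threshold (arbitrary units)

def band_power(window, fs, band):
    """Mean spectral power of `window` within `band`, estimated with an FFT."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    power = np.abs(np.fft.rfft(window)) ** 2 / len(window)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return power[mask].mean()

def closed_loop_step(sensed_window):
    """One control step: stimulate only when the biomarker pattern is detected."""
    if band_power(sensed_window, FS, BAND) > THRESHOLD:
        return "stimulate: brief pulse to VC/VS"
    return "no stimulation"

# Example: a quiet window versus one containing a strong in-band oscillation.
t = np.arange(FS) / FS
quiet = 0.1 * np.random.randn(FS)
episode = quiet + np.sin(2 * np.pi * 20 * t)      # 20 Hz burst
print(closed_loop_step(quiet), "|", closed_loop_step(episode))
```

The point of gating stimulation on detection, rather than stimulating constantly, is exactly what Chang describes: random stimulation risks making symptoms worse, so the device acts only when the pre-specified pattern appears.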

Brain surgery wasn’t easy. But to Sarah, drilling several holes into her brain was less difficult than the emotional pain of her depression. Every day during the trial, she waved a figure-eight-shaped wand over her head, which wirelessly captured 90 seconds of her brain’s electrical activity, while she reported on her mental health.

When the stimulator turned on (even when she wasn’t aware it was on), “a joyous feeling just washed over me,” she said.

A New Neurological Future
For now, the results are just for one person. But if repeated—and Sarah could be a unique case—they suggest we’re finally at the point where we can tap into each unique person’s emotional mindset and fundamentally alter their perception of life.

And with that comes intense responsibility. Sarah’s neural “imprint” of her depression is tailored to her. It might be completely different for someone else. It’s something for future studies to dig into. But what’s clear is that it’s possible to regulate a person’s emotions with an AI-powered brain implant. And if other neurological disorders can be decoded in a similar way, we could use brain pacemakers to treat some of our toughest mental foes.

“God, the color differentiation is gorgeous,” said Sarah as her implant turned on. “I feel alert. I feel present.”

Image Credit: Sarah in her community garden, photo by John Lok/UCSF 2021 Continue reading

Posted in Human Robots

#439836 Video Friday: Dusty at Work

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ROSCon 2021 – October 20-21, 2021 – [Online Event]
Silicon Valley Robot Block Party – October 23, 2021 – Oakland, CA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.
I love watching Dusty Robotics' field printer at work. I don't know whether it's intentional or not, but it's got so much personality somehow.

[ Dusty Robotics ]
A busy commuter is ready to walk out the door, only to realize they've misplaced their keys and must search through piles of stuff to find them. Rapidly sifting through clutter, they wish they could figure out which pile was hiding the keys. Researchers at MIT have created a robotic system that can do just that. The system, RFusion, is a robotic arm with a camera and radio frequency (RF) antenna attached to its gripper. It fuses signals from the antenna with visual input from the camera to locate and retrieve an item, even if the item is buried under a pile and completely out of view.
While finding lost keys is helpful, RFusion could have many broader applications in the future, like sorting through piles to fulfill orders in a warehouse, identifying and installing components in an auto manufacturing plant, or helping an elderly individual perform daily tasks in the home, though the current prototype isn't quite fast enough yet for these uses.

[ MIT ]
CSIRO Data61 had, I'm pretty sure, the most massive robots in the entire SubT competition. And this is how you solve doors with a massive robot.

[ CSIRO ]
You know how robots are supposed to be doing things that are too dangerous for humans? I think sailing through a hurricane qualifies.

This second video, also captured by this poor Saildrone, is, if anything, even worse:

[ Saildrone ] via [ NOAA ]
Soft Robotics can handle my taquitos anytime.

[ Soft Robotics ]
This is brilliant, if likely unaffordable for most people.

[ Eric Paulos ]
I do not understand this robot at all, nor can I tell whether it's friendly or potentially dangerous or both.

[ Keunwook Kim ]
This sort of thing really shouldn't have to exist for social home robots, but I'm glad it does, I guess?

It costs $100, though.
[ Digital Dream Labs ]
If you watch this video closely, you'll see that whenever a simulated ANYmal falls over, it vanishes from existence. This is a new technique for teaching robots to walk by threatening them with extinction if they fail.

But seriously how do I get this as a screensaver?
[ RSL ]
Zimbabwe Flying Labs' Tawanda Chihambakwe shares how Zimbabwe Flying Labs got their start, using drones for STEM programs, and how drones impact conservation and agriculture.
[ Zimbabwe Flying Labs ]
DARPA thoughtfully provides a video tour of the location of every artifact on the SubT Final prize course. Some of them are hidden extraordinarily well.

Also posted by DARPA this week are full prize round run videos for every team; here are the top three: MARBLE, CSIRO Data61, and CERBERUS.

[ DARPA SubT ]
An ICRA 2021 plenary talk from Fumihito Arai at the University of Tokyo, on “Robotics and Automation in Micro & Nano-Scales.”
[ ICRA 2021 ]
This week's UPenn GRASP Lab Seminar comes from Rahul Mangharam, on “What can we learn from Autonomous Racing?”

[ UPenn ] Continue reading

Posted in Human Robots

#439832 This Week’s Awesome Tech Stories From ...

NEUROSCIENCE
How the World’s Biggest Brain Maps Could Transform Neuroscience
Alison Abbott | Nature
“To truly understand how the brain works, neuroscientists also need to know how each of the roughly 1,000 types of cell thought to exist in the brain speak to each other in their different electrical dialects. With that kind of complete, finely contoured map, they could really begin to explain the networks that drive how we think and behave.”

GENE THERAPY
A Gene-Editing Experiment Let These Patients With Vision Loss See Color Again
Rob Stein | NPR
“Carlene Knight’s vision was so bad that she couldn’t even maneuver around the call center where she works using her cane. …But that’s changed as a result of volunteering for a landmark medical experiment. …Knight is one of seven patients with a rare eye disease who volunteered to let doctors modify their DNA by injecting the revolutionary gene-editing tool CRISPR directly into cells that are still in their bodies.”

INTERFACES
Light Field Lab Shows off Solidlight High-Res Holographic Display
Dean Takahashi | VentureBeat
“…the company [says] it is the highest-resolution holographic display ever designed. And yes, the little chameleon that I saw floating in the air looked a lot better than the pseudo-hologram of Princess Leia in the original Star Wars movie. While it’s not hard to beat the vision of holograms in a movie from 1977, it has taken an extraordinarily long time to create real holograms that look good.”

TRANSPORTATION
Airless Tires Are Finally Coming in 2024: Here’s Why You’ll Want a Set
Brian Cooley | CNET
“Nails become minor annoyances and sidewall cuts that usually render a tire unrepairable are no longer possible. There would be no need to check tire inflation (you’ve probably ignored my admonitions to do that anyway) and we’d say goodbye to spare tires, jacks and inflation kits that most drivers view as mysterious objects anyway. Blowouts that cause thousands of crashes a year would be impossible.”

FUTURE
These 5 Recent Advances Are Changing Everything We Thought We Knew About Electronics
Ethan Siegel | Big Think
“As we race to miniaturize electronics, to monitor more and more aspects of our lives and our reality, to transmit greater amounts of data with smaller amounts of power, and to interconnect our devices to one another, we quickly run into the limits of these classical technologies. But five advances are all coming together in the early 21st century, and they’re already beginning to transform our modern world. Here’s how it’s all going down.”

TECH
The Facebook Whistleblower Says Its Algorithms Are Dangerous. Here’s Why.
Karen Hao | MIT Technology Review
“Frances Haugen’s testimony at the Senate hearing today raised serious questions about how Facebook’s algorithms work—and echoes many findings from our previous investigation. …We pulled together the most relevant parts of our investigation and other reporting to give more context to Haugen’s testimony.”

COMPUTING
D-Wave Plans to Build a Gate-Model Quantum Computer
Frederic Lardinois | TechCrunch
“For more than 20 years, D-Wave has been synonymous with quantum annealing. …But as the company announced at its Qubits conference today, a superconducting gate-model quantum computer—of the kind IBM and others currently offer—is now also on its roadmap. D-Wave believes the combination of annealing, gate-model quantum computing and classic machines is what its businesses’ users will need to get the most value from this technology.”

ENERGY
The Decreasing Cost of Renewables Unlikely to Plateau Any Time Soon
Doug Johnson | Ars Technica
“Past projections of energy costs have consistently underestimated just how cheap renewable energy would be in the future, as well as the benefits of rolling them out quickly, according to a new [University of Oxford] report. …if solar, wind, and the myriad other green energy tools followed the deployment trends they are projected to see in the next decade, in 25 years the world could potentially see a net-zero energy system.”

ARTIFICIAL INTELLIGENCE
The Turbulent Past and Uncertain Future of Artificial Intelligence
Eliza Strickland | IEEE Spectrum
“Today, even as AI is revolutionizing industries and threatening to upend the global labor market, many experts are wondering if today’s AI is reaching its limits. …Yet there’s little sense of doom among researchers. Yes, it’s possible that we’re in for yet another AI winter in the not-so-distant future. But this might just be the time when inspired engineers finally usher us into an eternal summer of the machine mind.”

INTERNET
Facebook and Google’s New Plan? Own the Internet
James Ball | Wired UK
“The name ‘cloud’ is a linguistic trick—a way of hiding who controls the underlying technology of the internet—and the huge power they wield. Stop to think about it for a moment and the whole notion is bizarre. The cloud is, in fact, a network of cables and servers that cover the world: once the preserve of obscure telecoms firms, it is now, increasingly, owned and controlled by Big Tech—with Google and Facebook claiming a lion’s share.”

SPACE
The Moon Didn’t Die as Soon as We Thought
Tatyana Woodall | MIT Technology Review
“The moon may have been more volcanically active than we realized. Lunar samples that China’s Chang’e 5 spacecraft brought to Earth are revealing new clues about volcanoes and lava plains on the moon’s surface. In a study published [Thursday] in Science, researchers describe the youngest lava samples ever collected on the moon.”

Image Credit: 光曦 刘 / Unsplash Continue reading

Posted in Human Robots