Tag Archives: engineering

#435616 Video Friday: AlienGo Quadruped Robot ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

CLAWAR 2019 – August 26-28, 2019 – Kuala Lumpur, Malaysia
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

I know you’ve all been closely following our DARPA Subterranean Challenge coverage here and on Twitter, but here are short recap videos of each day just in case you missed something.

[ DARPA SubT ]

After Laikago, Unitree Robotics is now introducing AlienGo, which is looking mighty spry:

We saw MIT’s Mini Cheetah do backflips earlier this year, but apparently AlienGo is now the largest and heaviest quadruped to perform the maneuver.

[ Unitree ]

The majority of soft robots today rely on external power and control, keeping them tethered to off-board systems or rigged with hard components. Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Caltech have developed soft robotic systems, inspired by origami, that can move and change shape in response to external stimuli, paving the way for fully untethered soft robots.

The Rollbot begins as a flat sheet, about 8 centimeters long and 4 centimeters wide. When placed on a hot surface, about 200°C, one set of hinges folds and the robot curls into a pentagonal wheel.

Another set of hinges is embedded on each of the five sides of the wheel. A hinge folds when in contact with the hot surface, propelling the wheel to turn to the next side, where the next hinge folds. As they roll off the hot surface, the hinges unfold and are ready for the next cycle.

[ Harvard SEAS ]

A new research effort at Caltech aims to help people walk again by combining exoskeletons with spinal stimulation. This initiative, dubbed RoAM (Robotic Assisted Mobility), combines the research of two Caltech roboticists: Aaron Ames, who creates the algorithms that enable walking by bipedal robots and translates these to govern the motion of exoskeletons and prostheses; and Joel Burdick, whose transcutaneous spinal implants have already helped paraplegics in clinical trials to recover some leg function and, crucially, torso control.

[ Caltech ]

Once ExoMars lands, it’s going to have to get itself off of the descent stage and onto the surface, which could be tricky. But practice makes perfect, or as near as you can get on Earth.

That wheel walking technique is pretty cool, and it looks like ExoMars will be able to handle terrain that would scare NASA’s Mars rovers away.

[ ExoMars ]

I am honestly not sure whether this would make the game of golf more or less fun to watch:

[ Nissan ]

Finally, a really exciting use case for Misty!

It can pick up those balls too, right?

[ Misty ]

You know you’re an actual robot if this video doesn’t make you crave Peeps.

[ Soft Robotics ]

COMANOID investigates the deployment of robotic solutions in well-identified Airbus airliner assembly operations that are tedious for human workers and for which access is impossible for wheeled or rail-ported robotic platforms. This video presents a demonstration of autonomous placement of a part inside the aircraft fuselage. The task is performed by TORO, the torque-controlled humanoid robot developed at DLR.

[ COMANOID ]

It’s a little hard to see in this video, but this is a cable-suspended robot arm that has little tiny robot arms that it waves around to help damp down vibrations.

[ CoGiRo ]

This week in Robots in Depth, Per speaks with author Cristina Andersson.

In 2013 she organized events in Finland during European Robotics Week and found that many people were very interested, but that there was also a significant lack of knowledge.

She also talks about introducing robotics into society in a way that makes it easy for everyone to understand the benefits, as this will make the process much easier. When people see clear benefits in one field or situation, they will be much more interested in bringing robotics into their private or professional lives.

[ Robots in Depth ]

Posted in Human Robots

#435591 Video Friday: This Robotic Thread Could ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

Eight engineering students from ETH Zurich are working on a year-long focus project to develop a multimodal robot called Dipper, which can fly, swim, dive underwater, and manage that difficult air-water transition:

The robot uses one motor to selectively drive either a propeller or a marine screw depending on whether it’s in flight or not. We’re told that getting the robot to autonomously do the water to air transition is still a work in progress, but that within a few weeks things should be much smoother.

[ Dipper ]

Thanks Simon!

Giving a jellyfish a hug without stressing them out is exactly as hard as you think, but Harvard’s robot will make sure that all jellyfish get the emotional (and physical) support that they need.

The gripper’s six “fingers” are composed of thin, flat strips of silicone with a hollow channel inside bonded to a layer of flexible but stiffer polymer nanofibers. The fingers are attached to a rectangular, 3D-printed plastic “palm” and, when their channels are filled with water, curl in the direction of the nanofiber-coated side. Each finger exerts an extremely low amount of pressure — about 0.0455 kPa, or less than one-tenth of the pressure of a human’s eyelid on their eye. By contrast, current state-of-the-art soft marine grippers, which are used to capture delicate but more robust animals than jellyfish, exert about 1 kPa.

The gripper was successfully able to trap each jellyfish against the palm of the device, and the jellyfish were unable to break free from the fingers’ grasp until the gripper was depressurized. The jellyfish showed no signs of stress or other adverse effects after being released, and the fingers were able to open and close roughly 100 times before showing signs of wear and tear.

[ Harvard ]

MIT engineers have developed a magnetically steerable, thread-like robot that can actively glide through narrow, winding pathways, such as the labyrinthine vasculature of the brain. In the future, this robotic thread may be paired with existing endovascular technologies, enabling doctors to remotely guide the robot through a patient’s brain vessels to quickly treat blockages and lesions, such as those that occur in aneurysms and stroke.

[ MIT ]

See NASA’s next Mars rover quite literally coming together inside a clean room at the Jet Propulsion Laboratory. This behind-the-scenes look at what goes into building and preparing a rover for Mars, including extensive tests in simulated space environments, was captured from March to July 2019. The rover is expected to launch to the Red Planet in summer 2020 and touch down in February 2021.

The Mars 2020 rover doesn’t have a name yet, but you can give it one! As long as you’re not too old! Which you probably are!

[ Mars 2020 ]

I desperately wish that we could watch this next video at normal speed, not just slowed down, but it’s quite impressive anyway.

Here’s one more video from the Namiki Lab showing some high speed tracking with a pair of very enthusiastic robotic cameras:

[ Namiki Lab ]

Normally, tedious modeling of mechanics, electronics, and information science is required to understand how insects’ or robots’ moving parts coordinate smoothly to take them places. But in a new study, biomechanics researchers at the Georgia Institute of Technology boiled down the sprints of cockroaches to handy principles and equations they then used to make a test robot amble about better.

[ Georgia Tech ]

More magical obstacle-dodging footage from Skydio’s still secret new drone.

We’ve been hard at work extending the capabilities of our upcoming drone, giving you ways to get the control you want without the stress of crashing. The result is you can fly in ways, and get shots, that would simply be impossible any other way. How about flying through obstacles at full speed, backwards?

[ Skydio ]

This is a cute demo with Misty:

[ Misty Robotics ]

We’ve seen pieces of hardware like this before, but always made out of hard materials—a soft version is certainly something new.

Utilizing vacuum power and soft material actuators, we have developed a soft reconfigurable surface (SRS) with multi-modal control and performance capabilities. The SRS is composed of a square grid array of linear vacuum-powered soft pneumatic actuators (linear V-SPAs), built into plug-and-play modules which enable the arrangement, consolidation, and control of many degrees of freedom (DoF).

[ RRL ]

The EksoVest is not really a robot, but it’ll make you a cyborg! With super strength!

“This is NOT intended to give you super strength but instead give you super endurance and reduce fatigue so that you have more energy and less soreness at the end of your shift.”

Drat!

[ EksoVest ]

We have created a solution for parents, grandparents, and their children who are living separated. This is an amazing tool to stay connected from a distance through the intimacy that comes through interactive play with a child. For parents who travel for work, deployed military, and families spread across the country, the Cushybot One is much more than a toy; it is the opportunity for maintaining a deep connection with your young child from a distance.

Hmm.

I think the concept here is great, but it’s going to be a serious challenge to successfully commercialize.

[ Indiegogo ]

What happens when you equip RVR with a parachute and send it off a cliff? Watch this episode of RVR Launchpad to find out – then go Behind the Build to see how we (eventually) accomplished this high-flying feat.

[ Sphero ]

These omnidirectional crawler robots aren’t new, but that doesn’t keep them from being fun to watch.

[ NEDO ] via [ Impress ]

We’ll finish up the week with a couple of past ICRA and IROS keynote talks—one by Gill Pratt on The Reliability Challenges of Autonomous Driving, and the other from Peter Hart, on Making Shakey.

[ IEEE RAS ]

Posted in Human Robots

#435528 The Time for AI Is Now. Here’s Why

You hear a lot these days about the sheer transformative power of AI.

There’s pure intelligence: DeepMind’s algorithms readily beat humans at Go and StarCraft, and DeepStack triumphs over humans at no-limit hold’em poker. Often, these silicon brains generate gameplay strategies that don’t resemble anything from a human mind.

There’s astonishing speed: algorithms routinely surpass radiologists in diagnosing breast cancer, eye disease, and other ailments visible from medical imaging, essentially collapsing decades of expert training down to a few months.

Although AI’s silent touch is mainly felt today in the technological, financial, and health sectors, its impact across industries is rapidly spreading. At the Singularity University Global Summit in San Francisco this week, Neil Jacobstein, Chair of AI and Robotics, painted a picture of a better AI-powered future for humanity that is already here.

Thanks to cloud-based cognitive platforms, sophisticated AI tools like deep learning are no longer relegated to academic labs. For startups looking to tackle humanity’s grand challenges, the tools to efficiently integrate AI into their missions are readily available. The progress of AI is massively accelerating—to the point you need help from AI to track its progress, joked Jacobstein.

Now is the time to consider how AI can impact your industry, and in the process, begin to envision a beneficial relationship with our machine coworkers. As Jacobstein stressed in his talk, the future of a brain-machine mindmeld is a collaborative intelligence that augments our own. “AI is reinventing the way we invent,” he said.

AI’s Rapid Revolution
Machine learning and other AI-based methods may seem academic and abstruse. But Jacobstein pointed out that there are already plenty of real-world AI application frameworks.

Their secret? Rather than coding from scratch, smaller companies—with big visions—are tapping into cloud-based solutions such as Google’s TensorFlow, Microsoft’s Azure, or Amazon’s AWS to kick off their AI journey. These platforms act as all-in-one solutions that not only clean and organize data, but also contain built-in security and drag-and-drop coding that allow anyone to experiment with complicated machine learning algorithms.
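
TensorFlow itself is a framework rather than a cloud service, but a minimal sketch gives a feel for how little code these tools leave you to write once the heavy lifting lives in a library or platform. The dataset and tiny model below are arbitrary illustrations, not anything demonstrated in Jacobstein’s talk:

```python
# Minimal sketch: a complete train/evaluate loop with TensorFlow's Keras API.
# The dataset (MNIST) and the small network are illustrative placeholders only.
import tensorflow as tf

# Built-in dataset download; no manual data wrangling required
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```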

Google Cloud’s Anthos, for example, lets anyone migrate data from other servers—IBM Watson or AWS, for example—so users can leverage different computing platforms and algorithms to transform data into insights and solutions.

Rather than coding from scratch, it’s already possible to hop onto a platform and play around with it, said Jacobstein. That’s key: this democratization of AI is how anyone can begin exploring solutions to problems we didn’t even know we had, or those long thought improbable.

The acceleration is only continuing. Much of AI’s mind-bending pace is thanks to a massive infusion of funding. Microsoft recently injected $1 billion into OpenAI, the Elon Musk venture that engineers socially responsible artificial general intelligence (AGI).

The other revolution is in hardware, and Google, IBM, and NVIDIA—among others—are racing to manufacture computing chips tailored to machine learning.

Democratizing AI is like the birth of the printing press. Mechanical printing allowed anyone to become an author; today, an iPhone lets anyone film a movie masterpiece.

However, this diffusion of AI into the fabric of our lives means tech explorers need to bring skepticism to their AI solutions, giving them a dose of empathy, nuance, and humanity.

A Path Towards Ethical AI
The democratization of AI is a double-edged sword: as more people wield the technology’s power in real-world applications, problems embedded in deep learning threaten to disrupt those very judgment calls.

Much of the press on the dangers of AI focuses on superintelligence—AI that’s more adept at learning than humans—taking over the world, said Jacobstein. But the near-term threat, which is far more insidious, is humans misusing the technology.

Deepfakes, for example, allow AI rookies to paste one person’s head on a different body or put words into a person’s mouth. As the panel said, it pays to think of AI as a cybersecurity problem: one with currently shaky accountability, considerable complexity, and persistent failures around diversity and bias.

Take bias. Thanks to progress in natural language processing, Google Translate works nearly perfectly today, so much so that many consider the translation problem solved. Not true, the panel said. One famous example is how the algorithm translates gender-neutral terms like “doctor” into “he” and “nurse” into “she.”

These biases reflect our own, and it’s not just a data problem. To truly engineer objective AI systems, ones stripped of our society’s biases, we need to ask who is developing these systems, and consult those who will be impacted by the products. In addition to gender, racial bias is also rampant. For example, one recent report found that a supposedly objective crime-predicting system was trained on falsified data, resulting in outputs that further perpetuate corrupt police practices. Another study from Google just this month found that their hate speech detector more often labeled innocuous tweets from African-Americans as “obscene” compared to tweets from people of other ethnicities.

We often think of building AI as purely an engineering job, the panelists agreed. But similar to gene drives, germ-line genome editing, and other transformative—but dangerous—tools, AI needs to grow under the consultation of policymakers and other stakeholders. It pays to start young: educating newer generations on AI biases will mold malleable minds early, alerting them to the problem of bias and potentially mitigating risks.

As panelist Tess Posner from AI4ALL said, AI is rocket fuel for ambition. If young minds set out using the tools of AI to tackle their chosen problems, while fully aware of its inherent weaknesses, we can begin to build an AI-embedded future that is widely accessible and inclusive.

The bottom line: people who will be impacted by AI need to be in the room at the conception of an AI solution. People will be displaced by the new technology, and ethical AI has to consider how to mitigate human suffering during the transition. Just because AI looks like “magic fairy dust doesn’t mean that you’re home free,” the panelists said. You, the sentient human, bear the burden of being responsible for how you decide to approach the technology.

The time for AI is now. Let’s make it ethical.

Image Credit: GrAI / Shutterstock.com

Posted in Human Robots

#435522 Harvard’s Smart Exo-Shorts Talk to the ...

Exosuits don’t generally scream “fashionable” or “svelte.” Take the mind-controlled robotic exoskeleton that allowed a paraplegic man to kick off the World Cup back in 2014. Is it cool? Hell yeah. Is it practical? Not so much.

Yapping about wearability might seem childish when the technology already helps people with impaired mobility move around dexterously. But the lesson of the ill-fated Google Glass, whose “Glasshole” wearers had to adopt an awkward, dorky head tilt and conspicuous voice commands, clearly shows that wearable computer assistants can’t just work technologically; they have to look natural and let the user behave as usual. They have to, in a sense, disappear.

To Dr. Jose Pons at the Legs + Walking Ability Lab in Chicago, exosuits need three main selling points to make it in the real world. One, they have to physically interact with their wearer and seamlessly deliver assistance when needed. Two, they should cognitively interact with the host to guide and control the robot at all times. Finally, they need to feel like a second skin—move with the user without adding too much extra mass or reducing mobility.

This week, a US-Korean collaboration delivered the whole shebang in a Lululemon-style skin-hugging package combined with a retro waist pack. The portable exosuit, weighing only 11 pounds, looks like a pair of spandex shorts but can support the wearer’s hip movement when needed. Unlike their predecessors, the shorts are embedded with sensors that let them know when the wearer is walking versus running by analyzing gait.

Switching between the two movement modes may not seem like much, but what naturally comes to our brains doesn’t translate directly to smart exosuits. “Walking and running have fundamentally different biomechanics, which makes developing devices that assist both gaits challenging,” the team said. Their algorithm, computed in the cloud, allows the wearer to easily switch between both, with the shorts providing appropriate hip support that makes the movement experience seamless.

To Pons, who was not involved in the research but wrote a perspective piece, the study is an exciting step towards future exosuits that will eventually disappear under the skin—that is, implanted neural interfaces to control robotic assistance or activate the user’s own muscles.

“It is realistic to think that we will witness, in the next several years…robust human-robot interfaces to command wearable robotics based on…the neural code of movement in humans,” he said.

A “Smart” Exosuit Hack
There are a few ways you can hack a human body to move with an exosuit. One is using implanted electrodes inside the brain or muscles to decipher movement intent. With heavy practice, a neural implant can help paralyzed people walk again or dexterously move external robotic arms. But because the technique requires surgery, it’s not an immediate sell for people who experience low mobility because of aging or low muscle tone.

The other approach is to look to biophysics. Rather than decoding neural signals that control movement, here the idea is to measure gait and other physical positions in space to decipher intent. As you can probably guess, accurately deciphering user intent isn’t easy, especially when the wearable tries to accommodate multiple gaits. But the gains are many: there’s no surgery involved, and the wearable is low in energy consumption.

Double Trouble
The authors decided to tackle an everyday situation. You’re walking to catch the train to work, realize you’re late, and immediately start sprinting.

That seemingly easy conversion hides a complex switch in biomechanics. When you walk, your legs act like an inverted pendulum that swings over a fixed pivot in a predictable arc. When you run, however, the legs move more like a spring-loaded system, and the joints involved in the motion differ from those used in a casual stroll. Engineering an assistive wearable for each gait is relatively simple; making one for both is exceedingly hard.
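
For the mechanically inclined, those two pictures correspond to standard textbook gait models rather than anything specific to the Harvard paper: walking is often idealized as a rigid inverted pendulum vaulting over the stance foot, and running as a spring-loaded inverted pendulum (SLIP) whose leg compresses and recoils during each stance phase. In their simplest planar forms:

```latex
% Walking: rigid inverted pendulum, leg length L, angle \theta from vertical
\ddot{\theta} = \frac{g}{L}\,\sin\theta

% Running: spring-loaded inverted pendulum (SLIP), leg stiffness k, rest length L_0,
% with r the distance from the foot contact point to the center of mass
m\,\ddot{r} = m\,r\,\dot{\theta}^{2} - m\,g\,\cos\theta + k\,(L_0 - r)
```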

Led by Dr. Conor Walsh at Harvard University, the team started with an intuitive idea: assisted walking and running requires specialized “actuation” profiles tailored to both. When the user is moving in a way that doesn’t require assistance, the wearable needs to be out of the way so that it doesn’t restrict mobility. A quick analysis found that assisting hip extension has the largest impact, because it’s important to both gaits and doesn’t add mass to the lower legs.

Building on that insight, the team made a waist belt connected to two thigh wraps, similar to a climbing harness. Two electrical motors embedded inside the device connect the waist belt to other components through a pulley system to help the hip joints move. The whole contraption weighed about 11 lbs and didn’t obstruct natural movement.

Next, the team programmed two separate supporting profiles for walking and running. The goal was to reduce the “metabolic cost” of both movements, so that the wearer expends as little energy as needed. To switch between the two programs, they used a cloud-based classification algorithm that measures changes in energy fluctuation to figure out which mode—running or walking—the user was in.
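
As a rough illustration of that mode-switching logic (the feature, window length, and threshold here are invented, not the team’s cloud classifier), one could look at how strongly the vertical acceleration fluctuates over a short window and flip the assistance profile when the fluctuation crosses a threshold:

```python
# Illustrative walk/run mode switch from vertical-acceleration fluctuation.
# The window length, threshold, and synthetic signals are assumptions; they
# are not the parameters of the published controller.
import numpy as np

FS = 100              # sample rate in Hz (assumed)
WINDOW = FS           # one-second rolling window
RUN_THRESHOLD = 4.0   # std-dev of vertical acceleration, m/s^2 (assumed)

def gait_mode(accel_window: np.ndarray) -> str:
    """Classify 'walk' vs 'run' from how strongly vertical acceleration fluctuates."""
    return "run" if np.std(accel_window) > RUN_THRESHOLD else "walk"

# Synthetic stand-ins: running produces larger, faster vertical oscillations.
t = np.arange(WINDOW) / FS
walk_accel = 2.0 * np.sin(2 * np.pi * 2.0 * t)   # ~2 Hz steps, small amplitude
run_accel = 8.0 * np.sin(2 * np.pi * 3.0 * t)    # ~3 Hz steps, large amplitude

print(gait_mode(walk_accel))  # -> walk
print(gait_mode(run_accel))   # -> run
```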

Smart Booster
Initial trials on treadmills were highly positive. Six male volunteers of similar age and build donned the exosuit and either ran or walked on the treadmill at varying inclines. The algorithm performed perfectly at distinguishing between the two gaits in all conditions, even at steep angles.

An outdoor test with eight volunteers also proved the algorithm nearly perfect. Even on uneven terrain, only two steps out of all test trials were misclassified. In additional trials on mud and snow, the algorithm performed just as well.

“The system allows the wearer to use their preferred gait for each speed,” the team said.

Software excellence translated to performance. A test found that the exosuit reduced the metabolic cost of walking by over nine percent and of running by four percent. It may not sound like much, but the range of improvement is meaningful in athletic performance. Putting things into perspective, the team said, the metabolic rate reduction during walking is similar to taking 16 pounds off at the waist.

The Wearable Exosuit Revolution
The study’s lightweight exoshorts are hardly the only players in town. Back in 2017, SRI International’s spin-off, Superflex, engineered an Aura suit to support mobility in the elderly. The Aura used a different mechanism: rather than a pulley system, it incorporated a type of smart material that contracts in a manner similar to human muscles when zapped with electricity.

Embedded with a myriad of motion sensors (accelerometers and gyroscopes), the Aura drew its smarts from mini-computers that measured how fast the wearer was moving and tracked the user’s posture. The data were integrated and processed locally inside hexagon-shaped computing pods near the thighs and upper back. The pods also acted as the control center for sending electrical zaps to give the wearer a boost when needed.

Around the same time, a collaboration between Harvard’s Wyss Institute and ReWalk Robotics introduced a fabric-based wearable robot to assist a wearer’s legs for balance and movement. Meanwhile, a Swiss team coated normal fabric with electroactive material to weave soft, pliable artificial “muscles” that move with the skin.

Although health support is the current goal, the military is obviously interested in similar technologies to enhance soldiers’ physicality. Superflex’s Aura, for example, was originally inspired by technology born from DARPA’s Warrior Web Program, which aimed to reduce a soldier’s mechanical load.

That said, military gear has a long history of trickling down to consumer use. Just as camouflage, cargo pants, and GORE-TEX made their way into the consumer ecosphere, it’s not hard to imagine your local Target eventually stocking intelligent exowear.

Image and Video Credit: Wyss Institute at Harvard University.

Posted in Human Robots

#435423 Moving Beyond Mind-Controlled Limbs to ...

Brain-machine interface enthusiasts often gush about “closing the loop.” It’s for good reason. On the implant level, it means engineering smarter probes that only activate when they detect faulty electrical signals in brain circuits. Elon Musk’s Neuralink—among other players—is readily pursuing these bi-directional implants, which both measure and zap the brain.

But to scientists laboring to restore functionality to paralyzed patients or amputees, “closing the loop” has broader connotations. Building smart mind-controlled robotic limbs isn’t enough; the next frontier is restoring sensation in offline body parts. To truly meld biology with machine, the robotic appendage has to “feel one” with the body.

This month, two studies from Science Robotics describe complementary ways forward. In one, scientists from the University of Utah paired a state-of-the-art robotic arm—the DEKA LUKE—with electrical stimulation of the remaining nerves above the attachment point. Using artificial zaps to mimic the skin’s natural response patterns to touch, the team dramatically increased the patient’s ability to identify objects. Without much training, he could easily discriminate between small and large objects, and between soft and hard ones, while blindfolded and wearing headphones.

In another, a team based at the National University of Singapore took inspiration from our largest organ, the skin. Mimicking the neural architecture of biological skin, the engineered “electronic skin” not only senses temperature, pressure, and humidity, but continues to function even when scraped or otherwise damaged. Thanks to artificial nerves that transmit signals far faster than our biological ones, the flexible e-skin shoots electrical data 1,000 times quicker than human nerves.

Together, the studies marry neuroscience and robotics. Representing the latest push towards closing the loop, they show that integrating biological sensibilities with robotic efficiency isn’t impossible (super-human touch, anyone?). But more immediately—and more importantly—they’re beacons of hope for patients hoping to regain their sense of touch.

For one of the participants, a late middle-aged man with speckled white hair who lost his forearm 13 years ago, superpowers, cyborgs, or razzle-dazzle brain implants are the last thing on his mind. After a barrage of emotionally-neutral scientific tests, he grasped his wife’s hand and felt her warmth for the first time in over a decade. His face lit up in a blinding smile.

That’s what scientists are working towards.

Biomimetic Feedback
The human skin is a marvelous thing. Not only does it rapidly detect a multitude of sensations—pressure, temperature, itch, pain, humidity—its wiring “binds” disparate signals together into a sensory fingerprint that helps the brain identify what it’s feeling at any moment. Thanks to over 45 miles of nerves that connect the skin, muscles, and brain, you can pick up a half-full coffee cup, knowing that it’s hot and sloshing, while staring at your computer screen. Unfortunately, this complexity is also why restoring sensation is so hard.

The sensory electrode array implanted in the participant’s arm. Image Credit: George et al., Sci. Robot. 4, eaax2352 (2019).
However, complex neural patterns can also be a source of inspiration. Previous cyborg arms are often paired with so-called “standard” sensory algorithms to induce a basic sense of touch in the missing limb. Here, electrodes zap residual nerves with intensities proportional to the contact force: the harder the grip, the stronger the electrical feedback. Although seemingly logical, that’s not how our skin works. Every time the skin touches or leaves an object, its nerves shoot strong bursts of activity to the brain; while in full contact, the signal is much lower. The resulting electrical strength curve resembles a “U.”
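
To make that contrast concrete, here is a toy sketch of the two encoding schemes; the constants are invented for illustration and are not the stimulation parameters used in the study. The “standard” encoder simply scales stimulation with grip force, while the biomimetic one fires strong bursts at contact onset and release and settles to a lower level while holding, producing the U-shaped profile described above:

```python
# Toy comparison of proportional vs. biomimetic ("U-shaped") stimulation encoding.
# All constants are illustrative, not the study's actual parameters.
import numpy as np

def standard_encoder(force: np.ndarray) -> np.ndarray:
    """Stimulation amplitude simply proportional to contact force."""
    return force / force.max()

def biomimetic_encoder(force: np.ndarray) -> np.ndarray:
    """Strong bursts when contact starts or ends, lower drive while holding."""
    contact = (force > 0).astype(float)
    onset_offset = np.abs(np.diff(contact, prepend=0.0))  # 1 at touch and at release
    sustained = 0.3 * contact                              # reduced steady-state level
    return np.clip(sustained + onset_offset, 0.0, 1.0)

# Simulated grasp at 100 Hz: one second free, two seconds holding, one second free
force = np.concatenate([np.zeros(100), np.full(200, 5.0), np.zeros(100)])
proportional = standard_encoder(force)
biomimetic = biomimetic_encoder(force)
# 'biomimetic' spikes at samples 100 and 300 and sits at 0.3 in between: a "U" shape.
```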

The LUKE hand. Image Credit: George et al., Sci. Robot. 4, eaax2352 (2019).
The team decided to directly compare standard algorithms with one that better mimics the skin’s natural response. They fitted a volunteer with a robotic LUKE arm and implanted an array of electrodes into his forearm—right above the amputation—to stimulate the remaining nerves. When the team activated different combinations of electrodes, the man reported sensations of vibration, pressure, tapping, or a sort of “tightening” in his missing hand. Some combinations of zaps also made him feel as if he were moving the robotic arm’s joints.

In all, the team was able to carefully map nearly 120 sensations to different locations on the phantom hand, which they then overlapped with contact sensors embedded in the LUKE arm. For example, when the patient touched something with his robotic index finger, the relevant electrodes sent signals that made him feel as if he were brushing something with his own missing index fingertip.

Standard sensory feedback already helped: even with simple electrical stimulation, the man could tell apart size (golf versus lacrosse ball) and texture (foam versus plastic) while blindfolded and wearing noise-canceling headphones. But when the team implemented two types of neuromimetic feedback—electrical zaps that resembled the skin’s natural response—his performance dramatically improved. He was able to identify objects much faster and more accurately under their guidance. Outside the lab, he also found it easier to cook, feed, and dress himself. He could even text on his phone and complete routine chores that were previously too difficult, such as stuffing an insert into a pillowcase, hammering a nail, or eating hard-to-grab foods like eggs and grapes.

The study shows that the brain more readily accepts biologically-inspired electrical patterns, making it a relatively easy—but enormously powerful—upgrade that seamlessly integrates the robotic arms with the host. “The functional and emotional benefits…are likely to be further enhanced with long-term use, and efforts are underway to develop a portable take-home system,” the team said.

E-Skin Revolution: Asynchronous Coded Electronic Skin (ACES)
Flexible electronic skins also aren’t new, but the second team presented an upgrade in both speed and durability while retaining multiplexed sensory capabilities.

Starting from a combination of rubber, plastic, and silicon, the team embedded over 200 sensors onto the e-skin, each capable of discerning contact, pressure, temperature, and humidity. They then looked to the skin’s nervous system for inspiration. Our skin is embedded with a dense array of nerve endings that individually transmit different types of sensations, which are integrated inside hubs called ganglia. Compared to having every single nerve ending directly ping data to the brain, this “gather, process, and transmit” architecture rapidly speeds things up.

The team tapped into this biological architecture. Rather than pairing each sensor with a dedicated receiver, ACES sends all sensory data to a single receiver—an artificial ganglion. This setup lets the e-skin’s wiring work as a whole system, as opposed to individual electrodes. Every sensor transmits its data using a characteristic pulse, which allows it to be uniquely identified by the receiver.
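
A very rough sketch of that idea follows; the signatures and decoder are invented for clarity and are far simpler than ACES itself. Each sensor is assigned a unique pseudo-random pulse signature, every active sensor adds its signature onto one shared line, and the single receiver recovers who fired by correlating the combined signal against each known signature:

```python
# Rough illustration of signature-coded transmission on a single shared line.
# The coding scheme here is an invented stand-in, not the actual ACES protocol.
import numpy as np

rng = np.random.default_rng(0)
N_SENSORS, SIG_LEN = 16, 128

# Each sensor gets a unique pseudo-random +/-1 pulse signature (assumed scheme).
signatures = rng.choice([-1.0, 1.0], size=(N_SENSORS, SIG_LEN))

def transmit(active_sensors):
    """All active sensors superimpose their signatures onto one receiver line."""
    line = np.zeros(SIG_LEN)
    for s in active_sensors:
        line += signatures[s]
    return line

def decode(line, threshold=0.5):
    """Correlate the combined signal with each signature to find which sensors fired."""
    scores = signatures @ line / SIG_LEN  # near 1 for active sensors, near 0 otherwise
    return [i for i, score in enumerate(scores) if score > threshold]

fired = [2, 7, 11]
print(decode(transmit(fired)))  # -> [2, 7, 11] with this seed and signature length
```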

The gains were immediate. First was speed. Normally, sensory data from multiple individual electrodes need to be periodically combined into a map of pressure points. Here, data from thousands of distributed sensors can independently go to a single receiver for further processing, massively increasing efficiency—the new e-skin’s transmission rate is roughly 1,000 times faster than that of human skin.

Second was redundancy. Because data from individual sensors are aggregated, the system kept functioning even when individual receptors were damaged, making it far more resilient than previous attempts. Finally, the setup could easily scale up. Although the team only tested the idea with 240 sensors, theoretically the system should work with up to 10,000.

The team is now exploring ways to combine their invention with other material layers to make it water-resistant and self-repairable. As you might’ve guessed, an immediate application is to give robots something similar to complex touch. A sensory upgrade not only lets robots more easily manipulate tools, doorknobs, and other objects in hectic real-world environments, it could also make it easier for machines to work collaboratively with humans in the future (hey Wall-E, care to pass the salt?).

Dexterous robots aside, the team also envisions engineering better prosthetics. When coated onto cyborg limbs, for example, ACES may give them a better sense of touch that begins to rival the human skin—or perhaps even exceed it.

Regardless, efforts that adapt the functionality of the human nervous system to machines are finally paying off, and more are sure to come. Neuromimetic ideas may very well be the link that finally closes the loop.

Image Credit: Dan Hixson/University of Utah College of Engineering.

Posted in Human Robots