#435626 Video Friday: Watch Robots Make a Crepe ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. Every week, we also post a calendar of upcoming robotics events; here's what we have so far (send us your events!):

Robotronica – August 18, 2019 – Brisbane, Australia
CLAWAR 2019 – August 26-28, 2019 – Kuala Lumpur, Malaysia
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today's videos.

Team CoSTAR (JPL, MIT, Caltech, KAIST, LTU) has one of the more diverse teams of robots that we’ve seen:

[ Team CoSTAR ]

A team from Carnegie Mellon University and Oregon State University is sending ground and aerial autonomous robots into a Pittsburgh-area mine to prepare for this month’s DARPA Subterranean Challenge.

“Look at that fire extinguisher, what a beauty!” Expect to hear a lot more of that kind of weirdness during SubT.

[ CMU ]

Unitree Robotics is starting to batch-manufacture Laikago Pro quadrupeds, and if you buy four of them, they can carry you around in a chair!

I’m also really liking these videos from companies that are like, “We have a whole bunch of robot dogs now—what weird stuff can we do with them?”

[ Unitree Robotics ]

Why take a handful of pills every day for all the stuff that's wrong with you, when you could take one custom pill instead? Because custom pills are time-consuming to make, that’s why. But robots don’t care!

Multiply Labs’ factory is designed to operate in parallel. All the filling robots and all the quality-control robots operate at the same time. The robotic arm, meanwhile, shuttles dozens of trays up and down the production floor, making sure that each capsule is filled with the right drugs. The manufacturing cell shown in this article can produce 10,000 personalized capsules in an 8-hour shift. A single cell occupies just 128 square feet (12 square meters) on the production floor. This means that a regular production facility (~10,000 square feet, or 929 m²) can house 78 cells, for an overall output of 780,000 capsules per shift. This exceeds the output of most traditional manufacturers—while producing unique personalized capsules!
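
For a quick sanity check of those throughput numbers, here's a minimal back-of-the-envelope sketch in Python (all figures come straight from the paragraph above):

```python
# Back-of-the-envelope check of the claimed per-shift output.
FACILITY_SQFT = 10_000            # a regular production facility
CELL_SQFT = 128                   # footprint of one manufacturing cell
CAPSULES_PER_CELL_SHIFT = 10_000  # one cell's output in an 8-hour shift

cells = FACILITY_SQFT // CELL_SQFT        # -> 78 cells
output = cells * CAPSULES_PER_CELL_SHIFT  # -> 780,000 capsules per shift
print(f"{cells} cells -> {output:,} capsules per shift")
```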

[ Multiply Labs ]

Thanks Fred!

If you’re getting tired of all those annoying drones that sound like giant bees, just have a listen to this turbine-powered one:

[ Malloy Aeronautics ]

In retrospect, it’s kind of amazing that nobody has bothered to put a functional robotic dog head on a quadruped robot before this, right?

Equipped with sensors, high-tech radar imaging, cameras and a directional microphone, this 100-pound (45-kilogram) super-robot is still a “puppy-in-training.” Just like a regular dog, he responds to commands such as “sit,” “stand,” and “lie down.” Eventually, he will be able to understand and respond to hand signals, detect different colors, comprehend many languages, coordinate his efforts with drones, distinguish human faces, and even recognize other dogs.

As an information scout, Astro’s key missions will include detecting guns, explosives, and gun residue to assist police, the military, and security personnel. This robodog’s talents won’t end there: he can also be programmed to assist as a service dog for the visually impaired or to provide medical diagnostic monitoring. The MPCR team is also training Astro to serve as a first responder for search-and-rescue missions such as hurricane reconnaissance, as well as military maneuvers.

[ FAU ]

And now this amazing video, “The Coke Thief,” from ICRA 2005 (!):

[ Paper ]

CYBATHLON Series events each put the focus on one or two of the six disciplines and are organized in cooperation with international universities and partners. The CYBATHLON Arm and Leg Prosthesis Series took place in Karlsruhe, Germany, from 16 to 18 May and was organized in cooperation with the Karlsruhe Institute of Technology (KIT) and the REHAB Karlsruhe trade fair.

The CYBATHLON Wheelchair Series took place in Kawasaki, Japan on 5 May 2019 and was organized in cooperation with the CYBATHLON Wheelchair Series Japan Organizing Committee and supported by the Swiss Embassy.

[ Cybathlon ]

Rainbow crepe robot!

There’s also this other robot, which I assume does something besides what's in the video, because otherwise it appears to be a massively overengineered way of shaping cooked rice into a chubby triangle.

[ PC Watch ]

The Weaponized Plastic Fighting League at Fetch Robotics has had another season of shardation, deintegration, explodification, and other -tions. Here are a couple of fan-favorite match videos:

[ Fetch Robotics ]

This video is in German, but it’s worth watching for the three seconds of extremely satisfying footage showing a robot twisting dough into pretzels.

[ Festo ]

Putting brains into farming equipment is a no-brainer, since it’s a semi-structured environment that's generally clear of wayward humans driving other vehicles.

[ Lovol ]

Thanks Fan!

Watch some robots assemble suspiciously Lego-like (but definitely not actually Lego) minifigs.

[ DevLinks ]

The Robotics Innovation Facility (RIFBristol) helps businesses, entrepreneurs, researchers, and public sector bodies to embrace the concept of ‘Industry 4.0’. From training your staff in robotics, and demonstrating how automation can improve your manufacturing processes, to prototyping and validating your new innovations—we can provide the support you need.

[ RIF ]

Ryan Gariepy from Clearpath Robotics (and a bunch of other stuff) gave a talk at ICRA with the title of “Move Fast and (Don’t) Break Things: Commercializing Robotics at the Speed of Venture Capital,” which is more interesting when you know that this year’s theme was “Notable Failures.”

[ Clearpath Robotics ]

In this week’s episode of Robots in Depth, Per interviews Michael Nielsen, a computer vision researcher at the Danish Technological Institute.

Michael has worked with a fusion of sensors—stereo vision, thermography, radar, lidar, and high-frame-rate cameras—merging multiple images for high dynamic range. All this to be able to navigate the tricky situations in a farm field, where you need to drive close to, or even into, whatever is being grown. Multibaseline cameras were also used to provide range detection over a wide range of distances.
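
As a flavor of the high-dynamic-range merging mentioned above, here's a minimal exposure-fusion sketch. It's only an illustration of the general technique—a weighted average in linear radiance space—not the institute's actual pipeline, and the frames and exposure times are made up:

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Naive HDR merge: weight mid-range pixels most (they are least
    likely to be under- or over-exposed), then average the frames'
    estimated radiance."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, exposure_times):
        f = img.astype(np.float64) / 255.0
        w = 1.0 - 2.0 * np.abs(f - 0.5)  # hat-shaped weight, peak at mid-gray
        acc += w * f / t                 # divide by exposure time -> radiance
        wsum += w
    return acc / np.maximum(wsum, 1e-8)

# Hypothetical bracketed frames from a high-frame-rate camera:
frames = [np.random.randint(0, 256, (480, 640), np.uint8) for _ in range(3)]
radiance = merge_exposures(frames, exposure_times=[1/1000, 1/250, 1/60])
```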

We also learn how he expanded his work into sorting recycling, a very challenging problem, and hear about the difficulties of using time-of-flight and sheet-of-light cameras. He then shares some good results using stereo vision, especially combined with blue-light random-dot projectors.

[ Robots in Depth ]


#435619 Video Friday: Watch This Robot Dog ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
RoboBusiness 2019 – October 1-3, 2019 – Santa Clara, CA, USA
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

Team PLUTO (University of Pennsylvania, Ghost Robotics, and Exyn Technologies) put together this video giving us a robot’s-eye-view (or whatever they happen to be using for eyes) of the DARPA Subterranean Challenge tunnel circuits.

[ PLUTO ]

Zhifeng Huang has been improving his jet-stepping humanoid robot, which features new hardware and the ability to take larger and more complex steps.

This video reports the latest progress of an ongoing project that uses a ducted-fan propulsion system to improve a humanoid robot’s ability to step over large ditches. The robot’s swing foot can land not only in front of it but also to the side. While maintaining quasi-static balance, the robot was able to step over a ditch 450 mm wide (up to 97 percent of its leg length) in 3D stepping.

[ Paper ]

Thanks Zhifeng!

These underactuated hands from Matei Ciocarlie’s lab at Columbia are magically able to reconfigure themselves to grasp different object types with just one or two motors.

[ Paper ] via [ ROAM Lab ]

This is one reason we should pursue not “autonomous cars” but “fully autonomous cars” that never require humans to take over. We can’t be trusted.

During our early days as the Google self-driving car project, we invited some employees to test our vehicles on their commutes and weekend trips. What we were testing at the time was similar to the highway driver-assist features that are available on cars today, where the car takes over the boring parts of the driving, but if something outside its ability occurs, the driver has to take over immediately.

What we saw was that our testers put too much trust in that technology. They were doing things like texting, applying makeup, and even falling asleep, which made it clear they would not be ready to take over driving if the vehicle asked them to. This is why we believe that nothing short of full autonomy will do.

[ Waymo ]

Buddy is a DIY and fetchingly minimalist social robot (of sorts) that will be coming to Kickstarter this month.

We have created a new Arduino kit. His name is Buddy. He is a DIY social robot to serve as a replacement for Jibo, Cozmo, or any of the other bots that are no longer available. Fully 3D printed and supported, he adds much more to our series of Arduino STEM robotics kits.

Buddy is able to look around and map his surroundings and react to changes within them. He can be surprised and he will always have a unique reaction to changes. The kit can be built very easily in less than an hour. It is even robust enough to take the abuse that kids can give it in a classroom.

[ Littlebots ]

The android Mindar, based on the Buddhist deity of mercy, preaches sermons at Kodaiji temple in Kyoto, and its human colleagues predict that with artificial intelligence it could one day acquire unlimited wisdom. Developed at a cost of almost $1 million (¥106 million) in a joint project between the Zen temple and robotics professor Hiroshi Ishiguro, the robot teaches about compassion and the dangers of desire, anger and ego.

[ Japan Times ]

I’m not sure whether it’s the sound or what, but this thing scares me for some reason.

[ BIRL ]

This gripper uses magnets as a sort of adjustable spring for dynamic stiffness control, which seems pretty clever.

[ Buffalo ]

What a package of medicine sees while being flown by drone from a hospital to a remote clinic in the Dominican Republic. The drone flew 11 km horizontally and 800 meters vertically, and I can’t even imagine what it would take to make that drive.

[ WeRobotics ]

My first ride in a fully autonomous car was at Stanford in 2009. I vividly remember getting in the back seat of a descendant of Junior, and watching the steering wheel turn by itself as the car executed a perfect parking maneuver. Ten years later, it’s still fun to watch other people have that experience.

[ Waymo ]

Flirtey, the pioneer of the commercial drone delivery industry, has unveiled the much-anticipated first video of its next-generation delivery drone, the Flirtey Eagle. The aircraft designer and manufacturer also unveiled the Flirtey Portal, a sophisticated takeoff and landing platform that enables scalable store-to-door operations, and an autonomous software platform that enables drones to deliver safely to homes.

[ Flirtey ]

EPFL scientists are developing new approaches for improved control of robotic hands—in particular for amputees—that combine individual finger control and automation for improved grasping and manipulation. This interdisciplinary proof of concept between neuroengineering and robotics was successfully tested on three amputees and seven healthy subjects.

[ EPFL ]

This video is a few years old, but we’ll take any excuse to watch the majestic sage-grouse be majestic in all their majesticness.

[ UC Davis ]

I like the idea of a game of soccer (or football, to you weirdos in the rest of the world) where the ball has a mind of its own.

[ Sphero ]

Looks like the whole delivery glider idea is really taking off! Or, you know, not taking off.

Weird that they didn’t show the landing, because it sure looked like it was going to plow into the side of the hill at full speed.

[ Yates ] via [ sUAS News ]

This video is from a 2018 paper, but it’s not like we ever get tired of seeing quadrupeds do stuff, right?

[ MIT ]

Founder and Head of Product, Ian Bernstein, and Head of Engineering, Morgan Bell, have been involved in the Misty project for years and they have learned a thing or two about building robots. Hear how and why Misty evolved into a robot development platform, learn what some of the earliest prototypes did (and why they didn’t work for what we envision), and take a deep dive into the technology decisions that form the Misty II platform.

[ Misty Robotics ]

Lex Fridman interviews Vijay Kumar on the Artificial Intelligence Podcast.

[ AI Podcast ]

This week’s CMU RI Seminar is from Ross Knepper at Cornell, on Formalizing Teamwork in Human-Robot Interaction.

Robots out in the world today work for people but not with people. Before robots can work closely with ordinary people as part of a human-robot team in a home or office setting, robots need the ability to acquire a new mix of functional and social skills. Working with people requires a shared understanding of the task, capabilities, intentions, and background knowledge. For robots to act jointly as part of a team with people, they must engage in collaborative planning, which involves forming a consensus through an exchange of information about goals, capabilities, and partial plans. Often, much of this information is conveyed through implicit communication. In this talk, I formalize components of teamwork involving collaboration, communication, and representation. I illustrate how these concepts interact in the application of social navigation, which I argue is a first-class example of teamwork. In this setting, participants must avoid collision by legibly conveying intended passing sides via nonverbal cues like path shape. A topological representation using the braid groups enables the robot to reason about a small enumerable set of passing outcomes. I show how implicit communication of topological group plans achieves rapid convergence to a group consensus, and how a robot in the group can deliberately influence the ultimate outcome to maximize joint performance, yielding pedestrian comfort with the robot.

[ CMU RI ]

In this week’s episode of Robots in Depth, Per speaks with Julien Bourgeois about Claytronics, a project from Carnegie Mellon and Intel to develop “programmable matter.”

Julien started out as a computer scientist. He was always interested in robotics privately but then had the opportunity to get into micro robots when his lab was merged into the FEMTO-ST Institute. He later worked with Seth Copen Goldstein at Carnegie Mellon on the Claytronics project.

Julien shows an enlarged mock-up of the small robots that make up programmable matter, catoms, and speaks about how they are designed. Currently he is working on a unit that is one centimeter in diameter and he shows us the very small CPU that goes into that model.

[ Robots in Depth ]


#435589 Construction Robots Learn to Excavate by ...

Pavel Savkin remembers the first time he watched a robot imitate his movements. Minutes earlier, the engineer had finished “showing” the robotic excavator its new goal by directing its movements manually. Now, running on software Savkin helped design, the robot was reproducing his movements, gesture for gesture. “It was like there was something alive in there—but I knew it was me,” he said.

Savkin is the CTO of SE4, a robotics software project that styles itself the “driver” of a fleet of robots that will eventually build human colonies in space. For now, SE4 is focused on creating software that can help developers communicate with robots, rather than on building hardware of its own.

The Tokyo-based startup showed off an industrial arm from Universal Robots that was running SE4’s proprietary software at SIGGRAPH in July. SE4’s demonstration at the Los Angeles innovation conference drew the company’s largest audience yet. The robot, nicknamed Squeezie, stacked real blocks as directed by SE4 research engineer Nathan Quinn, who wore a VR headset and used handheld controls to “show” Squeezie what to do.

As Quinn manipulated blocks in a virtual 3D space, the software learned a set of ordered instructions to be carried out in the real world. That order is essential for remote operations, says Quinn. To build remotely, developers need a way to communicate instructions to robotic builders on location. In the age of digital construction and industrial robotics, giving a computer a blueprint for what to build is a well-explored art. But operating on a distant object—especially under conditions that humans haven’t experienced themselves—presents challenges that only real-time communication with operators can solve.

The problem is that, in an unpredictable setting, even simple tasks require not only instruction from an operator, but constant feedback from the changing environment. Five years ago, the Swedish fiber network provider umea.net (part of the private Umeå Energy utility) took advantage of the virtual reality boom to promote its high-speed connections with the help of a viral video titled “Living with Lag: An Oculus Rift Experiment.” The video is still circulated in VR and gaming circles.

In the experiment, volunteers donned headgear that replaced their real-time biological senses of sight and sound with camera and audio feeds of their surroundings—both set at a 3-second delay. Thus equipped, volunteers attempted to complete everyday tasks like playing ping-pong, dancing, cooking, and walking on a beach, with decidedly slapstick results.

At interplanetary distances—including SE4’s dream of construction projects on Mars—the limiting factor in communication speed is not an artificial delay but the laws of physics. The shifting relative positions of Earth and Mars mean that communications between the planets—even at the speed of light—can take anywhere from 3 to 22 minutes.
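
Those one-way delay figures follow directly from the Earth-Mars separation at closest and farthest approach; here's the arithmetic as a quick sketch (the distances are approximate published values, not taken from this article):

```python
C = 299_792_458  # speed of light, m/s

# Approximate Earth-Mars separation at closest and farthest approach:
for label, dist_km in [("closest", 54.6e6), ("farthest", 401e6)]:
    minutes = dist_km * 1_000 / C / 60
    print(f"{label}: one-way light delay of about {minutes:.0f} minutes")
```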

A long-distance relationship

Imagine trying to manage a construction project from across an ocean without the benefit of intelligent workers: sending a ship to an unknown world with a construction crew and blueprints for a log cabin, and four months later receiving a letter back asking how to cut down a tree. The parallel problem in long-distance construction with robots, according to SE4 CEO Lochlainn Wilson, is that automation relies on predictability. “Every robot in an industrial setting today is expecting a controlled environment.”

Platforms for applying AR and VR systems to teach tasks to artificial intelligences, as SE4 does, are already proliferating in manufacturing, healthcare, and defense. But all of the related communications systems are bound by physics and, specifically, the speed of light.

The same fundamental limitation applies in space. “Our communications are light-based, whether they’re radio or optical,” says Laura Seward Forczyk, a planetary scientist and consultant for space startups. “If you’re going to Mars and you want to communicate with your robot or spacecraft there, you need to have it act semi- or mostly-independently so that it can operate without commands from Earth.”

Semantic control

That’s exactly what SE4 aims to do. By teaching robots to group micro-movements into logical units—like all the steps to building a tower of blocks—the Tokyo-based startup lets robots make simple relational judgments that would allow them to receive a full set of instruction modules at once and carry them out in order. This sidesteps the latency issue in real-time bilateral communications that could hamstring a project or at least make progress excruciatingly slow.

The key to the platform, says Wilson, is the team’s proprietary operating software, “Semantic Control.” Just as in linguistics and philosophy, “semantics” refers to meaning itself, and meaning is the key to a robot’s ability to make even the smallest decisions on its own. “A robot can scan its environment and give [raw data] to us, but it can’t necessarily identify the objects around it and what they mean,” says Wilson.

That’s where human intelligence comes in. As part of the demonstration phase, the human operator of an SE4-controlled machine “annotates” each object in the robot’s vicinity with meaning. By labeling objects in the VR space with useful information—like which objects are building material and which are rocks—the operator helps the robot make sense of its real 3D environment before the building begins.
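
To make the idea concrete, here is a minimal sketch of what "annotated objects plus an ordered instruction module" could look like in code. This is our illustration of the concept, not SE4's actual Semantic Control API; every name in it is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AnnotatedObject:
    """An object the operator has labeled with meaning in VR."""
    object_id: str
    kind: str        # e.g. "building_material" or "rock"
    position: tuple  # (x, y, z) in the robot's frame

@dataclass
class Instruction:
    """One step of an ordered plan, phrased relationally so the robot
    can re-resolve positions against its live environment on site."""
    action: str            # e.g. "pick" or "place"
    target: str            # object_id to act on
    relative_to: str = ""  # optional reference object

# The operator's VR demonstration compiles into a scene plus an ordered
# module that can be sent to Mars in one shot, sidestepping latency:
scene = {
    "block_1": AnnotatedObject("block_1", "building_material", (0.2, 0.0, 0.0)),
    "block_2": AnnotatedObject("block_2", "building_material", (0.5, 0.1, 0.0)),
    "rock_1": AnnotatedObject("rock_1", "rock", (0.9, -0.3, 0.0)),
}
plan = [
    Instruction("pick", "block_2"),
    Instruction("place", "block_2", relative_to="block_1"),
]

for step in plan:
    obj = scene[step.target]
    assert obj.kind == "building_material", "only labeled material gets built with"
    suffix = f" on {step.relative_to}" if step.relative_to else ""
    print(f"{step.action} {step.target}{suffix}")
```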

Giving robots the tools to deal with a changing environment is an important step toward allowing the AI to be truly independent, but it’s only an initial step. “We’re not letting it do absolutely everything,” said Quinn. “Our robot is good at moving an object from point A to point B, but it doesn’t know the overall plan.” Wilson adds that delegating environmental awareness and raw mechanical power to separate agents is the optimal relationship for a mixed human-robot construction team; it “lets humans do what they’re good at, while robots do what they do best.”

This story was updated on 4 September 2019.


#435583 Soft Self-Healing Materials for Robots ...

If there’s one thing we know about robots, it’s that they break. They break, like, literally all the time. The software breaks. The hardware breaks. The bits that you think could never, ever, ever possibly break end up breaking just when you need them not to break the most, and then you have to try to explain what happened to your advisor who’s been standing there watching your robot fail and then stay up all night fixing the thing that seriously was not supposed to break.

While most of this is just a fundamental characteristic of robots that can’t be helped, the European Commission is funding a project called SHERO (Self HEaling soft RObotics) to try and solve at least some of those physical robot breaking problems through the use of structural materials that can autonomously heal themselves over and over again.

SHERO is a three-year, €3 million collaboration between Vrije Universiteit Brussel, University of Cambridge, École Supérieure de Physique et de Chimie Industrielles de la ville de Paris (ESPCI-Paris), and Swiss Federal Laboratories for Materials Science and Technology (Empa). As the name SHERO suggests, the goal of the project is to develop soft materials that can completely recover from the kinds of damage that robots are likely to suffer in day-to-day operations, as well as the occasional more extreme accident.

Most materials, especially soft materials, are fixable somehow, whether it’s with super glue or duct tape. But fixing things involves a human first identifying when they’re broken, and then performing a potentially skill-, labor-, time-, and money-intensive task. SHERO’s soft materials will, eventually, make this entire process autonomous, allowing robots to self-identify damage and initiate healing on their own.

Photos: SHERO Project

The damaged robot finger [top] can operate normally after healing itself.

How the self-healing material works
What these self-healing materials can do is really pretty amazing. The researchers are actually developing two different types—the first one heals itself when there’s an application of heat, either internally or externally, which gives some control over when and how the healing process starts. For example, if the robot is handling stuff that’s dirty, you’d want to get it cleaned up before healing it so that dirt doesn’t become embedded in the material. This could mean that the robot either takes itself to a heating station, or it could activate some kind of embedded heating mechanism to be more self-sufficient.

The second kind of self-healing material is autonomous, in that it will heal itself at room temperature without any additional input, and is probably more suitable for relatively minor scrapes and cracks. Here are some numbers about how well the healing works:

Autonomous self-healing polymers do not require heat. They can heal damage at room temperature. Developing soft robotic systems from autonomous self-healing polymers eliminates the need for additional heating devices… The healing, however, takes some time. The healing efficiency after 3, 7, and 14 days is 62 percent, 91 percent, and 97 percent, respectively.

This material was used to develop a healable soft pneumatic hand. Large cuts can be healed entirely without the need of an external heat stimulus. Depending on the size of the damage, and even more on its location, the healing takes anywhere from seconds to a week. Damage at locations on the actuator that are subjected to very small stresses during actuation healed instantaneously. Larger damage, like cutting the actuator completely in half, took 7 days to heal. But even this severe damage could be healed completely without the need of any external stimulus.

Applications of self-healing robots
Both of these materials can be mixed together, and their mechanical properties can be customized so that the structure they’re part of can be tuned to move in different ways. The researchers also plan on introducing flexible conductive sensors into the material, which will help sense damage as well as provide position feedback for control systems. A lot of development will happen over the next few years, and for more details, we spoke with Bram Vanderborght at Vrije Universiteit Brussel.

IEEE Spectrum: How easy or difficult or expensive is it to produce these materials? Will they add significant cost to robotic grippers?

Bram Vanderborght: They are definitely more expensive materials, but it’s also a matter of production volume. At the moment, we’ve made a few kilograms of the material (enough to make several demonstrators), and the price has already dropped significantly from when we ordered 100 grams in the first phase of the project. So the cost of the gripper will probably be higher [than a regular gripper], but you won’t need to replace it as often as grippers that wear out, so it can be an advantage.

Moreover, due to the method of 3D printing the material, the surface is smoother and airtight (so no post-processing is required to make it airtight). Also, the smooth surface is better for avoiding contamination in food handling, for example.

In commercial or industrial applications, gradual fatigue seems to be a more common issue than more abrupt trauma like cuts. How well does the self-healing work to improve durability over long periods of time?

We did not test for gradual fatigue over very long times. But both macroscopic and microscopic damage can be healed. So hopefully it can provide an answer here as well.

Image: SHERO Project

After developing a self-healing robot gripper, the researchers plan to use similar materials to build parts that can be used as the skeleton of robots, allowing them to repair themselves on a regular basis.

How much does the self-healing capability restrict the material properties? What are the limits for softness or hardness or smoothness or other characteristics of the material?

Typically, the mechanical properties of networked polymers are much better than those of thermoplastics. Our material is a networked polymer, but one in which the crosslinks are reversible. We can change quite a lot of parameters in the design of the materials, so we can develop very stiff materials (fracture strain at 1.24 percent) and very elastic ones (fracture strain at 450 percent). The big advantage of our material is that we can mix it to get intermediate properties. Moreover, at the interface of materials with different mechanical properties, we have the same chemical bonds, so the interface is perfect. Other materials may need to be glued, which gives local stresses and a weak spot.

When the material heals itself, is it less structurally sound in that spot? Can it heal damage that happens to the same spot over and over again?

In theory we can heal it an infinite number of times. When the wound is not perfectly aligned, that spot will of course become weaker. Also, too-high temperatures lead to irreversible bonds, and impurities lead to weak spots.

Besides grippers and skins, what other potential robotics applications would this technology be useful for?

Most self-healing materials available now are used for coatings. What we are developing are structural components, so the mechanical properties of the material need to be good enough for such applications. Part of the skeleton of the robot could be developed with such materials to make it lighter, since it can be designed for regular repair. And under exceptional loads it breaks and can be repaired, like our human body.

[ SHERO Project ]


#435522 Harvard’s Smart Exo-Shorts Talk to the ...

Exosuits don’t generally scream “fashionable” or “svelte.” Take the mind-controlled robotic exoskeleton that allowed a paraplegic man to kick off the World Cup back in 2014. Is it cool? Hell yeah. Is it practical? Not so much.

Yapping about wearability might seem childish when the technology already helps people with impaired mobility move around dexterously. But the lesson of the ill-fated Google “Glassholes”—the awkward, dorky head tilt and conspicuous voice commands—clearly shows that wearable computer assistants can’t just work technologically; they have to look natural and allow the user to behave as usual. They have to, in a sense, disappear.

To Dr. Jose Pons at the Legs + Walking Ability Lab in Chicago, exosuits need three main selling points to make it in the real world. One, they have to physically interact with their wearer and seamlessly deliver assistance when needed. Two, they should cognitively interact with the host to guide and control the robot at all times. Finally, they need to feel like a second skin—move with the user without adding too much extra mass or reducing mobility.

This week, a US-Korean collaboration delivered the whole shebang in a Lululemon-style skin-hugging package combined with a retro waist pack. The portable exosuit, weighing only 11 pounds, looks like a pair of spandex shorts but can support the wearer’s hip movement when needed. Unlike their predecessors, the shorts are embedded with sensors that let them know when the wearer is walking versus running by analyzing gait.

Switching between the two movement modes may not seem like much, but what naturally comes to our brains doesn’t translate directly to smart exosuits. “Walking and running have fundamentally different biomechanics, which makes developing devices that assist both gaits challenging,” the team said. Their algorithm, computed in the cloud, allows the wearer to easily switch between both, with the shorts providing appropriate hip support that makes the movement experience seamless.

To Pons, who was not involved in the research but wrote a perspective piece, the study is an exciting step towards future exosuits that will eventually disappear under the skin—that is, implanted neural interfaces to control robotic assistance or activate the user’s own muscles.

“It is realistic to think that we will witness, in the next several years…robust human-robot interfaces to command wearable robotics based on…the neural code of movement in humans,” he said.

A “Smart” Exosuit Hack
There are a few ways you can hack a human body to move with an exosuit. One is using implanted electrodes inside the brain or muscles to decipher movement intent. With heavy practice, a neural implant can help paralyzed people walk again or dexterously move external robotic arms. But because the technique requires surgery, it’s not an immediate sell for people who experience low mobility because of aging or low muscle tone.

The other approach is to look to biophysics. Rather than decoding neural signals that control movement, here the idea is to measure gait and other physical positions in space to decipher intent. As you can probably guess, accurately deciphering user intent isn’t easy, especially when the wearable tries to accommodate multiple gaits. But the gains are many: there’s no surgery involved, and the wearable is low in energy consumption.

Double Trouble
The authors decided to tackle an everyday situation. You’re walking to catch the train to work, realize you’re late, and immediately start sprinting.

That seemingly easy conversion hides a complex switch in biomechanics. When you walk, your legs act like an inverted pendulum, swinging over the stance foot in a predictable way. When you run, however, the legs move more like a spring-loaded system, and the joints involved in the motion differ from those of a casual stroll. Engineering an assistive wearable for each is relatively simple; making one for both is exceedingly hard.

Led by Dr. Conor Walsh at Harvard University, the team started with an intuitive idea: assisted walking and running requires specialized “actuation” profiles tailored to both. When the user is moving in a way that doesn’t require assistance, the wearable needs to be out of the way so that it doesn’t restrict mobility. A quick analysis found that assisting hip extension has the largest impact, because it’s important to both gaits and doesn’t add mass to the lower legs.

Building on that insight, the team made a waist belt connected to two thigh wraps, similar to a climbing harness. Two electrical motors embedded inside the device connect the waist belt to other components through a pulley system to help the hip joints move. The whole contraption weighed about 11 lbs and didn’t obstruct natural movement.

Next, the team programmed two separate supporting profiles for walking and running. The goal was to reduce the “metabolic cost” for both movements, so that the wearer expends as little energy as needed. To switch between the two programs, they used a cloud-based classification algorithm to measure changes in energy fluctuation to figure out what mode—running or walking—the user is in.
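
As an illustration of the kind of gait classification described above—walking and running produce very different fluctuation energy in body motion—here's a minimal thresholded-variance sketch. It is only a stand-in for the team's actual cloud-based classifier; the window length, threshold, and signals are invented for illustration:

```python
import numpy as np

def classify_gait(vertical_accel, threshold=4.0):
    """Label a short window of vertical acceleration (m/s^2, gravity
    removed) as 'walking' or 'running' from its fluctuation energy.
    Running has an aerial phase, so its vertical acceleration swings
    much harder than walking's; the variance separates the two."""
    return "running" if np.var(vertical_accel) > threshold else "walking"

# Hypothetical 1-second IMU windows sampled at 100 Hz:
t = np.linspace(0, 1, 100)
walk = 1.5 * np.sin(2 * np.pi * 2 * t)  # gentle ~2 Hz oscillation
run = 8.0 * np.sin(2 * np.pi * 3 * t)   # much harder ~3 Hz oscillation
print(classify_gait(walk), classify_gait(run))  # -> walking running
```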

Smart Booster
Initial trials on treadmills were highly positive. Six male volunteers of similar age and build donned the exosuit and either ran or walked on the treadmill at varying inclines. The algorithm performed perfectly at distinguishing between the two gaits in all conditions, even at steep angles.

An outdoor test with eight volunteers also proved the algorithm nearly perfect. Even on uneven terrain, only two steps out of all test trials were misclassified. In an additional trial on mud or snow, the algorithm performed just as well.

“The system allows the wearer to use their preferred gait for each speed,” the team said.

Software excellence translated to performance. A test found that the exosuit reduced the metabolic cost of walking by over nine percent and of running by four percent. It may not sound like much, but that range of improvement is meaningful in athletic performance. Putting things into perspective, the team said, the metabolic rate reduction during walking is similar to taking 16 pounds off at the waist.

The Wearable Exosuit Revolution
The study’s lightweight exoshorts are hardly the only players in town. Back in 2017, SRI International’s spin-off, Superflex, engineered an Aura suit to support mobility in the elderly. The Aura used a different mechanism: rather than a pulley system, it incorporated a type of smart material that contracts in a manner similar to human muscles when zapped with electricity.

Embedded with a myriad of motion sensors—accelerometers and gyroscopes—Aura’s smartness came from mini-computers that measured how fast the wearer was moving and tracked the user’s posture. The data were integrated and processed locally inside hexagon-shaped computing pods near the thighs and upper back. The pods also acted as the control center for sending electrical zaps to give the wearer a boost when needed.

Around the same time, a collaboration between Harvard’s Wyss Institute and ReWalk Robotics introduced a fabric-based wearable robot to assist a wearer’s legs for balance and movement. Meanwhile, a Swiss team coated normal fabric with electroactive material to weave soft, pliable artificial “muscles” that move with the skin.

Although health support is the current goal, the military is obviously interested in similar technologies to enhance soldiers’ physicality. Superflex’s Aura, for example, was originally inspired by technology born from DARPA’s Warrior Web Program, which aimed to reduce a soldier’s mechanical load.

That said, military gear has a long history of trickling down to consumer use. Just as camouflage, cargo pants, and GORE-TEX made their way into the consumer ecosphere, it’s not hard to imagine your local Target eventually stocking intelligent exowear.

Image and Video Credit: Wyss Institute at Harvard University.
