Tag Archives: direct

#439089 Ingenuity’s Chief Pilot Explains How ...

On April 11, the Mars helicopter Ingenuity will take to the skies of Mars for the first time. It will do so fully autonomously, out of necessity—the time delay between Ingenuity’s pilots at the Jet Propulsion Laboratory and Jezero Crater on Mars makes manual or even supervisory control impossible. So the best that the folks at JPL can do is practice as much as they can in simulation, and then hope that the helicopter can handle everything on its own.

Here on Earth, simulation is a critical tool for many robotics applications, because it doesn’t rely on access to expensive hardware, is non-destructive, and can be run in parallel and at faster-than-real-time speeds to focus on solving specific problems. Once you think you’ve gotten everything figured out in simulation, you can always give it a try on the real robot and see how close you came. If it works in real life, great! And if not, well, you can tweak some stuff in the simulation and try again.

For the Mars helicopter, simulation is even more important, and the stakes are much higher. Testing the Mars helicopter under conditions matching what it’ll find on Mars is not physically possible on Earth. JPL has flown engineering models in Martian atmospheric conditions, and they’ve used an actuated tether to mimic Mars gravity, but there’s just no way to know what it’ll be like flying on Mars until they’ve actually flown on Mars. With that in mind, the Ingenuity team has been relying heavily on simulation, since that’s one of the best tools they have to prepare for their Martian flights. We talk with Ingenuity’s Chief Pilot, Håvard Grip, to learn how it all works.

Ingenuity Facts:
Body Size: a box of tissues

Brains: Qualcomm Snapdragon 801

Weight: 1.8 kilograms

Propulsion: Two 1.2m carbon fiber rotors

Navigation sensors: VGA camera, laser altimeter, inclinometer

Ingenuity is scheduled to make its first flight no earlier than April 11. Before liftoff, the Ingenuity team will conduct a variety of pre-flight checks, including verifying the responsiveness of the control system and spinning the blades up to full speed (2,537 rpm) without lifting off. If everything looks good, the first flight will consist of a 1 meter per second climb to 3 meters, 30 seconds of hover at 3 meters while rotating in place a bit, and then a descent to landing. If Ingenuity pulls this off, its entire mission will be a success. There will be more flights over the next few weeks, but all it takes is one to prove that autonomous helicopter flight on Mars is possible.

Last month, we spoke with Mars Helicopter Operations Lead Tim Canham about Ingenuity’s hardware, software, and autonomy, but we wanted to know more about how the Ingenuity team has been using simulation for everything from vehicle design to flight planning. To answer our questions, we talked with JPL’s Håvard Grip, who led the development of Ingenuity’s navigation and flight control systems. Grip also has the title of Ingenuity Chief Pilot, which is pretty awesome. He summarizes this role as “operating the flight control system to make the helicopter do what we want it to do.”

IEEE Spectrum: Can you tell me about the simulation environment that JPL uses for Ingenuity’s flight planning?

Håvard Grip: We developed a Mars helicopter simulation ourselves at JPL, based on a multi-body simulation framework that’s also developed at JPL, called DARTS/DSHELL. That's a system that has been in development at JPL for about 30 years now, and it's been used in a number of missions. And so we took that multibody simulation framework, and based on it we built our own Mars helicopter simulation, put together our own rotor model, our own aerodynamics models, and everything else that's needed in order to simulate a helicopter. We also had a lot of help from the rotorcraft experts at NASA Ames and NASA Langley.

Image: NASA/JPL

Ingenuity in JPL’s flight simulator.

Without being able to test on Mars, how much validation are you able to do of what you’re seeing in simulation?

We can do a fair amount, but it requires a lot of planning. When we made our first real prototype (with a full-size rotor that looked like what we were thinking of putting on Mars) we first spent a lot of time designing it and using simulation tools to guide that design, and when we were sufficiently confident that we were close enough, and that we understood enough about it, then we actually built the thing and designed a whole suite of tests in a vacuum chamber where we could replicate Mars atmospheric conditions. And those tests were before we tried to fly the helicopter—they were specifically targeted at what we call system identification, which has to do with figuring out what the true properties, the true dynamics of a system are, compared to what we assumed in our models. So then we got to see how well our models did, and in the places where they needed adjustment, we could go back and do that.
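
System identification of this kind can be sketched as a parameter-fitting problem: propose a model, measure the real system's response in the test chamber, and adjust the model's parameters until the two agree. The minimal Python sketch below illustrates the idea with a generic second-order mode and synthetic "measurements"; the model, parameter names, and numbers are illustrative assumptions, not JPL's actual tools or data.

```python
# Hypothetical sketch of system identification: fit model parameters so that a
# simple second-order model reproduces measured test-chamber response data.
# Names and numbers are illustrative, not JPL's actual models or data.
import numpy as np
from scipy.optimize import least_squares

def model_response(t, wn, zeta):
    """Step response of a second-order mode with natural frequency wn and damping zeta."""
    wd = wn * np.sqrt(1.0 - zeta**2)
    return 1.0 - np.exp(-zeta * wn * t) * (np.cos(wd * t) + zeta * wn / wd * np.sin(wd * t))

# "Measured" data: stands in for vacuum-chamber test results (synthetic here).
t = np.linspace(0.0, 2.0, 200)
measured = model_response(t, wn=12.0, zeta=0.05) + 0.01 * np.random.randn(t.size)

# Start from the values assumed before testing and let the data correct them.
assumed = np.array([10.0, 0.3])   # [wn, zeta] assumed a priori
fit = least_squares(lambda p: model_response(t, *p) - measured, assumed,
                    bounds=([1.0, 0.001], [100.0, 0.999]))
print("identified wn=%.2f rad/s, zeta=%.3f" % tuple(fit.x))
```

The gap between the assumed and identified values is exactly the correction that gets folded back into the simulation.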

The simulation work that we really started after that very first initial lift test, that’s what allowed us to unlock all of the secrets to building a helicopter that can fly on Mars.
—Håvard Grip, Ingenuity Chief Pilot

We did a lot of this kind of testing. It was a big campaign, in several stages. But there are of course things that you can't fully replicate, and you do depend on simulation to tie things together. For example, we can't truly replicate Martian gravity on Earth. We can replicate the atmosphere, but not the gravity, and so we have to do various things when we fly—either make the helicopter very light, or we have to help it a little bit by pulling up on it with a string to offload some of the weight. These things don't fully replicate what it will be like on Mars. We also can't simultaneously replicate the Mars aerodynamic environment and the physical and visual surroundings that the helicopter will be flying in. These are places where simulation tools definitely come in handy, with the ability to do full flight tests from A to B, with the helicopter taking off from the ground, running the flight software that it will be running on board, simulating the images that the navigation camera takes of the ground below as it flies, feeding that back into the flight software, and then controlling it.

To what extent can simulation really compensate for the kinds of physical testing that you can’t do on Earth?

It gives you a few different possibilities. We can take certain tests on Earth where we replicate key elements of the environment, like the atmosphere or the visual surroundings for example, and you can validate your simulation on those parameters that you can test on Earth. Then, you can combine those things in simulation, which gives you the ability to set up arbitrary scenarios and do lots and lots of tests. We can Monte Carlo things, we can do a flight a thousand times in a row, with small perturbations of various parameters and tease out what our sensitivities are to those things. And those are the kinds of things that you can't do with physical tests, both because you can't fully replicate the environment and also because of the resources that would be required to do the same thing a thousand times in a row.
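
The Monte Carlo campaign Grip describes can be sketched in a few lines: rerun the same simulated flight many times with small random perturbations of the uncertain parameters and look at the spread of outcomes. In the hypothetical sketch below, the flight-simulation call, the parameter names, and the numbers are placeholders, not JPL's actual interfaces.

```python
# Hypothetical Monte Carlo sensitivity sweep: rerun the same simulated flight
# many times with perturbed parameters and look at the spread of outcomes.
# simulate_flight() and all parameter names/values are placeholders.
import random
import statistics

def simulate_flight(air_density, wind_gust, rotor_stiffness):
    """Stand-in for a full flight simulation; returns landing position error in meters."""
    return abs(0.5 * wind_gust + 20.0 * (0.017 - air_density) + 0.1 * (1.0 - rotor_stiffness))

errors = []
for _ in range(1000):                                   # "a flight a thousand times in a row"
    errors.append(simulate_flight(
        air_density=random.gauss(0.017, 0.001),         # kg/m^3, perturbed around a nominal value
        wind_gust=random.gauss(0.0, 2.0),               # m/s
        rotor_stiffness=random.gauss(1.0, 0.05)))       # normalized

print("mean landing error %.2f m, 99th percentile %.2f m" %
      (statistics.mean(errors), sorted(errors)[int(0.99 * len(errors))]))
```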

Because there are limits to the physical testing we can do on Earth, there are elements where we know there's more uncertainty. On those aspects where the uncertainty is high, we tried to build in enough margin that we can handle a range of things. And simulation gives you the ability to then maybe play with those parameters, and put them at their outer limits, and test them beyond where the real parameters are going to be to make sure that you have robustness even in those extreme cases.

How do you make sure you’re not relying on simulation too much, especially since in some ways it’s your only option?

It’s about anchoring it in real data, and we’ve done a lot of that with our physical testing. I think what you’re referring to is making your simulation too perfect, and we’re careful to model the things that matter. For example, the simulated sensors that we use have realistic levels of simulated noise and bias in them, the navigation camera images have realistic levels of degradation, we have realistic disturbances from wind gusts. If you don’t properly account for those things, then you’re missing important details. So, we try to be as accurate as we can, and to capture that by overbounding in areas where we have a high degree of uncertainty.
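
To make "realistic levels of simulated noise and bias" concrete, one common pattern is to wrap every quantity the simulator knows perfectly in a sensor model before the flight software ever sees it. The sketch below does this for a hypothetical altimeter; the structure and numbers are illustrative assumptions, not Ingenuity's actual sensor models.

```python
# Illustrative sensor model: corrupt the simulator's "true" altitude with bias,
# noise, and quantization before handing it to the flight software, so the
# software is never tested against perfect measurements. Values are made up.
import random

class SimulatedAltimeter:
    def __init__(self, bias_m=0.03, noise_std_m=0.02, resolution_m=0.01):
        self.bias = random.gauss(0.0, bias_m)   # fixed per-run bias
        self.noise_std = noise_std_m
        self.resolution = resolution_m

    def read(self, true_altitude_m):
        noisy = true_altitude_m + self.bias + random.gauss(0.0, self.noise_std)
        return round(noisy / self.resolution) * self.resolution  # quantize like real hardware

altimeter = SimulatedAltimeter()
for true_alt in (0.0, 1.0, 2.0, 3.0):   # the first flight's planned 3-meter climb
    print(true_alt, altimeter.read(true_alt))
```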

What kinds of simulated challenges have you put the Mars helicopter through, and how do you decide how far to push those challenges?

One example is that we can simulate going over rougher terrain. We can push that, and see how far we can go and still have the helicopter behave the way that we want it to. Or we can inject levels of noise that maybe the real sensors don't see, but you want to just see how far you can push things and make sure that it's still robust.

Where we put the limits on this and what we consider to be realistic is often a challenge. We consider this on a case by case basis—if you have a sensor that you're dealing with, you try to do testing with it to characterize it and understand its performance as much as possible, and you build a level of confidence in it that allows you to find the proper balance.

When it comes to things like terrain roughness, it's a little bit of a different thing, because we're actually picking where we're flying the helicopter. We have made that choice, and we know what the terrain looks like around us, so we don’t have to wonder about that anymore.

Image: NASA/JPL-Caltech/University of Arizona

Satellite image of the Ingenuity flight area.

The way that we’re trying to approach this operationally is that we should be done with the engineering at this point. We’re not depending on going back and resimulating things, other than a few checks here and there.

Are there any examples of things you learned as part of the simulation process that resulted in changes to the hardware or mission?

You know, it’s been a journey. One of the early things that we discovered as part of modeling the helicopter was that the rotor dynamics were quite different for a helicopter on Mars, in particular with respect to how the rotor responds to the up and down bending of the blades because they’re not perfectly rigid. That motion is a very important influence on the overall flight dynamics of the helicopter, and what we discovered as we started modeling was that this motion is damped much less on Mars. Under-damped oscillatory things like that, you kind of figure might pose a control issue, and that is the case here: if you just naively design it as you might a helicopter on Earth, without taking this into account, you could have a system where the response to control inputs becomes very sluggish. So that required changes to the vehicle design from some of the very early concepts, and it led us to make a rotor that’s extremely light and rigid.
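
The sluggishness Grip describes is the textbook behavior of an under-damped second-order mode: its settling time scales inversely with the damping ratio, so cutting the damping by a factor of ten stretches the transient by roughly a factor of ten. A tiny sketch with made-up numbers, just to show the qualitative effect:

```python
# Illustrative comparison of an under-damped mode at "Earth-like" vs. much lower
# damping. Numbers are invented to show the qualitative effect, not Ingenuity's.
import numpy as np

def settling_time(wn, zeta, tol=0.02):
    """Time for the step-response envelope exp(-zeta*wn*t) to decay to within tol."""
    return -np.log(tol) / (zeta * wn)

wn = 12.0  # rad/s, hypothetical flap-mode frequency
for label, zeta in (("Earth-like damping", 0.30), ("Mars-like damping", 0.03)):
    print("%s (zeta=%.2f): ~%.1f s to settle" % (label, zeta, settling_time(wn, zeta)))
```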

The design cycle for the Mars helicopter—it’s not like we could just build something and take it out to the back yard and try it and then come back and tweak it if it doesn’t work. It’s a much bigger effort to build something and develop a test program where you have to use a vacuum chamber to test it. So you really want to get as close as possible up front, on your first iteration, and not have to go back to the drawing board on the basic things.

So how close were you able to get on your first iteration of the helicopter design?

[This video shows] a very early demo which was done more or less just assuming that things were going to behave as they would on Earth, and that we’d be able to fly in a Martian atmosphere just spinning the rotor faster and having a very light helicopter. We were basically just trying to demonstrate that we could produce enough lift. You can see the helicopter hopping around, with someone trying to joystick it, but it turned out to be very hard to control. This was prior to doing any of the modeling that I talked about earlier. But once we started seriously focusing on the modeling and simulation, we then went on to build a prototype vehicle which had a full-size rotor that’s very close to the rotor that will be flying on Mars. One difference is that prototype had cyclic control only on the lower rotor, and later we added cyclic control on the upper rotor as well, and that decision was informed in large part by the work we did in simulation—we’d put in the kinds of disturbances that we thought we might see on Mars, and decided that we needed to have the extra control authority.

How much room do you think there is for improvement in simulation, and how could that help you in the future?

The tools that we have were definitely sufficient for doing the job that we needed to do in terms of building a helicopter that can fly on Mars. But simulation is a compute-intensive thing, and so I think there’s definitely room for higher fidelity simulation if you have the compute power to do so. For a future Mars helicopter, you could get some benefits by more closely coupling together high-fidelity aerodynamic models with larger multi-body models, and doing that in a fast way, where you can iterate quickly. There’s certainly more potential for optimizing things.

Photo: NASA/JPL-Caltech

Ingenuity preparing for flight.

Watching Ingenuity’s first flight take place will likely be much like watching the Perseverance landing—we’ll be able to follow along with the Ingenuity team while they send commands to the helicopter and receive data back, although the time delay will mean that any kind of direct control won’t be possible. If everything goes the way it’s supposed to, there will hopefully be some preliminary telemetry from Ingenuity saying so, but it sounds like we’ll likely have to wait until April 12 before we get pictures or video of the flight itself.

Because Mars doesn’t care what time it is on Earth, the flight will actually be taking place very early on April 12, with the JPL Mission Control livestream starting at 3:30 a.m. EDT (12:30 a.m. PDT). Details are here.

Posted in Human Robots

#439042 How Scientists Used Ultrasound to Read ...

Thanks to neural implants, mind reading is no longer science fiction.

As I’m writing this sentence, a tiny chip with arrays of electrodes could sit on my brain, listening in on the crackling of my neurons firing as my hands dance across the keyboard. Sophisticated algorithms could then decode these electrical signals in real time. My brain’s inner language to plan and move my fingers could then be used to guide a robotic hand to do the same. Mind-to-machine control, voilà!

Yet as the name implies, even the most advanced neural implant has a problem: it’s an implant. For electrodes to reliably read the brain’s electrical chatter, they need to pierce through its protective membrane and into brain tissue. Danger of infection aside, over time, damage accumulates around the electrodes, distorting their signals or even rendering them unusable.

Now, researchers from Caltech have paved a way to read the brain without any physical contact. Key to their device is a relatively new superstar in neuroscience: functional ultrasound, which uses sound waves to capture activity in the brain.

In monkeys, the technology could reliably predict their eye movement and hand gestures after just a single trial—without the usual lengthy training process needed to decode a movement. If adopted by humans, the new mind-reading tech represents a triple triumph: it requires minimal surgery and minimal learning, but yields maximal resolution for brain decoding. For people who are paralyzed, it could be a paradigm shift in how they control their prosthetics.

“We pushed the limits of ultrasound neuroimaging and were thrilled that it could predict movement,” said study author Dr. Sumner Norman.

To Dr. Krishna Shenoy at Stanford, who was not involved, the study will finally put ultrasound “on the map as a brain-machine interface technique. Adding to this toolkit is spectacular,” he said.

Breaking the Sound Barrier
Using sound to decode brain activity might seem preposterous, but ultrasound has had quite the run in medicine. You’ve probably heard of its most common use: taking photos of a fetus in pregnancy. The technique uses a transducer, which emits ultrasound pulses into the body and finds boundaries in tissue structure by analyzing the sound waves that bounce back.

Roughly a decade ago, neuroscientists realized they could adapt the tech for brain scanning. Rather than directly measuring the brain’s electrical chatter, it looks at a proxy—blood flow. When certain brain regions or circuits are active, the brain requires much more energy, which is provided by increased blood flow. In this way, functional ultrasound works similarly to functional MRI, but at a far higher resolution—roughly ten times, the authors said. Plus, people don’t have to lie very still in an expensive, claustrophobic magnet.

“A key question in this work was: If we have a technique like functional ultrasound that gives us high-resolution images of the brain’s blood flow dynamics in space and over time, is there enough information from that imaging to decode something useful about behavior?” said study author Dr. Mikhail Shapiro.

There are plenty of reasons for doubt. As the new kid on the block, functional ultrasound has some known drawbacks. A major one: it gives a far less direct signal than electrodes. Previous studies show that, with multiple measurements, it can provide a rough picture of brain activity. But is that enough detail to guide a robotic prosthesis?

One-Trial Wonder
The new study put functional ultrasound to the ultimate test: could it reliably detect movement intention in monkeys? Because their brains are the most similar to ours, rhesus macaque monkeys are often the critical step before a brain-machine interface technology is adapted for humans.

The team first inserted small ultrasound transducers into the skulls of two rhesus monkeys. While it sounds intense, the surgery doesn’t penetrate the brain or its protective membrane; it’s only on the skull. Compared to electrodes, this means the brain itself isn’t physically harmed.

The device is linked to a computer, which controls the direction of sound waves and captures signals from the brain. For this study, the team aimed the pulses at the posterior parietal cortex, a part of the “motor” aspect of the brain, which plans movement. If right now you’re thinking about scrolling down this page, that’s the brain region already activated, before your fingers actually perform the movement.

Then came the tests. The first looked at eye movements—something pretty necessary before planning actual body movements without tripping all over the place. Here, the monkeys learned to focus on a central dot on a computer screen. A second dot, either left or right, then flashed. The monkeys’ task was to flicker their eyes to the most recent dot. It’s something that seems easy for us, but requires sophisticated brain computation.

The second task was more straightforward. Rather than just moving their eyes to the second target dot, the monkeys learned to grab and manipulate a joystick to move a cursor to that target.

Using brain imaging to decode the mind and control movement. Image Credit: S. Norman, Caltech

As the monkeys learned, so did the device. Ultrasound data capturing brain activity was fed into a sophisticated machine learning algorithm to guess the monkeys’ intentions. Here’s the kicker: once trained, using data from just a single trial, the algorithm was able to correctly predict the monkeys’ actual eye movement—whether left or right—with roughly 78 percent accuracy. The accuracy for correctly maneuvering the joystick was even higher, at nearly 90 percent.
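
Conceptually, the decoding step treats each trial's functional ultrasound image—a map of blood-flow activity across the imaged plane—as a feature vector and trains a classifier to predict the intended movement. The sketch below uses a generic linear classifier on synthetic data as a stand-in for the study's actual algorithm, which is not described here.

```python
# Minimal sketch of movement-intention decoding: each trial's functional
# ultrasound image is flattened into a feature vector and a linear classifier
# predicts left vs. right. Synthetic data; not the study's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, height, width = 200, 32, 32
labels = rng.integers(0, 2, n_trials)                  # 0 = left, 1 = right
images = rng.normal(size=(n_trials, height, width))
images[labels == 1, :, width // 2:] += 0.5             # fake lateralized activity

X = images.reshape(n_trials, -1)
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("single-trial accuracy: %.0f%%" % (100 * decoder.score(X_test, y_test)))
```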

That’s crazy accurate, and very much needed for a mind-controlled prosthetic. If you’re using a mind-controlled cursor or limb, the last thing you’d want is to have to imagine the movement multiple times before you actually click the web button, grab the door handle, or move your robotic leg.

Even more impressive is the resolution. Sound waves seem omnipresent, but with focused ultrasound, it’s possible to measure brain activity at a resolution of 100 microns—roughly 10 neurons in the brain.

A Cyborg Future?
Before you start worrying about scientists blasting your brain with sound waves to hack your mind, don’t worry. The new tech still requires skull surgery, meaning that a small chunk of skull needs to be removed. However, the brain itself is spared. This means that compared to electrodes, ultrasound could offer less damage and potentially far longer-lasting mind reading than anything currently possible.

There are downsides. Focused ultrasound is far younger than any electrode-based neural implants, and can’t yet reliably decode 360-degree movement or fine finger movements. For now, the tech requires a wire to link the device to a computer, which is off-putting to many people and will prevent widespread adoption. Add to that the inherent downside of focused ultrasound, which lags behind electrical recordings by roughly two seconds.

All that aside, however, the tech is just tiptoeing into a future where minds and machines seamlessly connect. Ultrasound can penetrate the skull, though not yet at the resolution needed for imaging and decoding brain activity. The team is already working with human volunteers with traumatic brain injuries, who had to have a piece of their skulls removed, to see how well ultrasound works for reading their minds.

“What’s most exciting is that functional ultrasound is a young technique with huge potential. This is just our first step in bringing high performance, less invasive brain-machine interface to more people,” said Norman.

Image Credit: Free-Photos / Pixabay

Posted in Human Robots

#439032 To Learn To Deal With Uncertainty, This ...

AI is endowing robots, autonomous vehicles, and countless other forms of tech with new abilities and levels of self-sufficiency. Yet these models faithfully “make decisions” based on whatever data is fed into them, which could have dangerous consequences. For instance, if an autonomous car is driving down a highway and its sensor picks up a confusing signal (e.g., a paint smudge that is incorrectly interpreted as a lane marking), this could cause the car to swerve into another lane unnecessarily.

But in the ever-evolving world of AI, researchers are developing new ways to address challenges like this. One group of researchers has devised a new algorithm that allows the AI model to account for uncertain data, which they describe in a study published February 15 in IEEE Transactions on Neural Networks and Learning Systems.

“While we would like robots to work seamlessly in the real world, the real world is full of uncertainty,” says Michael Everett, a post-doctoral associate at MIT who helped develop the new approach. “It's important for a system to be aware of what it knows and what it is unsure about, which has been a major challenge for modern AI.”

His team focused on a type of AI called reinforcement learning (RL), whereby the model tries to learn the “value” of taking each action in a given scenario through trial-and-error. They developed a secondary algorithm, called Certified Adversarial Robustness for deep RL (CARRL), that can be built on top of an existing RL model.

“Our key innovation is that rather than blindly trusting the measurements, as is done today [by AI models], our algorithm CARRL thinks through all possible measurements that could have been made, and makes a decision that considers the worst-case outcome,” explains Everett.
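
In pseudocode terms, that worst-case reasoning looks something like the sketch below: rather than taking the action with the highest value for the measurement actually received, assume the true observation could be anywhere within some bound of it, estimate each action's worst-case value over that set, and pick the action whose worst case is best. The toy Q-function and the sampling-based bound here are stand-ins; CARRL itself computes certified bounds through the network, which is not shown.

```python
# Sketch of robust action selection in the spirit of CARRL: pick the action
# that maximizes the worst-case Q-value over observations within epsilon of
# the measurement. The worst case is approximated here by sampling; the real
# method propagates certified bounds through the network. Q-function is a toy.
import numpy as np

def q_values(obs):
    """Toy Q-function: obs is a 2-vector, two actions (e.g., paddle up / down)."""
    x = obs[0]
    return np.array([1.0 - 2.0 * abs(x - 0.1),   # higher value, but very sensitive to x
                     0.8 - 0.5 * abs(x)])        # lower value, but less sensitive

def robust_action(obs, epsilon=0.1, n_samples=500, rng=np.random.default_rng(0)):
    perturbed = obs + rng.uniform(-epsilon, epsilon, size=(n_samples, obs.size))
    worst_case_q = np.min([q_values(o) for o in perturbed], axis=0)  # per-action lower bound
    return int(np.argmax(worst_case_q))                              # best worst case

obs = np.array([0.05, 0.0])          # noisy measurement of, e.g., ball position
print("nominal action:", int(np.argmax(q_values(obs))))
print("robust action: ", robust_action(obs))
```

With these toy numbers, the nominal policy picks the sensitive action while the robust policy switches to the one that degrades less if the measurement is wrong, which is the trade-off described above.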

In their study, the researchers tested CARRL across several different tasks, including collision avoidance simulations and Atari pong. For younger readers who may not be familiar with it, Atari pong is a classic computer game in which an electronic paddle is used to direct a ping pong ball on the screen. In the test scenario, CARRL helped move the paddle slightly higher or lower to compensate for the possibility that the ball could approach at a slightly different point than what the input data indicated. All the while, CARRL would try to ensure that the ball would make contact with at least some part of the paddle.

Gif: MIT Aerospace Controls Laboratory

In a perfect world, the information that an AI model is fed would be accurate all the time, and the AI model would perform well (left). But in some cases, the AI may be given inaccurate data, causing it to miss its targets (middle). The new algorithm CARRL helps an AI account for uncertainty in its data inputs, yielding better performance when relying on poor data (right).

Across all test scenarios, the RL model was better at compensating for potentially inaccurate or “noisy” data with CARRL than without it.

But the results also show that, like with humans, too much self-doubt and uncertainty can be unhelpful. In the collision avoidance scenario, for example, indulging in too much uncertainty caused the main moving object in the simulation to avoid both the obstacle and its goal. “There is definitely a limit to how ‘skeptical’ the algorithm can be without becoming overly conservative,” Everett says.

This research was funded by Ford Motor Company, but Everett notes that it could be applicable to many other commercial applications requiring safety-aware AI, including aerospace, healthcare, and manufacturing.

“This work is a step toward my vision of creating ‘certifiable learning machines’—systems that can discover how to explore and perform in the real world on their own, while still having safety and robustness guarantees,” says Everett. “We'd like to bring CARRL into robotic hardware while continuing to explore the theoretical challenges at the interface of robotics and AI.”

Posted in Human Robots

#439004 Video Friday: A Walking, Wheeling ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30 – June 5, 2021 – Xi'an, China
Let us know if you have suggestions for next week, and enjoy today's videos.

This is a pretty terrible video, I think because it was harvested from WeChat, which is where Tencent decided to premiere its new quadruped robot.

Not bad, right? Its name is Max, it has a top speed of 25 kph thanks to its elbow wheels, and we know almost nothing else about it.

[ Tencent ]

Thanks Fan!

Can't bring yourself to mask-shame others? Build a robot to do it for you instead!

[ GitHub ]

Researchers at Georgia Tech have recently developed an entirely soft, long-stroke electromagnetic actuator using liquid metal, compliant magnetic composites, and silicone polymers. The robot was inspired by the motion of the Xenia coral, which pulses its polyps to circulate oxygen under water to promote photosynthesis.

In this work, power applied to soft coils generates an electromagnetic field, which causes the internal compliant magnet to move upward. This forces the squishy silicone linkages to convert linear to rotational motion with an arc length of up to 42 mm and a bandwidth of up to 30 Hz. This highly deformable, fast, and long-stroke actuator topology can be utilized for a variety of applications from biomimicry to fully-soft grasping to wearable applications.

[ Paper ] via [ Georgia Tech ]

Thanks Noah!

Jueying Mini Lite may look a little like a Boston Dynamics Spot, but according to DeepRobotics, its coloring is based on Bruce Lee's Kung Fu clothes.

[ DeepRobotics ]

Henrique writes, “I would like to share with you the supplementary video of our recent work accepted to ICRA 2021. The video features a quadruped and a full-size humanoid performing dynamic jumps, after a brief animated intro of what direct transcription is. Me and my colleagues have put a lot of hard work into this, and I am very proud of the results.”

Making big robots jump is definitely something to be proud of!

[ SLMC Edinburgh ]

Thanks Henrique!

The finals of the Powered Exoskeleton Race for Cybathlon Global 2020.

[ Cybathlon ]

Thanks Fan!

It's nice that every once in a while, the world can get excited about science and robots.

[ NASA ]

Playing the Imperial March over footage of an army of black quadrupeds may not be sending quite the right message.

[ Unitree ]

Kod*lab PhD students Abriana Stewart-Height, Diego Caporale, and Wei-Hsi Chen, with former Kod*lab student Garrett Wenger, were on set in the summer of 2019 to operate RHex for the filming of Lapsis, a first feature film by director and screenwriter Noah Hutton.

[ Kod*lab ]

In class 2.008, Design and Manufacturing II, mechanical engineering students at MIT learn the fundamental principles of manufacturing at scale by designing and producing their own yo-yos. Instructors stress the importance of sustainable practices in the global supply chain.

[ MIT ]

A short history of robotics, from ABB.

[ ABB ]

In this paper, we propose a whole-body planning framework that unifies dynamic locomotion and manipulation tasks by formulating a single multi-contact optimal control problem. This is demonstrated in a set of real hardware experiments done in free-motion, such as base or end-effector pose tracking, and while pushing/pulling a heavy resistive door. Robustness against model mismatches and external disturbances is also verified during these test cases.

[ Paper ]

This paper presents PANTHER, a real-time perception-aware (PA) trajectory planner in dynamic environments. PANTHER plans trajectories that avoid dynamic obstacles while also keeping them in the sensor field of view (FOV) and minimizing the blur to aid in object tracking.

Extensive hardware experiments in unknown dynamic environments with all the computation running onboard are presented, with velocities of up to 5.8 m/s, and with relative velocities (with respect to the obstacles) of up to 6.3 m/s. The only sensors used are an IMU, a forward-facing depth camera, and a downward-facing monocular camera.

[ MIT ]

With our SaaS solution, we enable robots to inspect industrial facilities. One of the robots our software supports is the Boston Dynamics Spot robot. In this video we demonstrate how autonomous industrial inspection with the Boston Dynamics Spot robot is performed with our teach and repeat solution.

[ Energy Robotics ]

In this week’s episode of Tech on Deck, learn about our first technology demonstration sent to Station: The Robotic Refueling Mission. This tech demo helped us develop the tools and techniques needed to robotically refuel a satellite in space, an important capability for space exploration.

[ NASA ]

At Covariant we are committed to research and development that will bring AI Robotics to the real world. As a part of this, we believe it's important to educate individuals on how these exciting innovations will make a positive, fundamental and global impact for years to come. In this presentation, our co-founder Pieter Abbeel breaks down his thoughts on the current state of play for AI robotics.

[ Covariant ]

How do you fly a helicopter on Mars? It takes Ingenuity and Perseverance. During this technology demo, Farah Alibay and Tim Canham will get into the details of how these craft will manage this incredible task.

[ NASA ]

Complex real-world environments continue to present significant challenges for fielding robotic teams, which often face expansive spatial scales, difficult and dynamic terrain, degraded environmental conditions, and severe communication constraints. Breakthrough technologies call for integrated solutions across autonomy, perception, networking, mobility, and human teaming thrusts. As such, the DARPA OFFSET program and the DARPA Subterranean Challenge seek novel approaches and new insights for discovering and demonstrating these innovative technologies, to help close critical gaps for robotic operations in complex urban and underground environments.

[ UPenn ]

Posted in Human Robots

#438755 Soft Legged Robot Uses Pneumatic ...

Soft robots are inherently safe, highly resilient, and potentially very cheap, making them promising for a wide array of applications. But development on them has been a bit slow relative to other areas of robotics, at least partially because soft robots can’t directly benefit from the massive increase in computing power and sensor and actuator availability that we’ve seen over the last few decades. Instead, roboticists have had to get creative to find ways of achieving the functionality of conventional robotics components using soft materials and compatible power sources.

In the current issue of Science Robotics, researchers from UC San Diego demonstrate a soft walking robot with four legs that moves with a turtle-like gait controlled by a pneumatic circuit system made from tubes and valves. This air-powered nervous system can actuate multiple degrees of freedom in sequence from a single source of pressurized air, offering a huge reduction in complexity and bringing a very basic form of decision making onto the robot itself.

Generally, when people talk about soft robots, the robots are only mostly soft. There are some components that are very difficult to make soft, including pressure sources and the necessary electronics to direct that pressure between different soft actuators in a way that can be used for propulsion. What’s really cool about this robot is that researchers have managed to take a pressure source (either a single tether or an onboard CO2 cartridge) and direct it to four different legs, each with three different air chambers, using an oscillating three-valve circuit made entirely of soft materials.

Photo: UCSD

The pneumatic circuit that powers and controls the soft quadruped.

The inspiration for this can be found in biology—natural organisms, including quadrupeds, use nervous system components called central pattern generators (CPGs) to prompt repetitive motions with limbs that are used for walking, flying, and swimming. This is obviously more complicated in some organisms than in others, and is typically mediated by sensory feedback, but the underlying structure of a CPG is basically just a repeating circuit that drives muscles in sequence to produce a stable, continuous gait. In this case, we’ve got pneumatic muscles being driven in opposing pairs, resulting in a diagonal couplet gait, where diagonally opposed limbs rotate forwards and backwards at the same time.
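
A central pattern generator can be modeled, at its simplest, as a set of coupled oscillators whose phase offsets define the gait. The sketch below shows a generic diagonal-couplet pattern—diagonal leg pairs in phase, the two pairs half a cycle apart—using ordinary phase oscillators; it illustrates the concept only, not the paper's pneumatic implementation.

```python
# Generic CPG sketch: four phase oscillators coupled so that diagonal leg
# pairs stay in phase and the two pairs stay half a cycle apart (a diagonal
# couplet gait). Illustrative only; the paper realizes this pneumatically.
import math

legs = ["front-left", "front-right", "rear-left", "rear-right"]
target_offsets = [0.0, math.pi, math.pi, 0.0]      # FL/RR in phase, FR/RL opposite
phases = [0.1, 3.0, 3.2, 0.0]                      # arbitrary initial phases
omega, gain, dt = 2.0 * math.pi, 2.0, 0.01         # 1 Hz gait, coupling gain, time step

for _ in range(2000):                              # integrate for 20 seconds
    new = []
    for i in range(4):
        coupling = sum(math.sin((phases[j] - target_offsets[j]) -
                                (phases[i] - target_offsets[i])) for j in range(4))
        new.append(phases[i] + dt * (omega + gain * coupling))
    phases = new

swing = [math.sin(p) for p in phases]              # positive = leg swinging forward
print({leg: round(s, 2) for leg, s in zip(legs, swing)})
```

After the phases lock, the two diagonal pairs report equal and opposite swing values, which is the repeating pattern that drives the gait.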

Diagram: Science Robotics

(J) Pneumatic logic circuit for rhythmic leg motion. A constant positive pressure source (P+) applied to three inverter components causes a high-pressure state to propagate around the circuit, with a delay at each inverter. While the input to one inverter is high, the attached actuator (i.e., A1, A2, or A3) is inflated. This sequence of high-pressure states causes each pair of legs of the robot to rotate in a direction determined by the pneumatic connections. (K) By reversing the sequence of activation of the pneumatic oscillator circuit, the attached actuators inflate in a new sequence (A1, A3, and A2), causing (L) the legs of the robot to rotate in reverse. (M) Schematic bottom view of the robot with the directions of leg motions indicated for forward walking.

Diagram: Science Robotics

Each of the valves acts as an inverter by switching the normally closed half (top) to open and the normally open half (bottom) to closed.

The circuit itself is made up of three bistable pneumatic valves connected by tubing that acts as a delay by resisting the flow of gas through it; the delay can be adjusted by altering the tube’s length and inner diameter. Within the circuit, the movement of the pressurized gas acts as both a source of energy and as a signal, since wherever the pressure is in the circuit is where the legs are moving. The simplest circuit uses only three valves, and can keep the robot walking in a single direction, but more valves can add more complex leg control options. For example, the researchers were able to use seven valves to tune the phase offset of the gait, and even just one additional valve (albeit of a slightly more complex design) could enable reversal of the system, causing the robot to walk backwards in response to input from a soft sensor. And with another complex valve, a manual (tethered) controller could be used for omnidirectional movement.
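
In digital-logic terms, the three-valve core is a ring oscillator: an odd number of inverters connected in a loop, with each valve flipping only after the delay set by its tube, so a high-pressure state chases itself around the ring and drives the leg chambers in a fixed sequence. The sketch below abstracts that behavior into discrete logic with delays; it is a logic-level illustration, not a model of the fluid dynamics or of the actual hardware.

```python
# Abstract sketch of the soft ring oscillator: three inverters in a loop, each
# output following the inverted previous output after a delay set by its tube.
# The rotating high-pressure state drives the three chambers (A1, A2, A3) in
# sequence. Logic-level abstraction only; not a fluid model.
DELAY_STEPS = 5                       # stands in for tube length / inner diameter
state = [1, 0, 0]                     # current valve outputs (1 = high pressure)
timers = [0, 0, 0]

for step in range(60):
    for i in range(3):
        desired = 1 - state[(i - 1) % 3]      # inverter: output = NOT(previous valve)
        if desired != state[i]:
            timers[i] += 1
            if timers[i] >= DELAY_STEPS:      # pressure has finished propagating
                state[i] = desired
                timers[i] = 0
        else:
            timers[i] = 0
    inflated = [f"A{i+1}" for i, s in enumerate(state) if s == 1]
    if step % 5 == 0:
        print(f"t={step:2d}  chambers pressurized: {inflated}")
```

Reversing the order in which the chambers are driven—what the extra valve accomplishes physically—amounts to playing this sequence backwards, which is why a single additional component is enough to make the robot walk in reverse.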

This work has some similarities to the rover that JPL is developing to explore Venus—that rover isn’t a soft robot, of course, but it operates under similar constraints in that it can’t rely on conventional electronic systems for autonomous navigation or control. It turns out that there are plenty of clever ways to use mechanical (or in this case, pneumatic) intelligence to make robots with relatively complex autonomous behaviors, meaning that in the future, soft (or soft-ish) robots could find valuable roles in situations where using a non-compliant system is not a good option.

For more on why we should be so excited about soft robots and just how soft a soft robot needs to be, we spoke with Michael Tolley, who runs the Bioinspired Robotics and Design Lab at UCSD, and Dylan Drotman, the paper’s first author.

IEEE Spectrum: What can soft robots do for us that more rigid robotic designs can’t?

Michael Tolley: At the very highest level, one of the fundamental assumptions of robotics is that you have rigid bodies connected at joints, and all your motion happens at these joints. That's a really nice approach because it makes the math easy, frankly, and it simplifies control. But when you look around us in nature, even though animals do have bones and joints, the way we interact with the world is much more complicated than that simple story. I’m interested in where we can take advantage of material properties in robotics. If you look at robots that have to operate in very unknown environments, I think you can build in some of the intelligence for how to deal with those environments into the body of the robot itself. And that’s the category this work really falls under—it's about navigating the world.

Dylan Drotman: Walking through confined spaces is a good example. With the rigid legged robot, you would have to completely change the way that the legs move to walk through a confined space, while if you have flexible legs, like the robot in our paper, you can use relatively simple control strategies to squeeze through an area you wouldn’t be able to get through with a rigid system.

How smart can a soft robot get?

Drotman: Right now we have a sensor on the front that's connected through a fluidic transmission to a bistable valve that causes the robot to reverse. We could add other sensors around the robot to allow it to change direction whenever it runs into an obstacle to effectively make an electronics-free version of a Roomba.

Tolley: Stepping back a little bit from that, one could make an argument that we’re using basic memory elements to generate very basic signals. There’s nothing in principle that would stop someone from making a pneumatic computer—it’s just very complicated to make something that complex. I think you could build on this and do more intelligent decision making, but using this specific design and the components we’re using, it’s likely to be things that are more direct responses to the environment.

How well would robots like these scale down?

Drotman: At the moment we’re manufacturing these components by hand, so the idea would be to make something more like a printed circuit board instead, and looking at how the channel sizes and the valve design would affect the actuation properties. We’ll also be coming up with new circuits, and different designs for the circuits themselves.

Tolley: Down to centimeter or millimeter scale, I don’t think you’d have fundamental fluid flow problems. I think you’re going to be limited more by system design constraints. You’ll have to be able to locomote while carrying around your pressure source, and possibly some other components that are also still rigid. When you start to talk about really small scales, though, it's not as clear to me that you really need an intrinsically soft robot. If you think about insects, their structural geometry can make them behave like they’re soft, but they’re not intrinsically soft.

Should we be thinking about soft robots and compliant robots in the same way, or are they fundamentally different?

Tolley: There’s certainly a connection between the two. You could have a compliant robot that behaves in a very similar way to an intrinsically soft robot, or a robot made of intrinsically soft materials. At that point, it comes down to design and manufacturing and practical limitations on what you can make. I think when you get down to small scales, the two sort of get connected.

There was some interesting work several years ago on using explosions to power soft robots. Is that still a thing?

Tolley: One of the opportunities with soft robots is that with material compliance, you have the potential to store energy. I think there’s exciting potential there for rapid motion with a soft body. Combustion is one way of doing that with power coming from a chemical source all at once, but you could also use a relatively weak muscle that over time stores up energy in a soft body and then releases it.

Is it realistic to expect complete softness from soft robots, or will they likely always have rigid components because they have to store or generate and move pressurized gas somehow?

Tolley: If you look in nature, you do have soft pumps like the heart, but although it’s soft, it’s still relatively stiff. Like, if you grab a heart, it’s not totally squishy. I haven’t done it, but I’d imagine. If you have a container that you’re pressurizing, it has to be stiff enough to not just blow up like a balloon. Certainly pneumatics or hydraulics are not the only way to go for soft actuators; there has been some really nice work on smart muscles and smart materials like hydraulically amplified self-healing electrostatic (HASEL) actuators. They seem promising, but all of these actuators have challenges. We’ve chosen to stick with pressurized pneumatics in the near term; longer term, I think you’ll start to see more of these smart material actuators become more practical.

Personally, I don’t have any problem with soft robots having some rigid components. Most animals on land have some rigid components, but they can still take advantage of being soft, so it’s probably going to be a combination. But I do also like the vision of making an entirely soft, squishy thing.

Posted in Human Robots