Category Archives: Human Robots
#439465 Dextrous Robotics Wants To Move Boxes ...
Hype aside, there aren’t necessarily all that many areas where robots have the potential to step into an existing workflow and immediately provide a substantial amount of value. But one of the areas that we have seen several robotics companies jump into recently is box manipulation—specifically, using robots to unload boxes from the back of a truck, ideally significantly faster than a human. This is a good task for robots because it plays to their strengths: the environment is semi-structured and usually predictable; speed, power, and precision are all highly valued; and it’s not a job that humans are particularly interested in or designed for.
One of the more novel approaches to this task comes from Dextrous Robotics, a Memphis, Tenn.-based startup led by Evan Drumwright. Drumwright was a professor at George Washington University before spending a few years at the Toyota Research Institute and then co-founding Dextrous in 2019 with a former student of his, Sam Zapolsky. The approach that they’ve come up with is to do box manipulation without any sort of suction, or really any sort of gripper at all. Instead, they’re using what can best be described as a pair of moving arms, each gripping a robotic chopstick.
We can pick up basically anything using chopsticks. If you're good with chopsticks, you can pick up individual grains of rice, and you can pick up things that are relatively large compared to the scale of the chopsticks. Your imagination is about the limit, so wouldn't it be cool if you had a robot that could manipulate things with chopsticks? —Evan Drumwright
It definitely is cool, but are there practical reasons why using chopsticks for box manipulation is a good idea? Of course there are! The nice thing about chopsticks is that they really can grip almost anything (even if you scale them up), making them especially valuable in constrained spaces where you’ve got large disparities in shapes and sizes and weights. They’re good for manipulation, too, able to nudge and reposition things with precision. And while Dextrous is initially focused on a trailer unloading task, having this extra manipulation capability will allow them to consider more difficult manipulation tasks in the future, like trailer loading, a task that necessarily happens just as often as unloading does but which is significantly more complicated to robot-ize.
Even though there are some clear advantages to Dextrous’ chopstick technique, there are disadvantages as well, and the biggest one is likely that it’s just a lot harder to use a manipulation technique like this. “The downside of the chopsticks approach is, as any human will tell you, you need some sophisticated control software to be able to operate,” Drumwright tells us. “But that’s part of what we bring to the game: not just a clever hardware design, but the software to operate it, too.”
Meanwhile, what we’ve seen so far from other companies in this space is pretty consistent use of suction systems for box handling. If you have a flat, non-permeable surface (as with most boxes), suction can work quickly and reliably and with a minimum of fancy planning. However, suction has its limits as a form of manipulation, because it’s inherently so sticky, meaning that it can be difficult and/or time-consuming to do anything with precision. Other issues with suction include its sensitivity to temperature and moisture, its propensity to ingest all the dirt it possibly can, and the fact that you need to design the suction array based on the biggest and heaviest things that you anticipate having to deal with. That last thing is a particular problem because if you also want to manipulate smaller objects, you’re left trying to do so with a suction array that’s way bigger than you’d like it to be. This is not to say that suction is inferior in all cases, and Drumwright readily admits that suction will probably prove to be a good option for some specific tasks. But chopstick manipulation, if they can get it to work, will be a lot more versatile.
Dextrous Robotics co-founders Evan Drumwright and Sam Zapolsky.
Photo: Dextrous Robotics
I think there's a reason that nature has given us hands. Nature knows how to design suction devices—bats have it, octopi have it, frogs have it—and yet we have hands. Why? Hands are a superior instrument. And so, that's why we've gone down this road. I personally believe, based on billions of years of evolution, that there's a reason that manipulation is superior and that that technology is going to win out. —Evan Drumwright
Part of Dextrous’ secret sauce is an emphasis on simulation. Hardware is hard, so ideally, you want to make one thing that just works the first time, rather than having to iterate over and over. Getting it perfect on the first try is probably unrealistic, but the better you can simulate things in advance, the closer you can get. “What we’ve been able to do is set up our entire planning, perception, and control system so that it looks exactly like it does when that code runs on the real robot,” says Drumwright. “When we run something on the simulated robot, it agrees with reality about 95 percent of the time, which is frankly unprecedented.” Using very high-fidelity hardware modeling, a real-time simulator, and software that can transfer directly between sim and real, Dextrous is able to confidently model how their system performs even on notoriously tricky things to simulate, like contact and stiction. The idea is that the end result will be a system that can be developed faster while performing more complex tasks better than other solutions.
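To make the sim-to-real idea concrete, here is a minimal Python sketch of the general pattern. It is purely illustrative and not Dextrous’ actual stack: the controller is written only against an abstract robot interface, and the backend behind that interface is the only thing that changes between simulation and hardware.

```python
from abc import ABC, abstractmethod


class RobotBackend(ABC):
    """The planning/control code only ever talks to this interface,
    so the exact same code can drive a simulator or a physical robot."""

    @abstractmethod
    def read_joint_positions(self) -> list:
        ...

    @abstractmethod
    def send_joint_torques(self, torques) -> None:
        ...


class SimulatedBackend(RobotBackend):
    """Stand-in simulator with trivial dynamics; a real simulator would
    model contact, stiction, motor limits, and so on."""

    def __init__(self, n_joints=5):
        self.q = [0.0] * n_joints

    def read_joint_positions(self):
        return list(self.q)

    def send_joint_torques(self, torques):
        # Crude, purely illustrative response to the commanded torques.
        self.q = [q + 0.001 * t for q, t in zip(self.q, torques)]


def control_step(backend, q_target, kp=50.0):
    """One proportional-control step written only against the interface,
    so it transfers unchanged between simulation and hardware."""
    q = backend.read_joint_positions()
    backend.send_joint_torques([kp * (qt - qi) for qt, qi in zip(q_target, q)])


sim = SimulatedBackend()
for _ in range(200):
    control_step(sim, q_target=[0.5, -0.2, 0.1, 0.0, 0.3])
print(sim.read_joint_positions())  # converges toward the target
```

The hard part, of course, is making the simulated backend faithful enough that the roughly 95 percent agreement Drumwright describes is even possible; the shared interface just guarantees that the code being compared in sim and on hardware is identical.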
We were also wondering why this system uses smooth, round chopsticks rather than something a little bit grippier, like chopsticks with a square cross section, and maybe some higher-friction material on the inside surface. Drumwright explains that the advantage of the current design is that each chopstick is symmetric about its long axis, meaning that you only need five degrees of freedom to fully control it. “What that means practically is that things can get a whole lot simpler—the control algorithms get simpler, the inverse kinematics algorithms get simpler, and importantly the number of motors that we need to drive in the robot goes down.”
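For a concrete sense of why the symmetry saves a degree of freedom, consider this small Python illustration (our own sketch, not Dextrous’ software): if the chopstick’s long axis is taken as the tool’s local z axis, any spin about that axis leaves the commanded axis direction unchanged, so a pose target reduces to 3 position values plus a 2-DOF axis direction, for 5 DOF in total.

```python
import numpy as np


def rot_x(theta):
    """Rotation about the world x axis (some arbitrary commanded orientation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])


def rot_z(theta):
    """Rotation about the local z axis, i.e. pure 'spin' of the chopstick."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])


def chopstick_axis(R):
    """For an axisymmetric tool, only the direction of its long axis matters.
    Taking the tool-axis column of the rotation matrix discards spin about
    that axis, leaving 3 position + 2 axis-direction degrees of freedom."""
    return R[:, 2]  # assume the chopstick's long axis is its local z axis


R = rot_x(0.7)              # some commanded tool orientation
R_spun = R @ rot_z(1.2)     # same orientation plus extra spin about the stick

# The two commands describe physically identical chopstick placements:
print(np.allclose(chopstick_axis(R), chopstick_axis(R_spun)))  # True
```

With the spin discarded, five actuated degrees of freedom per chopstick are enough to place it anywhere in its workspace, which is exactly the simplification Drumwright describes.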
Simulated version of Dextrous Robotics’ hardware.
Screenshot: Dextrous Robotics
Dextrous took seed funding 18 months ago, and since then they’ve been working on both the software and hardware for their system as well as finding the time to score an NSF SBIR phase 1 grant. The above screenshot shows the simulation of the hardware they’re working towards (chopstick manipulators on two towers that can move laterally), while the Franka Panda arms are what they’re using to validate their software in the meantime. New hardware should be done imminently, and over the next year, Dextrous is looking forward to conducting paid pilots with real customers.
#439461 Video Friday: Fluidic Fingers
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
Humanoids 2020 – July 19-21, 2021 – [Online Event]
RO-MAN 2021 – August 8-12, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27-October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA
Let us know if you have suggestions for next week, and enjoy today's videos.
This 3D-printed soft robotic hand uses fluidic circuits (which respond differently to different input pressures) so that a single input source can actuate three fingers independently.
[ UMD ]
Thanks, Fan!
Nano quadcopters are ideal for gas source localization (GSL) as they are safe, agile and inexpensive. However, their extremely restricted sensors and computational resources make GSL a daunting challenge. In this work, we propose a novel bug algorithm named ‘Sniffy Bug’, which allows a fully autonomous swarm of gas-seeking nano quadcopters to localize a gas source in unknown, cluttered, and GPS-denied environments.
[ MAVLab ]
Large-scale aerial deployment of miniature sensors in tough environmental conditions requires a deployment device that is lightweight, robust and steerable. We present a novel samara-inspired autorotating craft that is capable of autorotating and diving.
[ Paper ]
Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have recently created a new algorithm to help a robot find efficient motion plans to ensure physical safety of its human counterpart. In this case, the bot helped put a jacket on a human, which could potentially prove to be a powerful tool in expanding assistance for those with disabilities or limited mobility.
[ MIT CSAIL ]
Listening to the language here about SoftBank's Whiz cleaning robot, I’ve got some concerns.
My worry is that the value the robot is adding here is mostly in the perception of cleaning, rather than actually, you know, cleaning. Which is still value, and that’s fine, but whether it’s commercially viable long term is less certain.
[ SoftBank ]
This paper presents a novel method for multi-legged robots to probe and test the terrain for collapses using their legs while walking. The proposed method improves on existing terrain probing approaches, and integrates the probing action into a walking cycle. A follow-the-leader strategy with a suitable gait and stance is presented and implemented on a hexapod robot.
[ CSIRO ]
Robotics researchers from NVIDIA and University of Southern California presented their work at the 2021 Robotics: Science and Systems (RSS) conference called DiSECt, the first differentiable simulator for robotic cutting. The simulator accurately predicts the forces acting on a knife as it presses and slices through natural soft materials, such as fruits and vegetables.
[ NVIDIA ]
These videos from Moley Robotics have too many cuts in them to properly judge how skilled the robot is, but as far as I know, it only cooks the “perfect” steak in the sense that it will cook a steak of a given weight for a given time.
[ Moley ]
Most hands are designed for general purpose, as it’s very tedious to make task-specific hands. Existing methods battle trade-offs between the complexity of designs critical for contact-rich tasks and the practical constraints of manufacturing and contact handling.
This led researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) to create a new method to computationally optimize the shape and control of a robotic manipulator for a specific task. Their system uses software to manipulate the design, simulate the robot doing a task, and then provide an optimization score to assess the design and control.
[ MIT CSAIL ]
Drone Adventures maps wildlife in Namibia from above.
[ Drone Adventures ]
Some impressive electronics disassembly tasks using a planner that just unscrews things, shakes them, and sees whether it then needs to unscrew more things.
[ Imagine ]
The reconfigurable robot ReQuBiS can transition between biped, quadruped, and snake configurations without the need to rearrange its modules, unlike most state-of-the-art models. Its design allows the robot to split into two agents to perform tasks in parallel for biped and snake mobility.
[ Paper ] via [ IvLabs ]
Thanks, Fan!
World Vision Kenya aims to improve the climate resilience of nine villages in Tana River County, sustainably manage the ecosystem and climate change, and restore the communities’ livelihoods by reseeding the hotspot areas with indigenous trees, covering at least 250 acres for every village. This can be challenging to achieve, considering the vast areas needing coverage. That’s why World Vision Kenya partnered with Kenya Flying Labs to help make this process faster, easier, and more efficient (and more fun!).
[ WeRobotics ]
Pieter Abbeel’s Robot Brains Podcast has started posting video versions of the episodes, if you’re into that sort of thing. There are interesting excerpts as well, a few of which we can share here.
[ Robot Brains ]
RSS took place this week with paper presentations, talks, Q&As, and more, but here are two of the keynotes that are definitely worth watching.
[ RSS 2021 ]
#439455 AI and Robots Are a Minefield of ...
This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
Most people associate artificial intelligence with robots as an inseparable pair. In fact, the term “artificial intelligence” is rarely used in research labs. Terminology specific to certain kinds of AI and other smart technologies is more relevant. Whenever I’m asked the question “Is this robot operated by AI?”, I hesitate to answer—wondering whether it would be appropriate to call the algorithms we develop “artificial intelligence.”
First used by scientists such as John McCarthy and Marvin Minsky in the 1950s, and frequently appearing in sci-fi novels or films for decades, AI is now being used in smartphone virtual assistants and autonomous vehicle algorithms. Both historically and today, AI can mean many different things—which can cause confusion.
However, people often express the preconception that AI is an artificially realized version of human intelligence. And that preconception might come from our cognitive bias as human beings.
We judge robots’ or AI’s tasks in comparison to humans
If you happened to follow the news in 2016, how did you feel when AlphaGo, an AI developed by DeepMind, defeated 9-dan Go player Lee Sedol? You may have been surprised or terrified, thinking that AI had surpassed the abilities of geniuses. Still, winning a game with an exponential number of possible moves like Go only means that AI has exceeded a very limited part of human intelligence. The same goes for IBM’s AI, Watson, which competed on ‘Jeopardy!’, the television quiz show.
I believe many were impressed to see the Mini Cheetah, developed in my MIT Biomimetic Robotics Laboratory, perform a backflip. While jumping backwards and landing on the ground is very dynamic, eye-catching and, of course, difficult for humans, the algorithm for that particular motion is incredibly simple compared with one that enables stable walking, which requires much more complex feedback loops. Achieving robot tasks that are seemingly easy for us is often extremely difficult and complicated. This gap occurs because we tend to think of a task’s difficulty based on human standards.
Achieving robot tasks that are seemingly easy for us is often extremely difficult and complicated.
We tend to generalize AI functionality after watching a single robot demonstration. When we see someone on the street doing backflips, we tend to assume this person would be good at walking and running, and also be flexible and athletic enough to be good at other sports. Very likely, such judgement about this person would not be wrong.
However, can we also apply this judgement to robots? It’s easy for us to generalize and determine AI performance based on an observation of a specific robot motion or function, just as we do with humans. By watching a video of a robot hand solving a Rubik’s Cube at OpenAI, an AI research lab, we think that the AI can perform all other simpler tasks because it can perform such a complex one. We overlook the fact that this AI’s neural network was only trained for a limited type of task: solving the Rubik’s Cube in that configuration. If the situation changes—for example, holding the cube upside down while manipulating it—the algorithm does not work as well as might be expected.
Unlike AI, humans can combine individual skills and apply them to multiple complicated tasks. Once we learn how to solve a Rubik’s Cube, we can quickly work on the cube even when we’re told to hold it upside down, though it may feel strange at first. Human intelligence can naturally combine the objectives of not dropping the cube and solving the cube. Most robot algorithms will require new data or reprogramming to do so. A person who can spread jam on bread with a spoon can do the same using a fork. It is obvious. We understand the concept of “spreading” jam, and can quickly get used to using a completely different tool. Also, while autonomous vehicles require actual data for each situation, human drivers can make rational decisions based on pre-learned concepts to respond to countless situations. These examples show one characteristic of human intelligence in stark contrast to robot algorithms, which cannot perform tasks with insufficient data.
Illustration: Hyung Taek Yoon
Mammals have continuously been evolving for more than 65 million years. The entire time humans spent on learning math, using languages, and playing games would sum up to a mere 10,000 years. In other words, humanity spent a tremendous amount of time developing abilities directly related to survival, such as walking, running, and using our hands. Therefore, it may not be surprising that computers can compute much faster than humans, as they were developed for this purpose in the first place. Likewise, it is natural that computers cannot easily obtain the ability to freely use hands and feet for various purposes as humans do. These skills have been attained through evolution for over 10 million years.
This is why it is unreasonable to compare robot or AI performance in demonstrations with the abilities of an animal or human. Watching videos of MIT’s Cheetah robot running across fields and leaping over obstacles, it would be rash to believe that robot technologies for walking and running like animals are complete. Numerous robot demonstrations still rely on algorithms set for specialized tasks in bounded situations. There is a tendency, in fact, for researchers to select demonstrations that seem difficult, as it can produce a strong impression. However, this level of difficulty is from the human perspective, which may be irrelevant to the actual algorithm performance.
Humans are easily influenced by instantaneous and reflective perception before any logical thoughts. And this cognitive bias is strengthened when the subject is very complicated and difficult to analyze logically—for example, a robot that uses machine learning.
Robotic demonstrations still rely on algorithms set for specialized tasks in bounded situations.
So where does our human cognitive bias come from? I believe it comes from our psychological tendency to subconsciously anthropomorphize the subjects we see. Humans have evolved as social animals, probably developing the ability to understand and empathize with each other in the process. Our tendency to anthropomorphize subjects would have come from the same evolutionary process. People tend to use the expression “teaching robots” when they refer to programming algorithms; even though nothing is being taught in the human sense, we are used to such anthropomorphized expressions. As the 18th century philosopher David Hume said, “There is a universal tendency among mankind to conceive all beings like themselves. We find human faces in the moon, armies in the clouds.”
Of course, we not only anthropomorphize subjects’ appearance but also their state of mind. For example, when Boston Dynamics released a video of its engineers kicking a robot, many viewers reacted by saying “this is cruel,” and that they “pity the robot.” A comment saying, “one day, robots will take revenge on that engineer” received likes. In reality, the engineer was simply testing the robot’s balancing algorithm. However, before any thought process can comprehend the situation, the aggressive motion of kicking combined with the struggling of the animal-like robot is instantaneously transmitted to our brains, leaving a strong impression. In this way, instantaneous anthropomorphism has a deep effect on our cognitive process.
Humans process information qualitatively, and computers, quantitatively
Looking around, our daily lives are filled with algorithms, as can be seen by the machines and services that run on them. All algorithms operate on numbers. We use terms such as “objective function,” which is a numerical function that represents a certain objective. Many algorithms have the sole purpose of reaching the maximum or minimum value of this function, and an algorithm’s characteristics differ based on how it achieves this.
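As a toy illustration of what “operating on numbers” means, the Python sketch below defines a small objective function and drives it toward its minimum with numerical gradient descent. The function, its target, and its weights are invented purely for illustration and stand in for the far richer objectives real systems use.

```python
import numpy as np


def objective(x):
    """Toy objective: squared distance to a target plus a small effort penalty.
    The target and weight are invented for illustration only."""
    target = np.array([1.0, 2.0])
    effort_weight = 0.1
    return np.sum((x - target) ** 2) + effort_weight * np.sum(x ** 2)


def minimize(f, x0, lr=0.1, steps=500, eps=1e-6):
    """Drive f toward its minimum with finite-difference gradient descent."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                         for e in np.eye(len(x))])
        x -= lr * grad
    return x, f(x)


x_best, value = minimize(objective, x0=[0.0, 0.0])
print(x_best, value)  # x_best approaches target / (1 + effort_weight)
```

The point is simply that the algorithm’s entire notion of “doing well” is this one number; anything it cannot see in that number, it cannot care about.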
The goals of tasks such as winning a game of Go or chess are relatively easy to quantify. The easier quantification is, the better the algorithms work. By contrast, humans often make decisions without quantitative thinking.
As an example, consider cleaning a room. The way we clean differs subtly from day to day, depending on the situation, depending on whose room it is, and depending on how one feels. Were we trying to maximize a certain function in this process? We did no such thing. The act of cleaning has been done with an abstract objective of “clean enough.” Besides, the standard for how much is “enough” changes easily. This standard may be different among people, causing conflicts particularly among family members or roommates.
There are many other examples. When you wash your face every day, which quantitative indicators do you intend to maximize with your hand movements? How hard do you rub? When choosing what to wear? When choosing what to have for dinner? When choosing which dish to wash first? The list goes on. We are used to making decisions that are good enough by putting together the information we already have. However, we often do not check whether every single decision is optimized. Most of the time, it is impossible to know, because we would have to satisfy numerous contradicting indicators with limited data. When selecting groceries with a friend at the store, we cannot each quantify our standards for groceries and make a decision based on those numerical values. Usually, when one picks something out, the other will either say “OK!” or suggest another option. This is very different from saying this vegetable “is the optimal choice!” It is more like saying “this is good enough.”
This operational difference between people and algorithms may cause trouble when designing work or services we expect robots to perform. This is because while algorithms perform tasks based on quantitative values, humans’ satisfaction, the outcome of the task, is difficult to quantify completely. It is not easy to quantify the goal of a task that must adapt to individual preferences or changing circumstances, like the aforementioned room cleaning or dishwashing tasks. That is, to coexist with humans, robots may have to evolve not to optimize particular functions, but to achieve “good enough.” Of course, the latter is much more difficult to achieve robustly in real-life situations, where you need to manage so many conflicting objectives and qualitative constraints.
Actually, we do not know what we are doing
Try to recall the most recent meal you had before reading this. Can you remember what you had? Then, can you also remember the process of chewing and swallowing the food? Do you know what exactly your tongue was doing at that very moment? Our tongue does so many things for us. It helps us put food in our mouths, distribute the food between our teeth, swallow the finely chewed pieces, or even send large pieces back toward our teeth, if needed. We can do all of this naturally, even while talking to a friend, with the tongue also in charge of pronunciation. How much do our conscious decisions contribute to the movements of our tongue as it accomplishes so many complex tasks simultaneously? It may seem like we are moving our tongue however we want, but in fact, there are more moments when the tongue is moving automatically, taking only high-level commands from our consciousness. This is why we cannot remember the detailed movements of our tongue during a meal. We know little about its movement in the first place.
We may assume that our hands are the most consciously controllable organ, but many hand movements also happen automatically and unconsciously, or subconsciously at most. For those who disagree, try putting something like keys in your pocket and taking them back out. In that short moment, countless micromanipulations are instantly and seamlessly coordinated to complete the task. We often cannot perceive each action separately. We do not even know what units we should divide them into, so we collectively express them with abstract words such as organize, wash, apply, rub, wipe, etc. These verbs are qualitatively defined. They often refer to the aggregate of fine movements and manipulations, whose composition changes depending on the situation. Of course, it is easy even for children to understand and think of this concept, but from the perspective of algorithm development, these words are endlessly vague and abstract.
Illustration: Hyung Taek Yoon
Let’s try to teach someone how to make a sandwich by spreading peanut butter on bread. We can show how this is done and explain it with a few simple words. Now let’s assume a slightly different situation. Say there is an alien who uses the same language as us, but knows nothing about human civilization or culture. (I know this assumption is already contradictory, but please bear with me.) Can we explain over the phone how to make a peanut butter sandwich? We will probably get stuck trying to explain how to scoop peanut butter out of the jar. Even grasping the slice of bread is not so simple. We have to grasp the bread firmly enough to spread the peanut butter on it, but not so firmly that we ruin the shape of the soft bread. At the same time, we should not drop the bread either. It is easy for us to think of how to grasp the bread, but it is not easy to express this through speech or text, let alone as a function. Even when it is a human who is learning a task, can we learn a carpenter’s work over the phone? Can we precisely correct tennis or golf postures over the phone? It is difficult to discern to what extent the details we see are done consciously or unconsciously.
My point is that not everything we do with our hands and feet can be directly expressed in our language. Things that happen between successive actions often occur automatically and unconsciously, and thus we explain our actions in a much simpler way than how they actually take place. This is why our actions seem very simple, and why we forget how incredible they really are. The limitations of expression often lead to underestimation of the actual complexity. We should recognize that the difficulty of describing actions in language can hinder research progress in fields where the vocabulary is not well developed.
Until recently, AI has been practically applied in information services related to data processing. Some prominent examples today include voice recognition and facial recognition. Now, we are entering a new era of AI that can effectively perform physical services in our midst. That is, the time is coming in which automation of complex physical tasks becomes imperative.
Particularly, our increasingly aging society poses a huge challenge. Shortage of labor is no longer a vague social problem. It is urgent that we discuss how to develop technologies that augment humans’ capability, allowing us to focus on more valuable work and pursue lives uniquely human. This is why not only engineers but also members of society from various fields should improve their understanding of AI and unconscious cognitive biases. It is easy to misunderstand artificial intelligence, as noted above, because it is substantively unlike human intelligence.
Ways of thinking that are perfectly natural among humans can become cognitive biases when applied to AI and robots. Without a clear understanding of these biases, we cannot set the appropriate directions for technology research, application, and policy. For the scientific community to develop productively, we need to pay keen attention to our own cognition and to debate deliberately as we promote the appropriate development and application of these technologies.
Sangbae Kim leads the Biomimetic Robotics Laboratory at MIT. The preceding is an adaptation of a blog Kim posted in June for Naver Labs.
#439451 12 Robotics Teams Will Hunt For ...
Last week, DARPA announced the twelve teams who will be competing in the Virtual Track of the DARPA Subterranean Challenge Finals, scheduled to take place in September in Louisville, KY. The robots and the environment may be virtual, but the prize money is very real, with $1.5 million of DARPA cash on the table for the teams who are able to find the most subterranean artifacts in the shortest amount of time.
You can check out the list of Virtual Track competitors here, but we’ll be paying particularly close attention to Team Coordinated Robotics and Team BARCS, who have been trading first and second place back and forth across the three previous competitions. But there are many other strong contenders, and since nearly a year will have passed between the Final and the previous Cave Circuit, there’s been plenty of time for all teams to have developed creative new ideas and improvements.
As a quick reminder, the SubT Final will include elements of tunnels, caves, and the urban underground. As before, teams will be using simulated models of real robots to explore the environment looking for artifacts (like injured survivors, cell phones, backpacks, and even hazardous gas), and they’ll have to manage things like austere navigation, degraded sensing and communication, dynamic obstacles, and rough terrain.
While we’re not sure exactly what the Virtual Track is going to look like, one of the exciting aspects of a virtual competition like this is how DARPA is not constrained by things like available physical space or funding. They could make a virtual course that incorporates the inside of the Egyptian pyramids, the Cheyenne Mountain military complex, and my basement, if they were so inclined. We are expecting a combination of the overall themes of the three previous virtual courses (tunnel, cave, and urban), but connected up somehow, and likely with a few surprises thrown in for good measure.
To some extent, the Virtual Track represents the best case scenario for SubT robots, in the sense that fewer things will just spontaneously go wrong. This is something of a compromise, since things very often spontaneously go wrong when you’re dealing with real robots in the real world. This is not to diminish the challenges of the Virtual Track in the least—even the virtual robots aren’t invincible, and their software will need to keep them from running into simulated walls or falling down simulated stairs. But as far as I know, the virtual robots will not experience damage during transport to the event, electronics shorting, motors burning out, emergency stop buttons being accidentally pressed, and that sort of thing. If anything, this makes the Virtual Track more exciting to watch, because you’re seeing teams of virtual robots on their absolute best behavior challenging each other primarily on the cleverness and efficiency of their programmers.
The other reason that the Virtual Track is more exciting is that unlike the Systems Track, there are no humans in the loop at all. Teams submit their software to DARPA, and then sit back and relax (or not) and watch their robots compete all by themselves in real time. This is a hugely ambitious way to do things, because a single human even a little bit in the loop can provide the kind of critical contextual world knowledge and mission perspective that robots often lack. A human in there somewhere is fine in the near to medium term, but full autonomy is the dream.
As for the Systems Track (which involves real robots on the physical course in Louisville), we’re not yet sure who all of the final competitors will be. The pandemic has made travel complicated, and some international teams aren’t yet sure whether they’ll be able to make it. Either way, we’ll be there at the end of September, when we’ll be able to watch both the Systems and Virtual Track teams compete for the SubT Final championship.
#439447 Nothing Can Keep This Drone Down
When life knocks you down, you’ve got to get back up. Ladybugs take this advice seriously in the most literal sense. If caught on their backs, the insects are able to use their tough exterior wings, called elytra (lately made famous by the game Minecraft), to right themselves in just a fraction of a second.
Inspired by this approach, researchers have created self-righting drones with artificial elytra. Simulations and experiments show that the artificial elytra can not only help salvage fixed-wing drones from compromising positions, but also improve the aerodynamics of the vehicles during flight. The results are described in a study published July 9 in IEEE Robotics and Automation Letters.
Charalampos Vourtsis is a doctoral assistant at the Laboratory of Intelligent Systems, Ecole Polytechnique Federale de Lausanne in Switzerland who co-created the new design. He notes that beetles, including ladybugs, have existed for tens of millions of years. “Over that time, they have developed several survival mechanisms that we found to be a source of inspiration for applications in modern robotics,” he says.
His team was particularly intrigued by beetles’ elytra, which for ladybugs are the famous black-spotted, red exterior wings. Underneath the elytra is the hind wing, the semi-transparent appendage that’s actually used for flight.
When stuck on their backs, ladybugs use their elytra to stabilize themselves, and then thrust their legs or hind wings in order to pitch over and self-right. Vourtsis’ team designed Micro Aerial Vehicles (MAVs) that use a similar technique, but with actuators to provide the self-righting force. “Similar to the insect, the artificial elytra feature degrees of freedom that allow them to reorient the vehicle if it flips over or lands upside down,” explains Vourtsis.
The researchers created and tested artificial elytra of different lengths (11, 14 and 17 centimeters) and torques to determine the most effective combination for self-righting a fixed-wing drone. While torque had little impact on performance, the length of elytra was found to be influential.
On a flat, hard surface, the shorter elytra lengths yielded mixed results. However, the longer length was associated with a perfect success rate. The longer elytra were then tested on different inclines of 10°, 20° and 30°, and at different orientations. The drones used the elytra to self-right themselves in all scenarios, except for one position at the steepest incline.
The design was also tested on seven different terrains: pavement, coarse sand, fine sand, rocks, shells, wood chips, and grass. The drones were able to self-right with a perfect success rate across all terrains, with the exception of grass and fine sand. Vourtsis notes that the current design was made from widely available materials and a simple scale model of the beetle’s elytra—but further optimization may help the drones self-right on these more difficult terrains.
As an added bonus, the elytra were found to add non-negligible lift during flight, which offsets their weight.
Vourtsis says his team hopes to benefit from other design features of the beetles’ elytra. “We are currently investigating elytra for protecting folding wings when the drone moves on the ground among bushes, stones, and other obstacles, just like beetles do,” explains Vourtsis. “That would enable drones to fly long distances with large, unfolded wings, and safely land and locomote in a compact format in narrow spaces.”