
#439941 Flexible Monocopter Drone Can Be ...

It turns out that you don't need a lot of hardware to make a flying robot. Flying robots are usually way, way, way over-engineered, with ridiculously over-the-top components like two whole wings or an obviously ludicrous four separate motors. Maybe that kind of stuff works for people with more funding than they know what to do with, but for anyone trying to keep to a reasonable budget, all it actually takes to make a flying robot is one single airfoil plus an attached fixed-pitch propeller. And if you make that airfoil flexible, you can even fold the entire thing up into a sort of flying robotic Swiss roll.
This type of drone is called a monocopter, and the design is very generally based on samara seeds, which are those single-wing seed pods that spin down from maple trees. The ability to spin slows the seeds' descent to the ground, allowing them to spread farther from the tree. It's an inherently stable design, meaning that it'll spin all by itself and do so in a stable and predictable way, which is a nice feature for a drone to have—if everything completely dies, it'll just spin itself gently down to a landing by default.

The monocopter we're looking at here, called F-SAM, comes from the Singapore University of Technology & Design, and we've written about some of their flying robots in the past, including this transformable hovering rotorcraft. F-SAM stands for Foldable Single Actuator Monocopter, and as you might expect, it's a monocopter that can fold up and uses just one single actuator for control.

There may not be a lot going on here hardware-wise, but that's part of the charm of this design. The single actuator provides complete control of the aircraft: increasing the throttle increases the RPM of the aircraft, causing it to gain altitude, which is pretty straightforward. Directional control is trickier, but not much trickier, requiring repetitive pulsing of the motor at the point during the aircraft's spin when it's pointed in the direction you want it to go. F-SAM is operating in a motion-capture environment in the video to explore its potential for precision autonomy, but it's not restricted to that environment, and doesn't require external sensing for control.
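To make that control scheme concrete, here's a minimal, hypothetical sketch of how a single-actuator controller might be structured: a collective throttle term sets the average spin rate (and thus altitude), and a pulse term is added only while the wing sweeps through the desired direction of travel. The function and parameter names are ours, for illustration only, and not F-SAM's actual flight code.

```python
import math

def motor_command(heading, base_throttle, pulse_gain, target_bearing,
                  pulse_width=math.radians(40)):
    """Hypothetical single-actuator monocopter controller (not F-SAM's real code).

    heading        -- current azimuth of the wing/thrust direction (radians),
                      assumed available from onboard sensing
    base_throttle  -- collective term: more throttle means faster spin, more lift
    pulse_gain     -- extra throttle applied once per revolution to nudge the
                      craft toward target_bearing
    """
    # Smallest signed angle between where the wing points and where we want to go.
    error = math.atan2(math.sin(target_bearing - heading),
                       math.cos(target_bearing - heading))
    throttle = base_throttle
    if abs(error) < pulse_width / 2:   # pulse only while sweeping past the target bearing
        throttle += pulse_gain
    return min(max(throttle, 0.0), 1.0)  # clamp to a valid throttle range

# One simulated revolution at a 10 Hz spin rate, sampled at 1 kHz:
spin_rate = 2 * math.pi * 10  # rad/s
for i in range(100):
    heading = (spin_rate * i / 1000.0) % (2 * math.pi)
    cmd = motor_command(heading, base_throttle=0.6, pulse_gain=0.2,
                        target_bearing=math.radians(90))
    # cmd rises to 0.8 only while the wing points within ±20° of the target.
```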

While F-SAM's control board was custom designed and the wing requires some fabrication, the rest of the parts are cheap and off the shelf. The total weight of F-SAM is just 69g, of which nearly 40% is battery, yielding a flight time of about 16 minutes. If you look closely, you'll also see a teeny little carbon fiber leg of sorts that keeps the prop up off the ground, so that F-SAM can take off from the ground without the propeller striking it.
You can find the entire F-SAM paper open access here, but we also asked the authors a couple of extra questions.
IEEE Spectrum: It looks like you explored different materials and combinations of materials for the flexible wing structure. Why did you end up with this mix of balsa wood and plastic?

Shane Kyi Hla Win: The wing structure of a monocopter requires rigidity in order to be controllable in flight. Although it is possible for the monocopter to fly with the more flexible materials we tested, such as flexible plastic or polyimide flex, they allow the wing to twist freely mid-flight, making the cyclic control effort from the motor less effective. The balsa laminated with plastic provides enough rigidity for effective control, while allowing folding into a pre-determined triangular fold.
Can F-SAM fly outdoors? What is required to fly it outside of a motion capture environment?
Yes, it can fly outdoors. It is passively stable, so it does not require closed-loop control for its flight. The motion capture environment provides its absolute position for station-holding and waypoint flights when indoors. For outdoor flight, an electronic compass provides the relative heading for basic cyclic control. We are working on a prototype with an integrated GPS for outdoor autonomous flights.
Would you be able to add a camera or other sensors to F-SAM?
A camera can be added (we have done this before), but due to its spinning nature, images captured can come out blurry. 360-degree cameras are becoming lighter and smaller, and we may try putting one on F-SAM or other monocopters we have. Other possible sensors include a LiDAR or a time-of-flight (ToF) sensor. With LiDAR, the platform has an advantage, as it is already spinning at a known RPM: a conventional LiDAR system requires a dedicated actuator to create a spinning motion, whereas F-SAM already possesses the natural spinning dynamics, making LiDAR integration lighter and more efficient.
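To illustrate that point about the platform's spin doubling as a lidar's scanning motion, here is a hypothetical sketch (our illustration, not F-SAM firmware) that converts range readings from a single fixed-beam sensor on a body spinning at a known RPM into a 2D scan:

```python
import math

# Hypothetical sketch: a single fixed-beam rangefinder on a platform spinning at a
# known RPM sweeps out a 2D scan "for free," because the body itself provides the
# rotation a conventional lidar gets from a dedicated spinning actuator.
def sweep_to_points(ranges_m, timestamps_s, rpm):
    """Map (range, time) samples to 2D points in the plane of rotation."""
    omega = 2 * math.pi * rpm / 60.0              # spin rate in rad/s
    points = []
    for r, t in zip(ranges_m, timestamps_s):
        theta = (omega * t) % (2 * math.pi)       # beam azimuth at sample time t
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Example: eight readings spread over one revolution at 120 rpm (0.5 s per rev).
pts = sweep_to_points([1.0] * 8, [i * 0.0625 for i in range(8)], rpm=120)
```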
Your paper says that “in the future, we may look into possible launching of F-SAM directly from the container, without the need for human intervention.” Can you describe how this would happen?
Currently, F-SAM can be folded into a compact form and stored inside a container. However, it still requires a human to unfold it and either hand-launch it or put it on the floor to fly off. In the future, we envision that F-SAM would be stored inside a container with a mechanism (such as pressurized gas) to catapult the folded unit into the air, where it can begin unfolding immediately thanks to the elastic materials used. The motor can then initiate the spin, which allows the wing to straighten out due to centrifugal forces.
Do you think F-SAM would make a good consumer drone?
F-SAM could be a good toy, but it may not be a good alternative to quadcopters if the objective is conventional aerial photography or videography. However, it can be a good contender for single-use GPS-guided reconnaissance missions. As it uses only one actuator for its flight, it can be made relatively cheaply. It is also very quiet in flight and easily camouflaged once landed. Various lightweight sensors can be integrated onto the platform for different types of missions, such as climate monitoring. F-SAM units can be deployed from the air, since they can autorotate on their way down while also flying under power for certain periods, allowing extended meteorological data collection.
What are you working on next?
We have a few exciting projects on hand, most of which focus on a 'do more with less' theme. This means our projects aim to achieve multiple missions and flight modes while using as few actuators as possible. Like F-SAM, which uses only one actuator to achieve controllable flight, another project we are working on is the fully autorotating version, named Samara Autorotating Wing (SAW). This platform, published earlier this year in IEEE Transactions on Robotics, is able to achieve two flight modes (autorotation and diving) with just one actuator. It is ideal for deploying single-use sensors to remote locations. For example, we can use the platform to deploy sensors for forest monitoring or wildfire alert systems. The sensors can land on tree canopies, and once landed, the wing provides the necessary area for capturing solar energy for persistent operation over several years. Another interesting scenario is using the autorotating platform to guide radiosondes back to a collection point once their journey upward is completed. Currently, many radiosondes are sent up with hydrogen balloons from weather stations all across the world (more than 20,000 annually from Australia alone), and once a balloon reaches a high altitude and bursts, the sensors drop back to earth and no effort is spent to retrieve them. By guiding these sensors back to a collection point, millions of dollars can be saved every year—and also [it helps] save the environment by polluting less.


#439804 How Quantum Computers Can Be Used to ...

Using computer simulations to design new chips played a crucial role in the rapid improvements in processor performance we’ve experienced in recent decades. Now Chinese researchers have extended the approach to the quantum world.

Electronic design automation tools started to become commonplace in the early 1980s as the complexity of processors rose exponentially, and today they are an indispensable tool for chip designers.

More recently, Google has been turbocharging the approach by using artificial intelligence to design the next generation of its AI chips. This holds the promise of setting off a process of recursive self-improvement that could lead to rapid performance gains for AI.

Now, New Scientist has reported on a team from the University of Science and Technology of China in Shanghai that has applied the same ideas to another emerging field of computing: quantum processors. In a paper posted to the arXiv pre-print server, the researchers describe how they used a quantum computer to design a new type of qubit that significantly outperformed their previous design.

“Simulations of high-complexity quantum systems, which are intractable for classical computers, can be efficiently done with quantum computers,” the authors wrote. “Our work opens the way to designing advanced quantum processors using existing quantum computing resources.”

At the heart of the idea is the fact that the complexity of quantum systems grows exponentially as they increase in size. As a result, even the most powerful supercomputers struggle to simulate fairly small quantum systems.

This was the basis for Google’s groundbreaking display of “quantum supremacy” in 2019. The company’s researchers used a 53-qubit processor to run a random quantum circuit a million times and showed that it would take roughly 10,000 years to simulate the experiment on the world’s fastest supercomputer.
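The numbers behind that claim escalate quickly. Here is a back-of-the-envelope estimate (ours, not a figure from the paper or from Google): a brute-force state-vector simulation has to store 2^n complex amplitudes for n qubits, so memory alone becomes prohibitive long before runtime does.

```python
# Back-of-the-envelope: a brute-force state-vector simulation stores 2^n complex
# amplitudes for n qubits, at 16 bytes (complex128) apiece. Our estimate, not the paper's.
def statevector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

for n in (20, 30, 40, 53):
    print(f"{n} qubits: {statevector_bytes(n):.3e} bytes")

# 20 qubits: 1.678e+07 bytes  (~17 MB, trivial)
# 30 qubits: 1.718e+10 bytes  (~17 GB, a beefy workstation)
# 40 qubits: 1.759e+13 bytes  (~18 TB, a cluster)
# 53 qubits: 1.441e+17 bytes  (~144 PB, beyond any machine's memory)
```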

This means that using classical computers to help in the design of new quantum computers is likely to hit fundamental limits pretty quickly. Using a quantum computer, however, sidesteps the problem because it can exploit the same oddities of the quantum world that make the problem complex in the first place.

This is exactly what the Chinese researchers did. They used an algorithm called a variational quantum eigensolver to simulate the kind of superconducting electronic circuit found at the heart of a quantum computer. This was used to explore what happens when certain energy levels in the circuit are altered.
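For readers curious what a variational quantum eigensolver actually does, here is a deliberately tiny, classical sketch of the algorithm's structure: a parameterized trial state, an energy (expectation value) to evaluate, and a classical optimizer searching over the parameters. The Hamiltonian and ansatz below are made up for illustration and are not the circuit from the paper; on real hardware the energy would be estimated by repeatedly measuring a quantum circuit rather than by matrix algebra.

```python
import numpy as np

# Toy VQE: classical stand-in for the quantum part, to show the algorithm's shape.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Made-up single-qubit Hamiltonian for illustration (not the circuit from the paper).
H = 0.7 * Z + 0.3 * X

def ansatz(theta, phi):
    """Parameterized single-qubit trial state |psi(theta, phi)>."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

def energy(theta, phi):
    """Expectation value <psi|H|psi> for the trial state."""
    psi = ansatz(theta, phi)
    return float(np.real(np.conj(psi) @ H @ psi))

# Classical outer loop: a crude grid search over the variational parameters.
thetas = np.linspace(0, np.pi, 181)
phis = np.linspace(0, 2 * np.pi, 181)
best = min(energy(t, p) for t in thetas for p in phis)

print("VQE estimate of ground-state energy:", round(best, 4))
print("Exact ground-state energy          :", round(np.linalg.eigvalsh(H)[0], 4))
```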

Normally this kind of experiment would require them to build large numbers of physical prototypes and test them, but instead the team was able to rapidly model the impact of the changes. The upshot was that the researchers discovered a new type of qubit that was more powerful than the one they were already using.

Any two-level quantum system can act as a qubit, but most superconducting quantum computers use transmons, which encode quantum states into the oscillations of electrons. By tweaking the energy levels of their simulated quantum circuit, the researchers were able to discover a new qubit design they dubbed a plasonium.

It is less than half the size of a transmon, and when the researchers fabricated it they found that it holds its quantum state for longer and is less prone to errors. It still works on similar principles to the transmon, so it’s possible to manipulate it using the same control technologies.

The researchers point out that this is only a first prototype, so with further optimization, as well as the integration of recent progress in new superconducting materials and surface treatment methods, they expect performance to improve even more.

But the new qubit the researchers have designed is probably not their most significant contribution. By demonstrating that even today’s rudimentary quantum computers can help design future devices, they’ve opened the door to a virtuous cycle that could significantly speed innovation in this field.

Image Credit: Pete Linforth from Pixabay


#439592 Robot Shows How Simple Swimming Can Be

Lots of robots use bioinspiration in their design. Humanoids, quadrupeds, snake robots—if an animal has figured out a clever way of doing something, odds are there's a robot that's tried to duplicate it. But animals are often just a little too clever for the robots we build to mimic them, which is why researchers at the Swiss Federal Institute of Technology Lausanne (EPFL) are using robots to learn how animals themselves do what they do. In a paper published today in Science Robotics, roboticists from EPFL's Biorobotics Laboratory introduce a robotic eel that leverages sensory feedback from the water it swims through to coordinate its motion without the need for central control, suggesting a path towards simpler, more robust mobile robots.

The robotic eel—called AgnathaX—is a descendant of AmphiBot, which has been swimming around at EPFL for something like two decades. AmphiBot's elegant motion in the water comes from the equivalent of what are called central pattern generators (CPGs), which are sequences of neural circuits (the biological kind) that generate the sort of rhythms that you see in eel-like animals that rely on oscillations to move. It's possible to replicate these biological circuits using newfangled electronic circuits and software, leading to the same kind of smooth (albeit robotic) motion in AmphiBot.

Biological researchers had pretty much decided that CPGs explained the extent of wiggly animal motion, until it was discovered that you can chop an eel's spinal cord in half and it'll somehow maintain its coordinated undulatory swimming performance. Which is kinda nuts, right? Obviously, something else must be going on, but trying to futz with eels to figure out exactly what it is isn't, I would guess, pleasant for either researchers or their test subjects, which is where the robots come in. We can't make robotic eels that are exactly like the real thing, but we can duplicate some of their sensing and control systems well enough to understand how they do what they do.

AgnathaX exhibits the same smooth motions as the original version of AmphiBot, but it does so without having to rely on centralized programming that would be the equivalent of a biological CPG. Instead, it uses skin sensors that can detect pressure changes in the water around it, a feature also found on actual eels. With these pressure sensors hooked up to AgnathaX's motorized segments, the robot can generate swimming motions even if its segments aren't connected with each other—without a centralized nervous system, in other words. This spontaneous syncing up of disconnected moving elements is called entrainment, and the best demo of it that I've seen is this one, using metronomes:

UCLA Physics
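To get a feel for how independent oscillators can fall into step through a shared medium alone, here's a toy, hypothetical model (an analogy, not the AgnathaX controller): each "segment" is a phase oscillator that never talks to its neighbors and reacts only to a global signal standing in for the surrounding water.

```python
import numpy as np

# Toy entrainment demo: N oscillators that never talk to each other directly and
# only react to a shared "medium" signal (a crude stand-in for the water), yet
# still fall into step.
rng = np.random.default_rng(0)
N, dt, steps = 10, 0.01, 4000
K = 1.5                                   # how strongly each segment feels the medium
omega = rng.normal(2 * np.pi, 0.3, N)     # slightly different natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)      # random starting phases

for step in range(steps):
    field = np.mean(np.exp(1j * theta))   # the shared medium: everyone's combined output
    # Each oscillator senses only the medium (its strength and phase), not its neighbors.
    theta += dt * (omega + K * np.abs(field) * np.sin(np.angle(field) - theta))
    if step % 1000 == 0:
        print(f"t = {step * dt:4.0f} s, coherence = {np.abs(field):.2f}")
# Coherence starts low and climbs toward 1.0 as the oscillators entrain.
```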

The reason why this isn't just neat but also useful is that it provides a secondary method of control for robots. If the centralized control system of your swimming robot gets busted, you can rely on this water pressure-mediated local control to generate a swimming motion. And there are applications for modular robots as well, since you can potentially create a swimming robot out of a bunch of different physically connected modules that don't even have to talk to each other.

For more details, we spoke with Robin Thandiackal and Kamilo Melo at EPFL, first authors on the Science Robotics paper.

IEEE Spectrum: Why do you need a robot to do this kind of research?

Thandiackal and Melo: From a more general perspective, with this kind of research we learn and understand how a system works by building it. This then allows us to modify and investigate the different components and understand their contribution to the system as a whole.

In a more specific context, it is difficult to separate the different components of the nervous system with respect to locomotion within a live animal. The central components are especially difficult to remove, and this is where a robot or a simulated model becomes useful. We used both in our study. The robot has the unique advantage of operating within the real physics of the water, whereas these dynamics are approximated in simulation. However, we are confident in our simulations too, because we validated them against the robot.

How is the robot model likely to be different from real animals? What can't you figure out using the robot, and how much could the robot be upgraded to fill that gap?

Thandiackal and Melo: The robot is by no means an exact copy of a real animal, only a first approximation. Instead, from observations and previous knowledge of real animals, we were able to create a mathematical representation of the neuromechanical control in real animals, and we implemented this mathematical representation of locomotion control on the robot to create a model. As the robot interacts with the real physics of undulatory swimming, we put great effort into informing our design with the morphological and physiological characteristics of the real animal. This accounts, for example, for the scaling, morphology, and aspect ratio of the robot with respect to undulatory animals, and for the muscle model that we used to approximate the viscoelastic characteristics of real muscles with a rotational joint.

Upgrading the robot is not going to be making it more “biological.” Again, the robot is part of the model, not a copy of the real biology. For the sake of this project, the robot was sufficient, and only a few things were missing in our design. You can even add other types of sensors and use the same robot base. However, if we would like to improve our robot for the future, it would be interesting to collect other fluid information like the surrounding fluid speed simultaneously with the force sensing, or to measure hydrodynamic pressure directly. Finally, we aim to test our model of undulatory swimming using a robot with three-dimensional capabilities, something which we are currently working on.


What aspects of the nervous system's role in generating undulatory motion in water aren't redundant with the force feedback you describe?

Thandiackal and Melo: Apart from the generation of oscillations and intersegmental coupling, which we found can be redundantly generated by the force feedback, the central nervous system still provides unique higher level commands like steering to regulate swimming direction. These commands typically originate in the brain (supraspinal) and are at the same time influenced by sensory signals. In many fish the lateral line organ, which directly connects to the brain, helps to inform the brain, e.g., to maintain position (rheotaxis) under variable flow conditions.

How can this work lead to robots that are more resilient?

Thandiackal and Melo: Robots that have our complete control architecture, with both peripheral and central components, are remarkably fault-tolerant and robust against damage in their sensors, communication buses, and control circuits. In principle, the robot should have the same fault-tolerance as demonstrated in simulation, with the ability to swim despite missing sensors, broken communication bus, or broken local microcontroller. Our control architecture offers very graceful degradation of swimming ability (as opposed to catastrophic failure).

Why is this discovery potentially important for modular robots?

Thandiackal and Melo: We showed that undulatory swimming can emerge in a self-organized manner by incorporating local force feedback without explicit communication between modules. In principle, we could create swimming robots of different sizes by simply attaching independent modules in a chain (e.g., without a communication bus between them). This can be useful for the design of modular swimming units with a high degree of reconfigurability and robustness, e.g. for search and rescue missions or environmental monitoring. Furthermore, the custom-designed sensing units provide a new way of accurate force sensing in water along the entirety of the body. We therefore hope that such units can help swimming robots to navigate through flow perturbations and enable advanced maneuvers in unsteady flows.


#439532 Lethal Autonomous Weapons Exist; They ...

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
A chilling future that some had said might not arrive for many years to come is, in fact, already here. According to a recent UN report, a drone airstrike in Libya from the spring of 2020—made against Libyan National Army forces by Turkish-made STM Kargu-2 drones on behalf of Libya's Government of National Accord—was conducted by weapons systems with no known humans “in the loop.”
In so many words, the red line of autonomous targeting of humans has now been crossed.
To the best of our knowledge, this official United Nations reporting marks the first documented use case of a lethal autonomous weapon system akin to what has elsewhere been called a “Slaughterbot.” We believe this is a landmark moment. Civil society organizations, such as ours, have previously advocated for a preemptive treaty prohibiting the development and use of lethal autonomous weapons, much as blinding weapons were preemptively banned in 1998. The window for preemption has now passed, but the need for a treaty is more urgent than ever.
The STM Kargu-2 is a flying quadcopter that weighs a mere 7 kg, is being mass-produced, is capable of fully autonomous targeting, can form swarms, remains fully operational when GPS and radio links are jammed, and is equipped with facial recognition software to target humans. In other words, it's a Slaughterbot.
The UN report notes: “Logistics convoys and retreating [Haftar Affiliated Forces] were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see Annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition.” Annex 30 of the report depicts photographic evidence of the downed STM Kargu-2 system.

UNITED NATIONS

In a previous effort to identify consensus areas for prohibition, we brought together experts with a range of views on lethal autonomous weapons to brainstorm a way forward. We published the agreed findings in “A Path Towards Reasonable Autonomous Weapons Regulation,” which suggested a “time-limited moratorium on the development, deployment, transfer, and use of anti-personnel lethal autonomous weapon systems” as a first, and absolute minimum, step for regulation.
A recent position statement from the International Committee of the Red Cross on autonomous weapons systems concurs. It states that “use of autonomous weapon systems to target human beings should be ruled out. This would best be achieved through a prohibition on autonomous weapon systems that are designed or used to apply force against persons.” This sentiment is shared by many civil society organizations, such as the UK-based advocacy organization Article 36, which recommends that “An effective structure for international legal regulation would prohibit certain configurations—such as systems that target people.”
The “Slaughterbots” Question
In 2017, the Future of Life Institute, which we represent, released a nearly eight-minute-long video titled “Slaughterbots”—which was viewed by an estimated 75 million people online—dramatizing the dangers of lethal autonomous weapons. At the time of release, the video received both praise and criticism. Paul Scharre's Dec. 2017 IEEE Spectrum article “Why You Shouldn't Fear Slaughterbots” argued that “Slaughterbots” was “very much science fiction” and a “piece of propaganda.” At a Nov. 2017 meeting about lethal autonomous weapons in Geneva, Switzerland, the Russian ambassador to the UN also reportedly dismissed it, saying that such concerns were 25 or 30 years in the future. We addressed these critiques in our piece—also for Spectrum— titled “Why You Should Fear Slaughterbots–A Response.” Now, less than four years later, reality has made the case for us: The age of Slaughterbots appears to have begun.

We produced “Slaughterbots” to educate the public and policymakers alike about the potential imminent dangers of small, cheap, and ubiquitous lethal autonomous weapons systems. Beyond the moral issue of handing over decisions over life and death to algorithms, the video pointed out that autonomous weapons will, inevitably, turn into weapons of mass destruction, precisely because they require no human supervision and can therefore be deployed in vast numbers. (A related point, concerning the tactical agility of such weapons platforms, was made in Spectrum last month in an article by Natasha Bajema.) Furthermore, like small arms, autonomous weaponized drones will proliferate easily on the international arms market. As the “Slaughterbots” video's epilogue explained, all the component technologies were already available, and we expected militaries to start deploying such weapons very soon. That prediction was essentially correct.
The past few years have seen a series of media reports about military testing of ever-larger drone swarms and battlefield use of weapons with increasingly autonomous functions. In 2019, then-Secretary of Defense Mark Esper, at a meeting of the National Security Commission on Artificial Intelligence, remarked, “As we speak, the Chinese government is already exporting some of its most advanced military aerial drones to the Middle East.”
“In addition,” Esper added, “Chinese weapons manufacturers are selling drones advertised as capable of full autonomy, including the ability to conduct lethal targeted strikes.”
While China has entered the autonomous drone export business, other producers and exporters of highly autonomous weapons systems include Turkey and Israel. Small drone systems have progressed from being limited to semi-autonomous and anti-materiel targeting, to possessing fully autonomous operational modes equipped with sensors that can identify, track, and target humans.
Azerbaijan's decisive advantage over Armenian forces in the 2020 Nagorno-Karabakh conflict has been attributed to its arsenal of cheap, kamikaze “suicide drones.” During the conflict, there was reported use of the Israeli Orbiter 1K and Harop, which are both loitering munitions that self-destruct on impact. These weapons are deployed by a human in a specific geographic region, but they ultimately select their own targets without human intervention. Azerbaijan's success with these weapons has provided a compelling precedent for how inexpensive, highly autonomous systems can enable militaries without an advanced air force to compete on the battlefield. The result has been a worldwide surge in demand for these systems, as the price of air superiority has gone down dramatically. While the systems used in Azerbaijan are arguably a software update away from autonomous targeting of humans, their described intended use was primarily against materiel targets such as radar systems and vehicles.
If, as it seems, the age of Slaughterbots is here, what can the world do about it? The first step must be an immediate moratorium on the development, deployment, and use of lethal autonomous weapons that target persons, combined with a commitment to negotiate a permanent treaty. We also need agreements that facilitate verification and enforcement, including design constraints on remotely piloted weapons that prevent software conversion to autonomous operation as well as industry rules to prevent large-scale, illicit weaponization of civilian drones.
We want nothing more than for our “Slaughterbots” video to become merely a historical reminder of a horrendous path not taken—a mistake the human race could have made, but didn't.


#439499 Why Robots Can’t Be Counted On to Find ...

On Thursday, a portion of the 12-story Champlain Towers South condominium building in Surfside, Florida (just outside of Miami) suffered a catastrophic partial collapse. As of Saturday morning, according to the Miami Herald, 159 people are still missing, and rescuers are removing debris with careful urgency while using dogs and microphones to search for survivors still trapped within a massive pile of tangled rubble.

It seems like robots should be ready to help with something like this. But they aren’t.

A Miami-Dade Fire Rescue official and a K-9 continue the search and rescue operations in the partially collapsed 12-story Champlain Towers South condo building on June 24, 2021 in Surfside, Florida.
JOE RAEDLE/GETTY IMAGES

The picture above shows what the site of the collapse in Florida looks like. It’s highly unstructured, and would pose a challenge for most legged robots to traverse, although you could see a tracked robot being able to manage it. But there are already humans and dogs working there, and as long as the environment is safe to move over, it’s not necessary or practical to duplicate that functionality with a robot, especially when time is critical.

What is desperately needed right now is a way of not just locating people underneath all of that rubble, but also getting an understanding of the structure of the rubble around a person, and what exactly is between that person and the surface. For that, we don’t need robots that can get over rubble: we need robots that can get into rubble. And we don’t have them.

To understand why, we talked with Robin Murphy at Texas A&M, who directs the Humanitarian Robotics and AI Laboratory, formerly the Center for Robot-Assisted Search and Rescue (CRASAR), which is now a non-profit. Murphy has been involved in applying robotic technology to disasters worldwide, including 9/11, Fukushima, and Hurricane Harvey. The work she’s doing isn’t abstract research—CRASAR deploys teams of trained professionals with proven robotic technology to assist (when asked) with disasters around the world, and then uses those experiences as the foundation of a data-driven approach to improve disaster robotics technology and training.

According to Murphy, using robots to explore rubble of collapsed buildings is, for the moment, not possible in any kind of way that could be realistically used on a disaster site. Rubble, generally, is a wildly unstructured and unpredictable environment. Most robots are simply too big to fit through rubble, and the environment isn’t friendly to very small robots either, since there’s frequently water from ruptured plumbing making everything muddy and slippery, among many other physical hazards. Wireless communication or localization is often impossible, so tethers are required, which solves the comms and power problems but can easily get caught or tangled on obstacles.

Even if you can build a robot small enough and durable enough to be able to physically fit through the kinds of voids that you’d find in the rubble of a collapsed building (like these snake robots were able to do in Mexico in 2017), useful mobility is about more than just following existing passages. Many disaster scenarios in robotics research assume that objectives are accessible if you just follow the right path, but real disasters aren’t like that, and large voids may require some amount of forced entry, if entry is even possible at all. An ability to forcefully burrow, which doesn’t really exist yet in this context but is an active topic of research, is critical for a robot to be able to move around in rubble where there may not be any tunnels or voids leading it where it wants to go.

And even if you can build a robot that can successfully burrow its way through rubble, there’s the question of what value it’s able to provide once it gets where it needs to be. Robotic sensing systems are in general not designed for extreme close quarters, and visual sensors like cameras can rapidly get damaged or get so much dirt on them that they become useless. Murphy explains that ideally, a rubble-exploring robot would be able to do more than just locate victims, but would also be able to use its sensors to assist in their rescue. “Trained rescuers need to see the internal structure of the rubble, not just the state of the victim. Imagine a surgeon who needs to find a bullet in a shooting victim, but does not have any idea of the layout of the victim’s organs; if the surgeon just cuts straight down, they may make matters worse. Same thing with collapses, it’s like the game of pick-up sticks. But if a structural specialist can see inside the pile of pick-up sticks, they can extract the victim faster and safer with less risk of a secondary collapse.”

Besides these technical challenges, the other huge part to all of this is that any system that you’d hope to use in the context of rescuing people must be fully mature. It’s obviously unethical to take a research-grade robot into a situation like the Florida building collapse and spend time and resources trying to prove that it works. “Robots that get used for disasters are typically used every day for similar tasks,” explains Murphy. For example, it wouldn’t be surprising to see drones being used to survey the parts of the building in Florida that are still standing to make sure that it’s safe for people to work nearby, because drones are a mature and widely adopted technology that has already proven itself. Until a disaster robot has achieved a similar level of maturity, we’re not likely to see it used in an active rescue.

Keeping in mind that there are no existing robots that fulfill all of the above criteria for actual use, we asked Murphy to describe her ideal disaster robot for us. “It would look like a very long, miniature ferret,” she says. “A long, flexible, snake-like body, with small legs and paws that can grab and push and shove.” The robo-ferret would be able to burrow, to wiggle and squish and squeeze its way through tight twists and turns, and would be equipped with functional eyelids to protect and clean its sensors. But since there are no robo-ferrets, what existing robot would Murphy like to see in Florida right now? “I’m not there in Miami,” Murphy tells us, “but my first thought when I saw this was I really hope that one day we’re able to commercialize Japan’s Active Scope Camera.”

The Active Scope Camera was developed at Tohoku University by Satoshi Tadokoro about 15 years ago. It operates kind of like a long, skinny, radially symmetrical bristlebot with the ability to push itself forward:

The hose is covered by inclined cilia. Motors with eccentric masses are installed in the cable; they excite vibration and cause an up-and-down motion of the cable. The tips of the cilia stick to the floor when the cable moves down and propel the body forward. When the cable moves up, the tips slip against the floor, so the body does not move back. A repetition of this process allows the cable to slowly move through narrow spaces in rubble piles.
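One rough way to see why vibration plus inclined cilia yields net forward motion is an asymmetric-friction ("ratchet") toy model: assume the body grips the ground during the downward half of each vibration cycle and slips freely during the upward half, so each cycle contributes a small net advance. The numbers below are invented for illustration and are not measurements of the Active Scope Camera.

```python
# Toy stick-slip "ratchet": inclined cilia grip the ground during the downward
# half of each vibration cycle (advancing the body) and slip during the upward
# half (so the body doesn't get dragged back). Invented numbers, for intuition only.
vibration_hz = 30.0        # grip/slip cycles per second
stroke_m = 0.002           # forward reach of the cilia tips per cycle
grip_efficiency = 0.6      # fraction of the stroke actually converted to motion

position_m = 0.0
seconds = 10
for _ in range(int(vibration_hz * seconds)):
    position_m += grip_efficiency * stroke_m   # down-phase: tips stick, body advances
    # up-phase: tips slip, zero backward motion, so nothing to subtract

print(f"net advance after {seconds} s: {position_m:.2f} m")   # ~0.36 m, a slow crawl
```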

“It's quirky, but the idea of being able to get into those small spaces and go about 30 feet in and look around is a big deal,” Murphy says. But the last publication we can find about this system is nearly a decade old—if it works so well, we asked Murphy, why isn’t it more widely available to be used after a building collapses? “When a disaster happens, there’s a little bit of interest, and some funding. But then that funding goes away until the next disaster. And after a certain point, there’s just no financial incentive to create an actual product that’s reliable in hardware and software and sensors, because fortunately events like this building collapse are rare.”

Dr. Satoshi Tadokoro inserting the Active Scope Camera robot at the 2007 Berkman Plaza II (Jacksonville, FL) parking garage collapse.
Photo: Center for Robot-Assisted Search and Rescue

The fortunate rarity of disasters like these complicates the development cycle of disaster robots as well, says Murphy. That’s part of the reason why CRASAR exists in the first place—it’s a way for robotics researchers to understand what first responders need from robots, and to test those robots in realistic disaster scenarios to determine best practices. “I think this is a case where policy and government can actually help,” Murphy tells us. “They can help by saying, we do actually need this, and we’re going to support the development of useful disaster robots.”

Robots should be able to help out in the situation happening right now in Florida, and we should be spending more time and effort on research in that direction, research that could potentially be saving lives. We’re close, but as with so many aspects of practical robotics, it feels like we’ve been close for years. There are systems out there with a lot of potential; they just need the help necessary to cross the gap from research project to a practical, useful system that can be deployed when needed.
