Tag Archives: can

#439804 How Quantum Computers Can Be Used to ...

Using computer simulations to design new chips played a crucial role in the rapid improvements in processor performance we’ve experienced in recent decades. Now Chinese researchers have extended the approach to the quantum world.

Electronic design automation tools started to become commonplace in the early 1980s as the complexity of processors rose exponentially, and today they are an indispensable tool for chip designers.

More recently, Google has been turbocharging the approach by using artificial intelligence to design the next generation of its AI chips. This holds the promise of setting off a process of recursive self-improvement that could lead to rapid performance gains for AI.

Now, New Scientist has reported on a team from the University of Science and Technology of China in Shanghai that has applied the same ideas to another emerging field of computing: quantum processors. In a paper posted to the arXiv pre-print server, the researchers describe how they used a quantum computer to design a new type of qubit that significantly outperformed their previous design.

“Simulations of high-complexity quantum systems, which are intractable for classical computers, can be efficiently done with quantum computers,” the authors wrote. “Our work opens the way to designing advanced quantum processors using existing quantum computing resources.”

At the heart of the idea is the fact that the complexity of quantum systems grows exponentially as they increase in size. As a result, even the most powerful supercomputers struggle to simulate fairly small quantum systems.
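The scaling problem is easy to make concrete: a full description of an n-qubit state requires 2^n complex amplitudes, so memory doubles with every added qubit. A quick back-of-the-envelope sketch:

```python
# Why classical simulation hits a wall: a dense n-qubit state vector
# holds 2**n complex amplitudes, at 16 bytes each (complex128).

def state_vector_bytes(n_qubits: int) -> int:
    """Bytes needed to store a dense n-qubit state vector in complex128."""
    return (2 ** n_qubits) * 16

print(state_vector_bytes(30) // 2**30)   # 16 GiB: fits on a workstation
print(state_vector_bytes(53) // 2**50)   # 128 PiB: beyond any supercomputer
```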

This was the basis for Google’s groundbreaking display of “quantum supremacy” in 2019. The company’s researchers used a 53-qubit processor to run a random quantum circuit a million times and showed that it would take roughly 10,000 years to simulate the experiment on the world’s fastest supercomputer.

This means that using classical computers to help in the design of new quantum computers is likely to hit fundamental limits pretty quickly. Using a quantum computer, however, sidesteps the problem because it can exploit the same oddities of the quantum world that make the problem complex in the first place.

This is exactly what the Chinese researchers did. They used an algorithm called a variational quantum eigensolver to simulate the kind of superconducting electronic circuit found at the heart of a quantum computer. This was used to explore what happens when certain energy levels in the circuit are altered.
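A variational quantum eigensolver pairs a parameterized quantum circuit, which prepares a trial state, with a classical optimizer that tunes the parameters to minimize the measured energy. A classically simulated toy version, using a hypothetical single-qubit Hamiltonian rather than the multi-qubit superconducting-circuit Hamiltonians in the paper, might look like this:

```python
import numpy as np

# Toy variational quantum eigensolver (VQE), classically simulated.
# The ansatz |psi(theta)> = Ry(theta)|0> is tuned to minimize the
# energy <psi|H|psi> of a single-qubit Hamiltonian H.
# (Hypothetical toy H, for illustration only.)

H = np.array([[1.0, 1.0],
              [1.0, -1.0]])          # eigenvalues are +/- sqrt(2)

def ansatz(theta: float) -> np.ndarray:
    """Ry(theta) applied to |0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta: float) -> float:
    psi = ansatz(theta)
    return float(psi @ H @ psi)

# Gradient-free search over the single parameter; a real VQE would use
# a classical optimizer driving measurements on a quantum processor.
thetas = np.linspace(0, 2 * np.pi, 1000)
best = min(thetas, key=energy)
print(f"estimated ground energy: {energy(best):.4f}")   # ~ -sqrt(2)
```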

Normally this kind of experiment would require them to build large numbers of physical prototypes and test them, but instead the team was able to rapidly model the impact of the changes. The upshot was that the researchers discovered a new type of qubit that was more powerful than the one they were already using.

Any two-level quantum system can act as a qubit, but most superconducting quantum computers use transmons, which encode quantum states into the oscillations of electrons. By tweaking the energy levels of their simulated quantum circuit, the researchers were able to discover a new qubit design they dubbed a plasonium.

It is less than half the size of a transmon, and when the researchers fabricated it they found that it holds its quantum state for longer and is less prone to errors. It still works on similar principles to the transmon, so it’s possible to manipulate it using the same control technologies.

The researchers point out that this is only a first prototype; with further optimization, and by integrating recent progress in superconducting materials and surface treatment methods, they expect performance to improve even more.

But the new qubit the researchers have designed is probably not their most significant contribution. By demonstrating that even today’s rudimentary quantum computers can help design future devices, they’ve opened the door to a virtuous cycle that could significantly speed innovation in this field.

Image Credit: Pete Linforth from Pixabay

Posted in Human Robots

#439592 Robot Shows How Simple Swimming Can Be

Lots of robots use bioinspiration in their design. Humanoids, quadrupeds, snake robots—if an animal has figured out a clever way of doing something, odds are there's a robot that's tried to duplicate it. But animals are often just a little too clever for the robots we build to mimic them, which is why researchers at the Swiss Federal Institute of Technology Lausanne (EPFL) are using robots to learn about how animals themselves do what they do. In a paper published today in Science Robotics, roboticists from EPFL's Biorobotics Laboratory introduce a robotic eel that leverages sensory feedback from the water it swims through to coordinate its motion without the need for central control, suggesting a path towards simpler, more robust mobile robots.

The robotic eel—called AgnathaX—is a descendant of
AmphiBot, which has been swimming around at EPFL for something like two decades. AmphiBot's elegant motion in the water comes from the equivalent of what are called central pattern generators (CPGs), which are sequences of neural circuits (the biological kind) that generate the sort of rhythms you see in eel-like animals that rely on oscillations to move. It's possible to replicate these biological circuits using newfangled electronic circuits and software, leading to the same kind of smooth (albeit robotic) motion in AmphiBot.
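A CPG can be sketched as a chain of phase oscillators, each nudged toward a fixed phase lag behind the segment ahead of it, so that the joint angles trace a traveling wave down the body. This is a simplified illustration in the spirit of CPG models generally, not EPFL's actual AmphiBot controller:

```python
import numpy as np

# Minimal central pattern generator: a chain of phase oscillators in
# which each segment tracks the one ahead of it with a fixed phase lag,
# producing a traveling wave of joint angles, as in eel-like swimming.

N = 10                  # body segments
omega = 2 * np.pi       # intrinsic frequency: one oscillation per second
lag = 2 * np.pi / N     # desired phase lag between adjacent segments
k = 5.0                 # coupling strength
dt = 0.001              # integration step, seconds

phase = np.zeros(N)     # every segment starts in phase
for _ in range(20000):  # 20 simulated seconds
    dphase = np.full(N, omega)
    # each segment is pulled toward lagging its forward neighbor by `lag`
    dphase[1:] += k * np.sin(phase[:-1] - phase[1:] - lag)
    phase += dt * dphase

angles = np.sin(phase)                        # joint-angle command per segment
gaps = (phase[:-1] - phase[1:]) % (2 * np.pi)
print(np.round(gaps, 2))                      # every gap settles near `lag`
```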

Biological researchers had pretty much decided that CPGs explained the extent of wiggly animal motion, until it was discovered you can chop an eel's spinal cord in half, and it'll somehow maintain its coordinated undulatory swimming performance. Which is kinda nuts, right? Obviously, something else must be going on, but trying to futz with eels to figure out exactly what it was isn't, I would guess, pleasant for either researchers or their test subjects, which is where the robots come in. We can't make robotic eels that are exactly like the real thing, but we can duplicate some of their sensing and control systems well enough to understand how they do what they do.

AgnathaX exhibits the same smooth motions as the original version of AmphiBot, but it does so without having to rely on centralized programming that would be the equivalent of a biological CPG. Instead, it uses skin sensors that can detect pressure changes in the water around it, a feature also found on actual eels. With these pressure sensors hooked up to AgnathaX's motorized segments, the robot can generate swimming motions even if its segments aren't connected with each other—without a centralized nervous system, in other words. This spontaneous syncing up of disconnected moving elements is called entrainment, and the best demo of it that I've seen is this one, using metronomes:

UCLA Physics
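The same pull toward a shared rhythm can be sketched numerically with a Kuramoto-style model, where oscillators interact only through a mean field standing in for the shared medium (the wobbly platform for the metronomes, the water for AgnathaX). This is an illustrative model, not the paper's controller:

```python
import numpy as np

# Entrainment sketch: oscillators with no direct links to each other,
# coupled only through a shared mean field. Starting from random phases,
# the population pulls itself into sync.

rng = np.random.default_rng(0)
N = 8
omega = 2 * np.pi * rng.normal(1.0, 0.02, N)  # slightly detuned "metronomes"
theta = rng.uniform(0, 2 * np.pi, N)          # random starting phases
K, dt = 2.0, 0.001                            # coupling strength, time step

def order(theta):
    """Kuramoto order parameter r: 0 = incoherent, 1 = fully synced."""
    return abs(np.exp(1j * theta).mean())

r_start = order(theta)
for _ in range(30000):                        # 30 simulated seconds
    mean_field = np.exp(1j * theta).mean()    # what the shared medium "feels"
    r, psi = abs(mean_field), np.angle(mean_field)
    theta += dt * (omega + K * r * np.sin(psi - theta))
r_end = order(theta)
print(f"coherence: {r_start:.2f} -> {r_end:.2f}")  # climbs toward 1
```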

The reason why this isn't just neat but also useful is that it provides a secondary method of control for robots. If the centralized control system of your swimming robot gets busted, you can rely on this water pressure-mediated local control to generate a swimming motion. And there are applications for modular robots as well, since you can potentially create a swimming robot out of a bunch of different physically connected modules that don't even have to talk to each other.

For more details, we spoke with
Robin Thandiackal and Kamilo Melo at EPFL, first authors on the Science Robotics paper.

IEEE Spectrum: Why do you need a robot to do this kind of research?

Thandiackal and Melo: From a more general perspective, with this kind of research we learn and understand how a system works by building it. This then allows us to modify and investigate the different components and understand their contribution to the system as a whole.

In a more specific context, it is difficult to separate the different components of the nervous system with respect to locomotion within a live animal. The central components are especially difficult to remove, and this is where a robot or also a simulated model becomes useful. We used both in our study. The robot has the unique advantage of using it within the real physics of the water, whereas these dynamics are approximated in simulation. However, we are confident in our simulations too because we validated them against the robot.

How is the robot model likely to be different from real animals? What can't you figure out using the robot, and how much could the robot be upgraded to fill that gap?

Thandiackal and Melo: The robot is by no means an exact copy of a real animal, only a first approximation. Instead, from observing and previous knowledge of real animals, we were able to create a mathematical representation of the neuromechanical control in real animals, and we implemented this mathematical representation of locomotion control on the robot to create a model. As the robot interacts with the real physics of undulatory swimming, we put a great effort in informing our design with the morphological and physiological characteristics of the real animal. This for example accounts for the scaling, the morphology and aspect ratio of the robot with respect to undulatory animals, and the muscle model that we used to approximately represent the viscoelastic characteristics of real muscles with a rotational joint.

Upgrading the robot is not going to be making it more “biological.” Again, the robot is part of the model, not a copy of the real biology. For the sake of this project, the robot was sufficient, and only a few things were missing in our design. You can even add other types of sensors and use the same robot base. However, if we would like to improve our robot for the future, it would be interesting to collect other fluid information like the surrounding fluid speed simultaneously with the force sensing, or to measure hydrodynamic pressure directly. Finally, we aim to test our model of undulatory swimming using a robot with three-dimensional capabilities, something which we are currently working on.

Upgrading the robot is not going to be making it more “biological.” The robot is part of the model, not a copy of the real biology.

What aspects of the function of a nervous system to generate undulatory motion in water aren't redundant with the force feedback from motion that you describe?

Thandiackal and Melo: Apart from the generation of oscillations and intersegmental coupling, which we found can be redundantly generated by the force feedback, the central nervous system still provides unique higher level commands like steering to regulate swimming direction. These commands typically originate in the brain (supraspinal) and are at the same time influenced by sensory signals. In many fish the lateral line organ, which directly connects to the brain, helps to inform the brain, e.g., to maintain position (rheotaxis) under variable flow conditions.

How can this work lead to robots that are more resilient?

Thandiackal and Melo: Robots that have our complete control architecture, with both peripheral and central components, are remarkably fault-tolerant and robust against damage in their sensors, communication buses, and control circuits. In principle, the robot should have the same fault-tolerance as demonstrated in simulation, with the ability to swim despite missing sensors, broken communication bus, or broken local microcontroller. Our control architecture offers very graceful degradation of swimming ability (as opposed to catastrophic failure).

Why is this discovery potentially important for modular robots?

Thandiackal and Melo: We showed that undulatory swimming can emerge in a self-organized manner by incorporating local force feedback without explicit communication between modules. In principle, we could create swimming robots of different sizes by simply attaching independent modules in a chain (e.g., without a communication bus between them). This can be useful for the design of modular swimming units with a high degree of reconfigurability and robustness, e.g. for search and rescue missions or environmental monitoring. Furthermore, the custom-designed sensing units provide a new way of accurate force sensing in water along the entirety of the body. We therefore hope that such units can help swimming robots to navigate through flow perturbations and enable advanced maneuvers in unsteady flows.

Posted in Human Robots

#439499 Why Robots Can’t Be Counted On to Find ...

On Thursday, a portion of the 12-story Champlain Towers South condominium building in Surfside, Florida (just outside of Miami) suffered a catastrophic partial collapse. As of Saturday morning, according to the Miami Herald, 159 people are still missing, and rescuers are removing debris with careful urgency while using dogs and microphones to search for survivors still trapped within a massive pile of tangled rubble.

It seems like robots should be ready to help with something like this. But they aren’t.

A Miami-Dade Fire Rescue official and a K-9 continue the search and rescue operations in the partially collapsed 12-story Champlain Towers South condo building on June 24, 2021 in Surfside, Florida.
JOE RAEDLE/GETTY IMAGES

The picture above shows what the site of the collapse in Florida looks like. It’s highly unstructured, and would pose a challenge for most legged robots to traverse, although you could see a tracked robot being able to manage it. But there are already humans and dogs working there, and as long as the environment is safe to move over, it’s not necessary or practical to duplicate that functionality with a robot, especially when time is critical.

What is desperately needed right now is a way of not just locating people underneath all of that rubble, but also getting an understanding of the structure of the rubble around a person, and what exactly is between that person and the surface. For that, we don’t need robots that can get over rubble: we need robots that can get into rubble. And we don’t have them.

To understand why, we talked with Robin Murphy at Texas A&M, who directs the Humanitarian Robotics and AI Laboratory, formerly the Center for Robot-Assisted Search and Rescue (CRASAR), which is now a non-profit. Murphy has been involved in applying robotic technology to disasters worldwide, including 9/11, Fukushima, and Hurricane Harvey. The work she’s doing isn’t abstract research—CRASAR deploys teams of trained professionals with proven robotic technology to assist (when asked) with disasters around the world, and then uses those experiences as the foundation of a data-driven approach to improve disaster robotics technology and training.

According to Murphy, using robots to explore rubble of collapsed buildings is, for the moment, not possible in any kind of way that could be realistically used on a disaster site. Rubble, generally, is a wildly unstructured and unpredictable environment. Most robots are simply too big to fit through rubble, and the environment isn’t friendly to very small robots either, since there’s frequently water from ruptured plumbing making everything muddy and slippery, among many other physical hazards. Wireless communication or localization is often impossible, so tethers are required, which solves the comms and power problems but can easily get caught or tangled on obstacles.

Even if you can build a robot small enough and durable enough to be able to physically fit through the kinds of voids that you’d find in the rubble of a collapsed building (like these snake robots were able to do in Mexico in 2017), useful mobility is about more than just following existing passages. Many disaster scenarios in robotics research assume that objectives are accessible if you just follow the right path, but real disasters aren’t like that, and large voids may require some amount of forced entry, if entry is even possible at all. An ability to forcefully burrow, which doesn’t really exist yet in this context but is an active topic of research, is critical for a robot to be able to move around in rubble where there may not be any tunnels or voids leading it where it wants to go.

And even if you can build a robot that can successfully burrow its way through rubble, there’s the question of what value it’s able to provide once it gets where it needs to be. Robotic sensing systems are in general not designed for extreme close quarters, and visual sensors like cameras can rapidly get damaged or get so much dirt on them that they become useless. Murphy explains that ideally, a rubble-exploring robot would be able to do more than just locate victims, but would also be able to use its sensors to assist in their rescue. “Trained rescuers need to see the internal structure of the rubble, not just the state of the victim. Imagine a surgeon who needs to find a bullet in a shooting victim, but does not have any idea of the layout of the victim’s organs; if the surgeon just cuts straight down, they may make matters worse. Same thing with collapses, it’s like the game of pick-up sticks. But if a structural specialist can see inside the pile of pick-up sticks, they can extract the victim faster and safer with less risk of a secondary collapse.”

Besides these technical challenges, the other huge part to all of this is that any system that you’d hope to use in the context of rescuing people must be fully mature. It’s obviously unethical to take a research-grade robot into a situation like the Florida building collapse and spend time and resources trying to prove that it works. “Robots that get used for disasters are typically used every day for similar tasks,” explains Murphy. For example, it wouldn’t be surprising to see drones being used to survey the parts of the building in Florida that are still standing to make sure that it’s safe for people to work nearby, because drones are a mature and widely adopted technology that has already proven itself. Until a disaster robot has achieved a similar level of maturity, we’re not likely to see it take part in an active rescue.

Keeping in mind that there are no existing robots that fulfill all of the above criteria for actual use, we asked Murphy to describe her ideal disaster robot for us. “It would look like a very long, miniature ferret,” she says. “A long, flexible, snake-like body, with small legs and paws that can grab and push and shove.” The robo-ferret would be able to burrow, to wiggle and squish and squeeze its way through tight twists and turns, and would be equipped with functional eyelids to protect and clean its sensors. But since there are no robo-ferrets, what existing robot would Murphy like to see in Florida right now? “I’m not there in Miami,” Murphy tells us, “but my first thought when I saw this was I really hope that one day we’re able to commercialize Japan’s Active Scope Camera.”

The Active Scope Camera was developed at Tohoku University by Satoshi Tadokoro about 15 years ago. It operates kind of like a long, skinny, radially symmetrical bristlebot with the ability to push itself forward:

The hose is covered by inclined cilia. Motors with eccentric mass are installed in the cable and excite vibration and cause an up-and-down motion of the cable. The tips of the cilia stick on the floor when the cable moves down and propel the body. Meanwhile, the tips slip against the floor, and the body does not move back when it moves up. A repetition of this process showed that the cable can slowly move in a narrow space of rubble piles.
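In other words, the cilia drive is a friction ratchet: the vibration cycle alternates a grip phase that pushes the body forward and a slip phase that gives only a little of that progress back. A toy model of the net motion, with purely illustrative numbers rather than measurements from the real robot:

```python
# Toy friction-ratchet model of the Active Scope Camera's cilia drive:
# each vibration cycle has a "down" half (cilia tips grip the floor and
# propel the body forward) and an "up" half (tips slip, so the body
# barely moves back). Step sizes here are made up for illustration.

GRIP_STEP = 1.0    # mm gained per down-stroke (tips anchored)
SLIP_STEP = 0.25   # mm lost per up-stroke (tips sliding)

def advance(cycles: int) -> float:
    """Net displacement in mm after a number of vibration cycles."""
    position = 0.0
    for _ in range(cycles):
        position += GRIP_STEP   # down-stroke: cilia stick, push forward
        position -= SLIP_STEP   # up-stroke: cilia slip, small backslide
    return position

print(advance(100))   # 75.0 mm: slow but steady net forward motion
```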

“It's quirky, but the idea of being able to get into those small spaces and go about 30 feet in and look around is a big deal,” Murphy says. But the last publication we can find about this system is nearly a decade old—if it works so well, we asked Murphy, why isn’t it more widely available to be used after a building collapses? “When a disaster happens, there’s a little bit of interest, and some funding. But then that funding goes away until the next disaster. And after a certain point, there’s just no financial incentive to create an actual product that’s reliable in hardware and software and sensors, because fortunately events like this building collapse are rare.”

Dr. Satoshi Tadokoro inserting the Active Scope Camera robot at the 2007 Berkman Plaza II (Jacksonville, FL) parking garage collapse.
Photo: Center for Robot-Assisted Search and Rescue

The fortunate rarity of disasters like these complicates the development cycle of disaster robots as well, says Murphy. That’s part of the reason why CRASAR exists in the first place—it’s a way for robotics researchers to understand what first responders need from robots, and to test those robots in realistic disaster scenarios to determine best practices. “I think this is a case where policy and government can actually help,” Murphy tells us. “They can help by saying, we do actually need this, and we’re going to support the development of useful disaster robots.”

Robots should be able to help out in the situation happening right now in Florida, and we should be spending more time and effort on research in that direction that could potentially be saving lives. We’re close, but as with so many aspects of practical robotics, it feels like we’ve been close for years. There are systems out there with a lot of potential; they just need all the help necessary to cross the gap from research project to a practical, useful system that can be deployed when needed.

Posted in Human Robots

#439447 Nothing Can Keep This Drone Down

When life knocks you down, you’ve got to get back up. Ladybugs take this advice seriously in the most literal sense. If caught on their backs, the insects are able to use their tough exterior wings, called elytra (of late made famous in the game Minecraft), to self-right themselves in just a fraction of a second.

Inspired by this approach, researchers have created self-righting drones with artificial elytra. Simulations and experiments show that the artificial elytra can not only help salvage fixed-wing drones from compromising positions, but also improve the aerodynamics of the vehicles during flight. The results are described in a study published July 9 in IEEE Robotics and Automation Letters.

Charalampos Vourtsis is a doctoral assistant at the Laboratory of Intelligent Systems, Ecole Polytechnique Federale de Lausanne in Switzerland who co-created the new design. He notes that beetles, including ladybugs, have existed for tens of millions of years. “Over that time, they have developed several survival mechanisms that we found to be a source of inspiration for applications in modern robotics,” he says.

His team was particularly intrigued by beetles’ elytra, which for ladybugs are their famous black-spotted, red exterior wings. Underneath the elytra are the hind wings, the semi-transparent appendages that are actually used for flight.

When stuck on their backs, ladybugs use their elytra to stabilize themselves, and then thrust their legs or hind wings in order to pitch over and self-right. Vourtsis’ team designed Micro Aerial Vehicles (MAVs) that use a similar technique, but with actuators to provide the self-righting force. “Similar to the insect, the artificial elytra feature degrees of freedom that allow them to reorient the vehicle if it flips over or lands upside down,” explains Vourtsis.

The researchers created and tested artificial elytra of different lengths (11, 14 and 17 centimeters) and torques to determine the most effective combination for self-righting a fixed-wing drone. While torque had little impact on performance, the length of elytra was found to be influential.

On a flat, hard surface, the shorter elytra lengths yielded mixed results. However, the longer length was associated with a perfect success rate. The longer elytra were then tested on different inclines of 10°, 20° and 30°, and at different orientations. The drones used the elytra to self-right themselves in all scenarios, except for one position at the steepest incline.

The design was also tested on seven different terrains: pavement, coarse sand, fine sand, rocks, shells, wood chips and grass. The drones were able to self-right with a perfect success rate across all terrains, with the exception of grass and fine sand. Vourtsis notes that the current design was made from widely available materials and a simple scale model of the beetle’s elytra—but further optimization may help the drones self-right on these more difficult terrains.

As an added bonus, the elytra were found to add non-negligible lift during flight, which offsets their weight.

Vourtsis says his team hopes to benefit from other design features of the beetles’ elytra. “We are currently investigating elytra for protecting folding wings when the drone moves on the ground among bushes, stones, and other obstacles, just like beetles do,” explains Vourtsis. “That would enable drones to fly long distances with large, unfolded wings, and safely land and locomote in a compact format in narrow spaces.”

Posted in Human Robots
