#439628 How a Simple Crystal Could Help Pave the ...
Vaccine and drug development, artificial intelligence, transport and logistics, climate science—these are all areas that stand to be transformed by the development of a full-scale quantum computer. And there has been explosive growth in quantum computing investment over the past decade.
Yet current quantum processors are relatively small in scale, with fewer than 100 qubits—the basic building blocks of a quantum computer. Bits are the smallest unit of information in computing, and the term qubit is short for “quantum bit.”
While early quantum processors have been crucial for demonstrating the potential of quantum computing, realizing globally significant applications will likely require processors with upwards of a million qubits.
Our new research tackles a problem at the heart of scaling up quantum computers: how do we go from controlling just a few qubits to controlling millions? In research published today in Science Advances, we reveal a new technology that may offer a solution.
What Exactly Is a Quantum Computer?
Quantum computers use qubits to hold and process quantum information. Unlike the bits of information in classical computers, qubits make use of the quantum properties of nature, known as “superposition” and “entanglement,” to perform some calculations much faster than their classical counterparts.
Unlike a classical bit, which is represented by either 0 or 1, a qubit can exist in two states (that is, 0 and 1) at the same time. This is what we refer to as a superposition state.
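In the standard notation of quantum mechanics (a textbook formulation, not anything specific to our device), a superposition is written as a weighted sum of the two basis states:

```latex
% A single qubit in superposition: a weighted combination of 0 and 1.
% Measuring it yields 0 with probability |alpha|^2 and 1 with |beta|^2.
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1
\]
```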
Demonstrations by Google and others have shown that even current, early-stage quantum computers can outperform the most powerful supercomputers on the planet for a highly specialized (albeit not particularly useful) task—reaching a milestone we call quantum supremacy.
Google’s quantum computer, built from superconducting electrical circuits, had just 53 qubits and was cooled to a temperature close to -273℃ in a high-tech refrigerator. This extreme temperature is needed to remove heat, which can introduce errors to the fragile qubits. While such demonstrations are important, the challenge now is to build quantum processors with many more qubits.
Major efforts are underway at UNSW Sydney to make quantum computers from the same material used in everyday computer chips: silicon. A conventional silicon chip is thumbnail-sized and packs in several billion bits, so the prospect of using this technology to build a quantum computer is compelling.
The Control Problem
In silicon quantum processors, information is stored in individual electrons, which are trapped beneath small electrodes at the chip’s surface. Specifically, the qubit is encoded in the electron’s spin, which can be pictured as a small compass inside the electron. The needle of the compass can point north or south, representing the 0 and 1 states.
To set a qubit in a superposition state (both 0 and 1), an operation that occurs in all quantum computations, a control signal must be directed to the desired qubit. For qubits in silicon, this control signal is in the form of a microwave field, much like the ones used to carry phone calls over a 5G network. The microwaves interact with the electron and cause its spin (compass needle) to rotate.
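The physics behind this is standard electron spin resonance (quoted here as a textbook relation, not our device’s specific parameters): the spin responds only when the microwave frequency matches its natural precession frequency, which is set by the static magnetic field.

```latex
% Resonance condition for an electron spin in a static field B_0, and
% the Rabi frequency at which a resonant microwave field B_1 rotates it:
\[
  h f_0 = g\,\mu_B B_0,
  \qquad
  f_{\mathrm{R}} = \frac{g\,\mu_B B_1}{2h}
\]
```

Here $g$ is the electron g-factor and $\mu_B$ the Bohr magneton; the stronger the microwave field $B_1$, the faster the spin’s compass needle rotates.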
Currently, each qubit requires its own microwave control field. It is delivered to the quantum chip through a cable running from room temperature down to the bottom of the refrigerator at close to -273 degrees Celsius. Each cable brings heat with it, which must be removed before it reaches the quantum processor.
At around 50 qubits, which is state-of-the-art today, this is difficult but manageable. Current refrigerator technology can cope with the cable heat load. However, it represents a huge hurdle if we’re to use systems with a million qubits or more.
The Solution Is ‘Global’ Control
An elegant solution to the challenge of how to deliver control signals to millions of spin qubits was proposed in the late 1990s. The idea of “global control” was simple: broadcast a single microwave control field across the entire quantum processor.
Voltage pulses can be applied locally to qubit electrodes to make the individual qubits interact with the global field (and produce superposition states).
It’s much easier to generate such voltage pulses on-chip than it is to generate multiple microwave fields. The solution requires only a single control cable and removes the need for obtrusive on-chip microwave control circuitry.
For more than two decades global control in quantum computers remained an idea. Researchers could not devise a suitable technology that could be integrated with a quantum chip and generate microwave fields at suitably low powers.
In our work we show that a component known as a dielectric resonator could finally allow this. The dielectric resonator is a small, transparent crystal which traps microwaves for a short period of time.
The trapping of microwaves, a phenomenon known as resonance, allows them to interact with the spin qubits longer and greatly reduces the power of microwaves needed to generate the control field. This was vital to operating the technology inside the refrigerator.
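Why does resonance cut the power so dramatically? A standard resonator scaling (general background, not a measured figure from our study) makes the point: the drive field a resonator sustains grows with its quality factor $Q$, so the input power needed for a given field drops in proportion.

```latex
% Approximate scaling for the microwave magnetic field B_1 inside a
% resonator with quality factor Q and mode volume V, driven at
% frequency nu with input power P:
\[
  B_1 \propto \sqrt{\frac{Q\,P}{\nu\,V}}
  \quad\Longrightarrow\quad
  P \propto \frac{B_1^2\,\nu\,V}{Q}
\]
```

A high-$Q$, small-volume dielectric resonator therefore reaches the same drive field at a small fraction of the input power, depositing far less heat in the refrigerator.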
In our experiment, we used the dielectric resonator to generate a control field over an area that could contain up to four million qubits. The quantum chip used in this demonstration was a device with two qubits. We were able to show the microwaves produced by the crystal could flip the spin state of each one.
The Path to a Full-Scale Quantum Computer
There is still work to be done before this technology is up to the task of controlling a million qubits. For our study, we managed to flip the state of the qubits, but not yet produce arbitrary superposition states.
Experiments are ongoing to demonstrate this critical capability. We’ll also need to further study the impact of the dielectric resonator on other aspects of the quantum processor.
That said, we believe these engineering challenges will ultimately be surmountable—clearing one of the greatest hurdles to realizing a large-scale spin-based quantum computer.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Serwan Asaad/UNSW, Author provided
#439592 Robot Shows How Simple Swimming Can Be
Lots of robots use bioinspiration in their design. Humanoids, quadrupeds, snake robots—if an animal has figured out a clever way of doing something, odds are there's a robot that's tried to duplicate it. But animals are often just a little too clever for the robots we build to mimic them, which is why researchers at the Swiss Federal Institute of Technology Lausanne (EPFL) are using robots to learn about how animals themselves do what they do. In a paper published today in Science Robotics, roboticists from EPFL's Biorobotics Laboratory introduce a robotic eel that leverages sensory feedback from the water it swims through to coordinate its motion without the need for central control, suggesting a path towards simpler, more robust mobile robots.
The robotic eel—called AgnathaX—is a descendant of
AmphiBot, which has been swimming around at EPFL for something like two decades. AmphiBot's elegant motion in the water has come from the equivalent of what are called central pattern generators (CPGs), sequences of neural circuits (the biological kind) that generate the kinds of rhythms you see in eel-like animals that rely on oscillations to move. It's possible to replicate these biological circuits using newfangled electronic circuits and software, leading to the same kind of smooth (albeit robotic) motion in AmphiBot.
Biological researchers had pretty much decided that CPGs explained the extent of wiggly animal motion, until it was discovered you can chop an eel's spinal cord in half, and it'll somehow maintain its coordinated undulatory swimming performance. Which is kinda nuts, right? Obviously, something else must be going on, but trying to futz with eels to figure out exactly what it was isn't, I would guess, pleasant for either researchers or their test subjects, which is where the robots come in. We can't make robotic eels that are exactly like the real thing, but we can duplicate some of their sensing and control systems well enough to understand how they do what they do.
AgnathaX exhibits the same smooth motions as the original version of AmphiBot, but it does so without having to rely on centralized programming that would be the equivalent of a biological CPG. Instead, it uses skin sensors that can detect pressure changes in the water around it, a feature also found on actual eels. By hooking these pressure sensors up to AgnathaX's motorized segments, the robot can generate swimming motions even if its segments aren't connected with each other—without a centralized nervous system, in other words. This spontaneous syncing up of disconnected moving elements is called entrainment, and the best demo of it that I've seen is this one, using metronomes:
[Video: synchronizing metronomes, UCLA Physics]
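If you want to play with entrainment yourself, here's a minimal simulation (my own illustrative sketch, nothing to do with AgnathaX's actual controller) using the classic Kuramoto model: a chain of oscillators with slightly different natural frequencies, each talking only to its nearest neighbors, pulls itself into sync with no central clock.

```python
import numpy as np

# Kuramoto model: a chain of N oscillators with slightly different
# natural frequencies, each coupled only to its nearest neighbors.
N, K, dt, steps = 10, 2.0, 0.01, 5000
rng = np.random.default_rng(0)
omega = 2 * np.pi * (1.0 + 0.05 * rng.standard_normal(N))  # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)                       # random initial phases

for _ in range(steps):
    pull = np.zeros(N)
    pull[:-1] += np.sin(theta[1:] - theta[:-1])  # pull from right neighbor
    pull[1:]  += np.sin(theta[:-1] - theta[1:])  # pull from left neighbor
    theta += (omega + K * pull) * dt

# Order parameter r: 0 means incoherent, 1 means fully synchronized.
r = abs(np.mean(np.exp(1j * theta)))
print(f"synchronization r = {r:.3f}")  # approaches 1 as the phases lock
```

In AgnathaX, the analogous neighbor-to-neighbor coupling comes through the water itself: each segment's motion changes the local pressure field that its neighbors feel.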
The reason why this isn't just neat but also useful is that it provides a secondary method of control for robots. If the centralized control system of your swimming robot gets busted, you can rely on this water pressure-mediated local control to generate a swimming motion. And there are applications for modular robots as well, since you can potentially create a swimming robot out of a bunch of different physically connected modules that don't even have to talk to each other.
For more details, we spoke with
Robin Thandiackal and Kamilo Melo at EPFL, first authors on the Science Robotics paper.
IEEE Spectrum: Why do you need a robot to do this kind of research?
Thandiackal and Melo: From a more general perspective, with this kind of research we learn and understand how a system works by building it. This then allows us to modify and investigate the different components and understand their contribution to the system as a whole.
In a more specific context, it is difficult to separate the different components of the nervous system with respect to locomotion within a live animal. The central components are especially difficult to remove, and this is where a robot or a simulated model becomes useful. We used both in our study. The robot has the unique advantage of operating within the real physics of the water, whereas these dynamics are approximated in simulation. However, we are confident in our simulations too because we validated them against the robot.
How is the robot model likely to be different from real animals? What can't you figure out using the robot, and how much could the robot be upgraded to fill that gap?
Thandiackal and Melo: The robot is by no means an exact copy of a real animal, only a first approximation. Instead, from observing and previous knowledge of real animals, we were able to create a mathematical representation of the neuromechanical control in real animals, and we implemented this mathematical representation of locomotion control on the robot to create a model. As the robot interacts with the real physics of undulatory swimming, we put a great effort in informing our design with the morphological and physiological characteristics of the real animal. This for example accounts for the scaling, the morphology and aspect ratio of the robot with respect to undulatory animals, and the muscle model that we used to approximately represent the viscoelastic characteristics of real muscles with a rotational joint.
Upgrading the robot is not going to be making it more “biological.” Again, the robot is part of the model, not a copy of the real biology. For the sake of this project, the robot was sufficient, and only a few things were missing in our design. You can even add other types of sensors and use the same robot base. However, if we would like to improve our robot for the future, it would be interesting to collect other fluid information like the surrounding fluid speed simultaneously with the force sensing, or to measure hydrodynamic pressure directly. Finally, we aim to test our model of undulatory swimming using a robot with three-dimensional capabilities, something which we are currently working on.
What aspects of the function of a nervous system to generate undulatory motion in water aren't redundant with the force feedback from motion that you describe?
Thandiackal and Melo: Apart from the generation of oscillations and intersegmental coupling, which we found can be redundantly generated by the force feedback, the central nervous system still provides unique higher level commands like steering to regulate swimming direction. These commands typically originate in the brain (supraspinal) and are at the same time influenced by sensory signals. In many fish the lateral line organ, which directly connects to the brain, helps to inform the brain, e.g., to maintain position (rheotaxis) under variable flow conditions.
How can this work lead to robots that are more resilient?
Thandiackal and Melo: Robots that have our complete control architecture, with both peripheral and central components, are remarkably fault-tolerant and robust against damage in their sensors, communication buses, and control circuits. In principle, the robot should have the same fault-tolerance as demonstrated in simulation, with the ability to swim despite missing sensors, broken communication bus, or broken local microcontroller. Our control architecture offers very graceful degradation of swimming ability (as opposed to catastrophic failure).
Why is this discovery potentially important for modular robots?
Thandiackal and Melo: We showed that undulatory swimming can emerge in a self-organized manner by incorporating local force feedback without explicit communication between modules. In principle, we could create swimming robots of different sizes by simply attaching independent modules in a chain (e.g., without a communication bus between them). This can be useful for the design of modular swimming units with a high degree of reconfigurability and robustness, e.g. for search and rescue missions or environmental monitoring. Furthermore, the custom-designed sensing units provide a new way of accurate force sensing in water along the entirety of the body. We therefore hope that such units can help swimming robots to navigate through flow perturbations and enable advanced maneuvers in unsteady flows.
#439053 Bipedal Robots Are Learning To Move With ...
Most humans are bipeds, but even the best of us are really only bipeds until things get tricky. While our legs may be our primary mobility system, there are lots of situations in which we leverage our arms as well, either passively to keep balance or actively when we put out a hand to steady ourselves on a nearby object. And despite how unstable bipedal robots tend to be, using anything besides legs for mobility has been a challenge in both software and hardware, a significant limitation in highly unstructured environments.
Roboticists from TUM in Germany (with support from the German Research Foundation) have recently given their humanoid robot LOLA some major upgrades to make this kind of multi-contact locomotion possible. While it’s still in the early stages, it’s already some of the most human-like bipedal locomotion we’ve seen.
It’s certainly possible for bipedal robots to walk over challenging terrain without using limbs for support, but I’m sure you can think of lots of times where using your arms to assist with your own bipedal mobility was a requirement. It’s not a requirement because your leg strength or coordination or sense of balance is bad, necessarily. It’s just that sometimes, you might find yourself walking across something that’s highly unstable or in a situation where the consequences of a stumble are exceptionally high. And it may not even matter how much sensing you do beforehand, and how careful you are with your footstep planning: there are limits to how much you can know about your environment beforehand, and that can result in having a really bad time of it. This is why using multi-contact locomotion, whether it’s planned in advance or not, is a useful skill for humans, and should be for robots, too.
As the video notes (and props for being explicit up front about it), this isn’t yet fully autonomous behavior, with foot positions and arm contact points set by hand in advance. But it’s not much of a stretch to see how everything could be done autonomously, since one of the really hard parts (using multiple contact points to dynamically balance a moving robot) is being done onboard and in real time.
Getting LOLA to be able to do this required a major overhaul in hardware as well as software. And Philipp Seiwald, who works with LOLA at TUM, was able to tell us more about it.
IEEE Spectrum: Can you summarize the changes to LOLA’s hardware that are required for multi-contact locomotion?
Philipp Seiwald: The original version of LOLA has been designed for fast biped walking. Although it had two arms, they were not meant to get into contact with the environment but rather to compensate for the dynamic effects of the feet during fast walking. Also, the torso had a relatively simple design that was fine for its original purpose; however, it was not conceived to withstand the high loads coming from the hands during multi-contact maneuvers. Thus, we redesigned the complete upper body of LOLA from scratch. Starting from the pelvis, the strength and stiffness of the torso have been increased. We used the finite element method to optimize critical parts to obtain maximum strength at minimum weight. Moreover, we added additional degrees of freedom to the arms to increase the hands' reachable workspace. The kinematic topology of the arms, i.e., the arrangement of joints and link lengths, has been obtained from an optimization that takes typical multi-contact scenarios into account.
Why is this an important problem for bipedal humanoid robots?
Maintaining balance during locomotion can be considered the primary goal of legged robots. Naturally, this task is more challenging for bipeds when compared to robots with four or even more legs. Although current high-end prototypes show impressive progress, humanoid robots still do not have the robustness and versatility they need for most real-world applications. With our research, we try to contribute to this field and help to push the limits further. Recently, we showed our latest work on walking over uneven terrain without multi-contact support. Although the robustness is already high, there still exist scenarios, such as walking on loose objects, where the robot's stabilization fails when using only foot contacts. The use of additional hand-environment support during this (comparatively) fast walking allows a further significant increase in robustness, i.e., the robot's capability to compensate for disturbances, modeling errors, or inaccurate sensor input. Besides stabilization on uneven terrain, multi-contact locomotion also enables more complex motions, e.g., stepping over a tall obstacle or toe-only contacts, as shown in our latest multi-contact video.
How can LOLA decide whether a surface is suitable for multi-contact locomotion?
LOLA’s visual perception system is currently developed by our project partners from the Chair for Computer Aided Medical Procedures & Augmented Reality at the TUM. This system relies on a novel semantic Simultaneous Localization and Mapping (SLAM) pipeline that can robustly extract the scene's semantic components (like floor, walls, and objects therein) by merging multiple observations from different viewpoints and by inferring therefrom the underlying scene graph. This provides a reliable estimate of which scene parts can be used to support the locomotion, based on the assumption that certain structural elements such as walls are fixed, while chairs, for example, are not.
Also, the team plans to develop a specific dataset with annotations further describing the attributes of the object (such as roughness of the surface or its softness) and that will be used to master multi-contact locomotion in even more complex scenes. As of today, the vision and navigation system is not finished yet; thus, in our latest video, we used pre-defined footholds and contact points for the hands. However, within our collaboration, we are working towards a fully integrated and autonomous system.
Is LOLA capable of both proactive and reactive multi-contact locomotion?
The software framework of LOLA has a hierarchical structure. On the highest level, the vision system generates an environment model and estimates the 6D-pose of the robot in the scene. The walking pattern generator then uses this information to plan a dynamically feasible future motion that will lead LOLA to a target position defined by the user. On a lower level, the stabilization module modifies this plan to compensate for model errors or any kind of disturbance and keep overall balance. So our approach currently focuses on proactive multi-contact locomotion. However, we also plan to work on a more reactive behavior such that additional hand support can also be triggered by an unexpected disturbance instead of being planned in advance.
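For readers who think in code, the hierarchy Seiwald describes might be sketched like this (a hypothetical paraphrase; every class and method name here is invented for illustration, not taken from LOLA's software):

```python
# Hypothetical sketch of a hierarchical locomotion stack.
# All names are illustrative, not from LOLA's actual codebase.
class VisionSystem:
    def perceive(self):
        """Build an environment model and estimate the robot's 6D pose."""
        raise NotImplementedError

class WalkingPatternGenerator:
    def plan(self, env_model, pose, target):
        """Plan a dynamically feasible motion toward the user's target."""
        raise NotImplementedError

class Stabilizer:
    def adjust(self, planned_motion, sensor_data):
        """Modify the plan online to reject disturbances and model error."""
        raise NotImplementedError

def control_step(vision, planner, stabilizer, target, sensor_data):
    env_model, pose = vision.perceive()              # highest level: perception
    motion = planner.plan(env_model, pose, target)   # mid level: gait planning
    return stabilizer.adjust(motion, sensor_data)    # lowest level: balance
```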
What are some examples of unique capabilities that you are working towards with LOLA?
One of the main goals for the research with LOLA remains fast, autonomous, and robust locomotion on complex, uneven terrain. We aim to reach a walking speed similar to humans. Currently, LOLA can do multi-contact locomotion and cross uneven terrain at a speed of 1.8 km/h, which is comparably fast for a biped robot but still slow for a human. On flat ground, LOLA's high-end hardware allows it to walk at a relatively high maximum speed of 3.38 km/h.
Fully autonomous multi-contact locomotion for a life-sized humanoid robot is a tough task. As algorithms get more complex, computation time increases, which often results in offline motion planning methods. For LOLA, we restrict ourselves to gaited multi-contact locomotion, which means that we try to preserve the core characteristics of bipedal gait and use the arms only for assistance. This allows us to use simplified models of the robot which lead to very efficient algorithms running in real-time and fully onboard.
A long-term scientific goal with LOLA is to understand essential components and control policies of human walking. LOLA's leg kinematics is relatively similar to that of the human body. Together with scientists from kinesiology, we try to identify similarities and differences between observed human walking and LOLA’s “engineered” walking gait. We hope this research leads, on the one hand, to new ideas for the control of bipeds, and on the other hand, shows via experiments on bipeds whether biomechanical models of the human gait are correctly understood. For a comparison of control policies on uneven terrain, LOLA must be able to walk at comparable speeds, which also motivates our research on fast and robust walking.
While it makes sense why the researchers are using LOLA’s arms primarily to assist with a conventional biped gait, looking ahead a bit it’s interesting to think about how robots that we typically consider to be bipeds could potentially leverage their limbs for mobility in decidedly non-human ways.
We’re used to legged robots being one particular morphology, I guess because associating them with either humans or dogs or whatever is just a comfortable way to do it, but there’s no particular reason why a robot with four limbs has to choose between being a quadruped and being a biped with arms, or some hybrid between the two, depending on what its task is. The research being done with LOLA could be a step in that direction, and maybe a hand on the wall in that direction, too.
#438982 Quantum Computing and Reinforcement ...
Deep reinforcement learning is having a superstar moment.
Powering smarter robots. Simulating human neural networks. Trouncing physicians at medical diagnoses and crushing humanity’s best gamers at Go and Atari. While far from achieving the flexible, quick thinking that comes naturally to humans, this powerful machine learning idea seems unstoppable as a harbinger of better thinking machines.
Except there’s a massive roadblock: they take forever to run. Because the concept behind these algorithms is based on trial and error, a reinforcement learning AI “agent” only learns after being rewarded for its correct decisions. For complex problems, the time it takes an AI agent to try and fail to learn a solution can quickly become untenable.
But what if you could try multiple solutions at once?
This week, an international collaboration led by Dr. Philip Walther at the University of Vienna took the “classic” concept of reinforcement learning and gave it a quantum spin. They designed a hybrid AI that relies on both quantum and run-of-the-mill classic computing, and showed that—thanks to quantum quirkiness—it could simultaneously screen a handful of different ways to solve a problem.
The result is a reinforcement learning AI that learned over 60 percent faster than its non-quantum-enabled peers. This is one of the first tests showing that adding quantum computing can speed up the actual learning process of an AI agent, the authors explained.
Although only challenged with a “toy problem” in the study, the hybrid AI, once scaled, could impact real-world problems such as building an efficient quantum internet. The setup “could readily be integrated within future large-scale quantum communication networks,” the authors wrote.
The Bottleneck
Learning from trial and error comes intuitively to our brains.
Say you’re trying to navigate a new convoluted campground without a map. The goal is to get from the communal bathroom back to your campsite. Dead ends and confusing loops abound. We tackle the problem by deciding to turn either left or right at every branch in the road. One will get us closer to the goal; the other leads to a half hour of walking in circles. Eventually, our brain chemistry rewards correct decisions, so we gradually learn the correct route. (If you’re wondering…yeah, true story.)
Reinforcement learning AI agents operate in a similar trial-and-error way. As a problem becomes more complex, the number of trials—and the time each one takes—skyrockets.
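If you like your analogies in code, here’s a minimal reward-learning sketch (a generic, textbook-style toy, not the study’s agent): at each junction of a toy campground, the agent learns the correct turn from reward alone.

```python
import random

# A toy "campground": at each of 4 junctions, one turn (0=left, 1=right)
# leads onward; picking the hidden correct turn yields a reward of 1.
correct = [0, 1, 1, 0]             # hidden correct turn at each junction
Q = [[0.0, 0.0] for _ in correct]  # learned value of each junction/action
alpha, epsilon = 0.5, 0.2          # learning rate, exploration rate

for episode in range(500):
    for junction in range(len(correct)):
        # Explore occasionally; otherwise exploit the current estimates.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = max((0, 1), key=lambda a: Q[junction][a])
        reward = 1.0 if action == correct[junction] else 0.0
        # Trial and error: only the reward signal updates the estimate.
        Q[junction][action] += alpha * (reward - Q[junction][action])

print([max((0, 1), key=lambda a: Q[j][a]) for j in range(len(correct))])
# Should recover [0, 1, 1, 0]: the agent has learned the route.
```

Every wrong guess costs a full trial, and each extra junction multiplies the number of routes to rule out; that is the bottleneck the quantum approach targets.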
“Even in a moderately realistic environment, it may simply take too long to rationally respond to a given situation,” explained study author Dr. Hans Briegel at the Universität Innsbruck in Austria, who previously led efforts to speed up AI decision-making using quantum mechanics. If there’s pressure that allows “only a certain time for a response, an agent may then be unable to cope with the situation and to learn at all,” he wrote.
Many attempts have been made to speed up reinforcement learning. Giving the AI agent a short-term “memory.” Tapping into neuromorphic computing, which better resembles the brain. In 2014, Briegel and colleagues showed that a “quantum brain” of sorts can help propel an AI agent’s decision-making process after learning. But speeding up the learning process itself has eluded our best attempts.
The Hybrid AI
The new study went straight for that previously untenable jugular.
The team’s key insight was to tap into the best of both worlds—quantum and classical computing. Rather than building an entire reinforcement learning system using quantum mechanics, they turned to a hybrid approach that could prove to be more practical. Here, the AI agent uses quantum weirdness as it’s trying out new approaches—the “trial” in trial and error. The system then passes the baton to a classical computer to give the AI its reward—or not—based on its performance.
At the heart of the quantum “trial” process is a quirk called superposition. Stay with me. Classical computers encode information in bits, which can represent only one of two states—0 or 1. Quantum mechanics is far weirder, in that photons (particles of light) can simultaneously be both 0 and 1, with a slightly different probability of “leaning towards” one or the other.
This noncommittal oddity is part of what makes quantum computing so powerful. Take our reinforcement learning example of navigating a new campsite. In our classic world, we—and our AI—need to decide between turning left or right at an intersection. In a quantum setup, however, the AI can (in a sense) turn left and right at the same time. So when searching for the correct path back to home base, the quantum system has a leg up in that it can simultaneously explore multiple routes, making it far faster than conventional, consecutive trial and error.
“As a consequence, an agent that can explore its environment in superposition will learn significantly faster than its classical counterpart,” said Briegel.
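In standard quantum notation (the general idea, not the exact state prepared on the team’s photonic chip), “turning left and right at the same time” means putting the choice into an equal superposition:

```latex
% An equal superposition over the two possible actions: a single
% quantum trial carries amplitude along both branches at once.
\[
  |\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|\text{left}\rangle + |\text{right}\rangle\bigr)
\]
```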
It’s not all theory. To test out their idea, the team turned to a programmable chip called a nanophotonic processor. Think of it as a CPU-like computer chip, but it processes particles of light—photons—rather than electricity. These light-powered chips have been a long time in the making. Back in 2017, for example, a team from MIT built a fully optical neural network into an optical chip to bolster deep learning.
The chips aren’t all that exotic. Nanophotonic processors act kind of like eyeglasses: both transform light that passes through them. In the case of glasses, that transformation helps people see better; in the case of a light-based computer chip, it performs computation. Rather than using electrical cables, the chips use “wave guides” to shuttle photons and perform calculations based on their interactions.
The “error” or “reward” part of the new hardware comes from a classical computer. The nanophotonic processor is coupled to a traditional computer, where the latter provides the quantum circuit with feedback—that is, whether to reward a solution or not. This setup, the team explains, allows them to more objectively judge any speed-ups in learning in real time.
In this way, a hybrid reinforcement learning agent alternates between quantum and classical computing, trying out ideas in wibbly-wobbly “multiverse” land while obtaining feedback in grounded, classic physics “normality.”
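As a rough caricature of that alternation (an illustrative simulation only—the real “trial” phase runs on a nanophotonic processor, and the stand-in random sampler below captures none of its quantum speed-up), the loop looks something like this:

```python
import random

# Toy hybrid loop: each round, a "trial" samples an action from the
# agent's current distribution (standing in for the quantum search),
# and classical feedback reinforces whichever action was rewarded.
actions = ["A", "B", "C", "D"]
weights = {a: 1.0 for a in actions}
target = "C"  # the rewarded solution, unknown to the agent

for step in range(1, 10_000):
    total = sum(weights.values())
    guess = random.choices(actions, [weights[a] / total for a in actions])[0]
    if guess == target:               # classical feedback: reward or not
        weights[guess] *= 2.0         # reinforce the rewarded action
    if weights[target] / sum(weights.values()) > 0.95:
        print(f"confident after {step} trials")
        break
```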
A Quantum Boost
In simulations using 10,000 AI agents and actual experimental data from 165 trials, the hybrid approach, when challenged with a more complex problem, showed a clear leg up.
The key word is “complex.” The team found that if an AI agent has a high chance of figuring out the solution anyway—as for a simple problem—then classical computing works pretty well. The quantum advantage blossoms when the task becomes more complex or difficult, allowing quantum mechanics to fully flex its superposition muscles. For these problems, the hybrid AI was 63 percent faster at learning a solution compared to traditional reinforcement learning, decreasing its learning effort from 270 guesses to 100.
Now that scientists have shown a quantum boost for reinforcement learning speeds, the race for next-generation computing is even more lit. Photonics hardware required for long-range light-based communications is rapidly shrinking, while improving signal quality. The partial-quantum setup could “aid specifically in problems where frequent search is needed, for example, network routing problems” of the kind that keep the internet running smoothly, the authors wrote. With a quantum boost, reinforcement learning may be able to tackle far more complex problems—those in the real world—than currently possible.
“We are just at the beginning of understanding the possibilities of quantum artificial intelligence,” said lead author Walther.
Image Credit: Oleg Gamulinskiy from Pixabay