
#439066 Video Friday: Festo’s BionicSwift

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
Let us know if you have suggestions for next week, and enjoy today's videos.

Festo's Bionic Learning Network for 2021 presents a flock of BionicSwifts.

To execute the flight maneuvers as true to life as possible, the wings are modeled on the plumage of birds. The individual lamellae are made of an ultralight, flexible but very robust foam and lie on top of each other like shingles. Connected to a carbon quill, they are attached to the actual hand and arm wings as in the natural model.

During the wing upstroke, the individual lamellae fan out so that air can flow through the wing. This means that the birds need less force to pull the wing up. During the downstroke, the lamellae close up so that the birds can generate more power to fly. Due to this close-to-nature replica of the wings, the BionicSwifts have a better flight profile than previous wing-beating drives.

[ Festo ]

While we've seen a wide variety of COVID-motivated disinfecting robots, they're usually using either ultraviolet light or a chemical fog. This isn't the way that humans clean—we wipe stuff down, which gets rid of surface dirt and disinfects at the same time. Fraunhofer has been working on a mobile manipulator that can clean in the same ways that we do.

It's quite the technical challenge, but it has the potential to be both more efficient and more effective.

[ Fraunhofer ]

In recent years, robots have gained artificial vision, touch, and even smell. “Researchers have been giving robots human-like perception,” says MIT Associate Professor Fadel Adib. In a new paper, Adib’s team is pushing the technology a step further. “We’re trying to give robots superhuman perception,” he says. The researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF-Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view.
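The clip doesn't go into how the two sensing modes are combined, but the basic idea is easy to sketch. Here's a minimal, hypothetical Python illustration (the function names, confidence threshold, and data layout are assumptions for illustration, not MIT's actual RF-Grasp code): rely on the RF tag's coarse through-wall fix while the object is hidden, and switch to the camera's estimate once the object is visible and the detection is confident.

```python
# Hypothetical sketch of RF + vision fusion for locating an occluded object.
# An RF tag gives a coarse 3D location even through obstacles; camera-based
# detection refines the target once the object comes into view.
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Estimate:
    position: Vec3
    confidence: float  # 0..1

def fuse_rf_and_vision(rf: Estimate, vision: Optional[Estimate]) -> Vec3:
    """Prefer the visual estimate when the object is visible and confident;
    otherwise fall back to the through-wall RF estimate."""
    if vision is not None and vision.confidence > 0.5:  # threshold is an assumption
        return vision.position
    return rf.position

# Example: the object is hidden behind a box, so only RF localization is available.
rf_fix = Estimate(position=(1.2, 0.4, 0.1), confidence=0.8)
print(fuse_rf_and_vision(rf_fix, vision=None))  # -> (1.2, 0.4, 0.1)
```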

[ MIT ]

Ingenuity is now scheduled to fly on April 11.

[ JPL ]

The legendary Zenta is back after a two-year YouTube hiatus with “a kind of freaky furry hexapod bunny creature.”

[ Zenta ]

It is with great pride and excitement that the South Australia Police announce a new expansion to their kennel by introducing three new Police Dog (PD) recruits. These dogs have been purposely targeted to bring a whole new range of dog operational capabilities known as the ‘small area urban search and guided evacuation’ dogs. Police have been working closely with specialist vets and dog trainers to ascertain if the lightweight dogs could be transported safely by drones and released into hard-to-access areas where at the moment the larger PDs just simply cannot get in due to their size.

[ SA Police ]

SoftBank may not have Spot cheerleading robots for their baseball team anymore, but they've more than made up for it with a full century of Peppers. And one dude doing the robot.

[ SoftBank ]

MAB Robotics is a Polish company developing walking robots for inspection, and here's a prototype they've been working on.

[ MAB Robotics ]

Thanks Jakub!

DoraNose: Smell your way to a better tomorrow.

[ Dorabot ]

Our robots need to learn how to cope with their new neighbors, and we have just the solution for this: the egg detector! Using cutting-edge AI, it provides incredible precision in detecting a vast variety of eggs. We have deployed this new feature on Boston Dynamics Spot, one of our fleet's robots. It can now detect eggs with its cameras and avoid them on its autonomous missions.

[ Energy Robotics ]

When dropping a squishy robot from an airplane 1,000 feet up, make sure that it lands as close to people's cars as possible.

Now do it from orbit!

[ Squishy Robotics ]

An autonomous robot that is able to physically guide humans through narrow and cluttered spaces could be a big boon to the visually impaired. Most prior robotic guiding systems are based on wheeled platforms with large bases and actuated rigid guiding canes. The large bases and the actuated arms prevent these prior approaches from operating in narrow and cluttered environments. We propose a method that introduces a quadrupedal robot with a leash to enable the robot-guiding-human system to change its intrinsic dimension (by letting the leash go slack) in order to fit into narrow spaces.
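The clever bit is the leash: letting it go slack effectively changes how much space the human-robot pair needs. As a rough sketch (not the Hybrid Robotics controller; the widths and mode names below are illustrative assumptions), the decision might look like this:

```python
# Toy sketch: the human-robot pair shrinks its effective footprint by letting
# the leash go slack, so the human walks in line behind the robot through gaps.

SIDE_BY_SIDE_WIDTH = 1.2   # meters needed to walk abreast (assumed)
SINGLE_FILE_WIDTH = 0.7    # meters needed walking in line (assumed)

def leash_mode(passage_width_m: float) -> str:
    if passage_width_m >= SIDE_BY_SIDE_WIDTH:
        return "taut"     # leash taut: robot guides alongside the person
    if passage_width_m >= SINGLE_FILE_WIDTH:
        return "slack"    # leash slack: pair reconfigures to single file
    return "stop"         # too narrow even in single file

for width in (2.0, 0.9, 0.5):
    print(width, "->", leash_mode(width))
```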

[ Hybrid Robotics ]

How to prove that your drone is waterproof.

[ UNL ]

Well this ought to be pretty good once it gets out of simulation.

[ Hybrid Robotics ]

MIDAS is Aurora’s AI-enabled, multi-rotor sUAV outfitted with optical sensors and a customized payload that can defeat multiple small UAVs per flight with low-collateral effects.

[ Aurora ]

The robots of the DFKI have the advantage of being able to reach extreme environments: they can be used for decontamination purposes in high-risk areas or to inspect and maintain underwater structures, for which they are tested in the North Sea near Heligoland.

[ DFKI ]

After years of trying, 60 Minutes cameras finally get a peek inside the workshop at Boston Dynamics, where robots move in ways once only thought possible in movies. Anderson Cooper reports.

[ 60 Minutes ]

In 2007, Noel Sharkey stated that “we are sleepwalking into a brave new world where robots decide who, where and when to kill.” Since then, thousands of AI and robotics researchers have joined his calls to regulate “killer robots.” But sometime this year, Turkey will deploy fully autonomous home-built kamikaze drones on its border with Syria. What are the ethical choices we need to consider? Will we end up in an episode of Black Mirror? Or is the UN listening to calls and starting the process of regulating this space? Prof. Toby Walsh will discuss this important issue, considering where we are and where we need to go.

[ ICRA 2020 ]

In the second session of HAI's spring conference, artists and technologists discussed how technology can enhance creativity, reimagine meaning, and support racial and social justice. The conference, called “Intelligence Augmentation: AI Empowering People to Solve Global Challenges,” took place on 25 March 2021.

[ Stanford HAI ]

This spring 2021 GRASP SFI comes from Monroe Kennedy III at Stanford University, on “Considerations for Human-Robot Collaboration.”

The field of robotics has evolved over the past few decades. We’ve seen robots progress from the automation of repetitive tasks in manufacturing to the autonomy of mobilizing in unstructured environments to the cooperation of swarm robots that are centralized or decentralized. These abilities have required advances in robotic hardware, modeling, and artificial intelligence. The next frontier is robots collaborating in complex tasks with human teammates, in environments traditionally configured for humans. While solutions to this challenge must utilize all the advances of robotics, the human element adds a unique aspect that must be addressed. Collaborating with a human teammate means that the robot must have a contextual understanding of the task as well as all participant’s roles. We will discuss what constitutes an effective teammate and how we can capture this behavior in a robotic collaborator.

[ UPenn ]


#439053 Bipedal Robots Are Learning To Move With ...

Most humans are bipeds, but even the best of us are really only bipeds until things get tricky. While our legs may be our primary mobility system, there are lots of situations in which we leverage our arms as well, either passively to keep balance or actively when we put out a hand to steady ourselves on a nearby object. And despite how unstable bipedal robots tend to be, using anything besides legs for mobility has been a challenge in both software and hardware, a significant limitation in highly unstructured environments.

Roboticists from TUM in Germany (with support from the German Research Foundation) have recently given their humanoid robot LOLA some major upgrades to make this kind of multi-contact locomotion possible. While it’s still in the early stages, it’s already some of the most human-like bipedal locomotion we’ve seen.

It’s certainly possible for bipedal robots to walk over challenging terrain without using limbs for support, but I’m sure you can think of lots of times where using your arms to assist with your own bipedal mobility was a requirement. It’s not a requirement because your leg strength or coordination or sense of balance is bad, necessarily. It’s just that sometimes, you might find yourself walking across something that’s highly unstable or in a situation where the consequences of a stumble are exceptionally high. And it may not even matter how much sensing you do beforehand, and how careful you are with your footstep planning: there are limits to how much you can know about your environment beforehand, and that can result in having a really bad time of it. This is why using multi-contact locomotion, whether it’s planned in advance or not, is a useful skill for humans, and should be for robots, too.

As the video notes (and props for being explicit up front about it), this isn’t yet fully autonomous behavior, with foot positions and arm contact points set by hand in advance. But it’s not much of a stretch to see how everything could be done autonomously, since one of the really hard parts (using multiple contact points to dynamically balance a moving robot) is being done onboard and in real time.

Getting LOLA to be able to do this required a major overhaul in hardware as well as software. And Philipp Seiwald, who works with LOLA at TUM, was able to tell us more about it.

IEEE Spectrum: Can you summarize the changes to LOLA’s hardware that are required for multi-contact locomotion?

Philipp Seiwald: The original version of LOLA has been designed for fast biped walking. Although it had two arms, they were not meant to get into contact with the environment but rather to compensate for the dynamic effects of the feet during fast walking. Also, the torso had a relatively simple design that was fine for its original purpose; however, it was not conceived to withstand the high loads coming from the hands during multi-contact maneuvers. Thus, we redesigned the complete upper body of LOLA from scratch. Starting from the pelvis, the strength and stiffness of the torso have been increased. We used the finite element method to optimize critical parts to obtain maximum strength at minimum weight. Moreover, we added additional degrees of freedom to the arms to increase the hands' reachable workspace. The kinematic topology of the arms, i.e., the arrangement of joints and link lengths, has been obtained from an optimization that takes typical multi-contact scenarios into account.
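To get a feel for what an optimization over “typical multi-contact scenarios” means in practice, here's a deliberately toy version in Python: a planar two-link arm whose link lengths are chosen by grid search so that a handful of assumed hand-support points fall inside the reachable workspace. LOLA's actual optimization is three-dimensional and far more detailed; every number below is made up for illustration.

```python
# Toy trade-off behind optimizing arm kinematics for multi-contact support:
# choose link lengths so typical hand-support points are reachable.
import math

contact_points = [(0.45, 0.20), (0.60, 0.00), (0.30, 0.55), (0.70, 0.35)]  # assumed, meters

def reachable(l1: float, l2: float, point) -> bool:
    # A 2-link planar arm reaches any point whose distance from the shoulder
    # lies between |l1 - l2| and l1 + l2.
    d = math.hypot(*point)
    return abs(l1 - l2) <= d <= l1 + l2

best = None
for l1 in [x / 100 for x in range(20, 61, 5)]:   # candidate upper-arm lengths
    l2 = 0.8 - l1                                # fixed total arm length (assumed)
    covered = sum(reachable(l1, l2, p) for p in contact_points)
    if best is None or covered > best[0]:
        best = (covered, l1, l2)

print(f"covered {best[0]}/{len(contact_points)} points with l1={best[1]:.2f} m, l2={best[2]:.2f} m")
```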

Why is this an important problem for bipedal humanoid robots?

Maintaining balance during locomotion can be considered the primary goal of legged robots. Naturally, this task is more challenging for bipeds when compared to robots with four or even more legs. Although current high-end prototypes show impressive progress, humanoid robots still do not have the robustness and versatility they need for most real-world applications. With our research, we try to contribute to this field and help to push the limits further. Recently, we showed our latest work on walking over uneven terrain without multi-contact support. Although the robustness is already high, there still exist scenarios, such as walking on loose objects, where the robot's stabilization fails when using only foot contacts. The use of additional hand-environment support during this (comparatively) fast walking allows a further significant increase in robustness, i.e., the robot's capability to compensate for disturbances, modeling errors, or inaccurate sensor input. Besides stabilization on uneven terrain, multi-contact locomotion also enables more complex motions, e.g., stepping over a tall obstacle or toe-only contacts, as shown in our latest multi-contact video.

How can LOLA decide whether a surface is suitable for multi-contact locomotion?

LOLA’s visual perception system is currently developed by our project partners from the Chair for Computer Aided Medical Procedures & Augmented Reality at the TUM. This system relies on a novel semantic Simultaneous Localization and Mapping (SLAM) pipeline that can robustly extract the scene's semantic components (like floor, walls, and objects therein) by merging multiple observations from different viewpoints and by inferring therefrom the underlying scene graph. This provides a reliable estimate of which scene parts can be used to support the locomotion, based on the assumption that certain structural elements such as walls are fixed, while chairs, for example, are not.

Also, the team plans to develop a specific dataset with annotations further describing the attributes of objects (such as surface roughness or softness), which will be used to master multi-contact locomotion in even more complex scenes. As of today, the vision and navigation system is not finished yet; thus, in our latest video, we used pre-defined footholds and contact points for the hands. However, within our collaboration, we are working towards a fully integrated and autonomous system.
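The core filtering idea described above, treating only structurally fixed scene elements as candidate supports, is simple to sketch. Here's a minimal illustration (the labels and the graph format are assumptions, not the actual TUM pipeline):

```python
# Keep only scene-graph elements assumed to be structurally fixed (walls,
# railings, floor) as candidate support surfaces; reject movable objects.
scene_graph = {
    "wall_01": {"label": "wall", "pose": (1.0, 0.0, 1.2)},
    "chair_03": {"label": "chair", "pose": (0.6, 0.4, 0.5)},
    "railing_02": {"label": "railing", "pose": (0.0, 0.9, 1.0)},
    "floor": {"label": "floor", "pose": (0.0, 0.0, 0.0)},
}

FIXED_LABELS = {"wall", "railing", "floor"}  # assumed to be immovable

support_candidates = {
    name: node for name, node in scene_graph.items()
    if node["label"] in FIXED_LABELS
}
print(sorted(support_candidates))  # ['floor', 'railing_02', 'wall_01']
```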

Is LOLA capable of both proactive and reactive multi-contact locomotion?

The software framework of LOLA has a hierarchical structure. On the highest level, the vision system generates an environment model and estimates the 6D-pose of the robot in the scene. The walking pattern generator then uses this information to plan a dynamically feasible future motion that will lead LOLA to a target position defined by the user. On a lower level, the stabilization module modifies this plan to compensate for model errors or any kind of disturbance and keep overall balance. So our approach currently focuses on proactive multi-contact locomotion. However, we also plan to work on a more reactive behavior such that additional hand support can also be triggered by an unexpected disturbance instead of being planned in advance.
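That hierarchy amounts to a perception, planning, and stabilization loop. The skeleton below is purely illustrative (class and method names are placeholders, not LOLA's software), but it shows where proactive multi-contact planning and reactive stabilization would each slot in:

```python
# Illustrative skeleton of a hierarchical locomotion stack.
class VisionSystem:
    def update(self):
        # highest level: build an environment model and estimate the robot's 6D pose
        return {"environment_model": "semantic scene graph", "robot_pose_6d": (0, 0, 0, 0, 0, 0)}

class WalkingPatternGenerator:
    def plan(self, world, goal):
        # plan a dynamically feasible motion toward the goal
        # (footholds and, for multi-contact, hand contact points)
        return {"footsteps": [], "hand_contacts": []}

class Stabilizer:
    def adjust(self, planned_motion, sensor_data):
        # lowest level: modify the plan to compensate for model errors and disturbances
        return planned_motion

def control_cycle(vision, generator, stabilizer, goal, sensor_data):
    world = vision.update()
    planned = generator.plan(world, goal)
    return stabilizer.adjust(planned, sensor_data)

command = control_cycle(VisionSystem(), WalkingPatternGenerator(), Stabilizer(),
                        goal=(2.0, 0.0), sensor_data={})
```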

What are some examples of unique capabilities that you are working towards with LOLA?

One of the main goals for the research with LOLA remains fast, autonomous, and robust locomotion on complex, uneven terrain. We aim to reach a walking speed similar to humans. Currently, LOLA can do multi-contact locomotion and cross uneven terrain at a speed of 1.8 km/h, which is comparably fast for a biped robot but still slow for a human. On flat ground, LOLA's high-end hardware allows it to walk at a relatively high maximum speed of 3.38 km/h.

Fully autonomous multi-contact locomotion for a life-sized humanoid robot is a tough task. As algorithms get more complex, computation time increases, which often results in offline motion planning methods. For LOLA, we restrict ourselves to gaited multi-contact locomotion, which means that we try to preserve the core characteristics of bipedal gait and use the arms only for assistance. This allows us to use simplified models of the robot which lead to very efficient algorithms running in real-time and fully onboard.

A long-term scientific goal with LOLA is to understand essential components and control policies of human walking. LOLA's leg kinematics are relatively similar to those of the human body. Together with scientists from kinesiology, we try to identify similarities and differences between observed human walking and LOLA’s “engineered” walking gait. We hope this research leads, on the one hand, to new ideas for the control of bipeds, and on the other hand, shows via experiments on bipeds whether biomechanical models of human gait are correctly understood. For a comparison of control policies on uneven terrain, LOLA must be able to walk at comparable speeds, which also motivates our research on fast and robust walking.

While it makes sense why the researchers are using LOLA’s arms primarily to assist with a conventional biped gait, looking ahead a bit it’s interesting to think about how robots that we typically consider to be bipeds could potentially leverage their limbs for mobility in decidedly non-human ways.

We’re used to legged robots having one particular morphology, I guess because associating them with either humans or dogs is just a comfortable way to think about them. But there’s no particular reason why a robot with four limbs has to choose between being a quadruped and being a biped with arms; it could be some hybrid of the two, depending on what its task is. The research being done with LOLA could be a step in that direction, and maybe a hand on the wall in that direction, too.


#439023 In ‘Klara and the Sun,’ We Glimpse ...

In a store in the center of an unnamed city, humanoid robots are displayed alongside housewares and magazines. They watch the fast-moving world outside the window, anxiously awaiting the arrival of customers who might buy them and take them home. Among them is Klara, a particularly astute robot who loves the sun and wants to learn as much as possible about humans and the world they live in.

So begins Kazuo Ishiguro’s new novel Klara and the Sun, published earlier this month. The book, told from Klara’s perspective, portrays an eerie future society in which intelligent machines and other advanced technologies have been integrated into daily life, but not everyone is happy about it.

Technological unemployment, the progress of artificial intelligence, inequality, the safety and ethics of gene editing, increasing loneliness and isolation—all of which we’re grappling with today—show up in Ishiguro’s world. It’s like he hit a fast-forward button, mirroring back to us how things might play out if we don’t approach these technologies with caution and foresight.

The wealthy genetically edit or “lift” their children to set them up for success, while the poor have to make do with the regular old brains and bodies bequeathed them by evolution. Lifted and unlifted kids generally don’t mix, and this is just one of many sinister delineations between a new breed of haves and have-nots.

There’s anger about robots’ steady infiltration into everyday life, and questions about how similar their rights should be to those of humans. “First they take the jobs. Then they take the seats at the theater?” one woman fumes.

References to “changes” and “substitutions” allude to an economy where automation has eliminated millions of jobs. While “post-employed” people squat in abandoned buildings and fringe communities arm themselves in preparation for conflict, those whose livelihoods haven’t been destroyed can afford to have live-in housekeepers and buy Artificial Friends (or AFs) for their lonely children.

“The old traditional model that we still live with now—where most of us can get some kind of paid work in exchange for our services or the goods we make—has broken down,” Ishiguro said in a podcast discussion of the novel. “We’re not talking just about the difference between rich and poor getting bigger. We’re talking about a gap appearing between people who participate in society in an obvious way and people who do not.”

He has a point; as much as techno-optimists claim that the economic changes brought by automation and AI will give us all more free time, let us work less, and devote time to our passion projects, how would that actually play out? What would millions of “post-employed” people receiving basic income actually do with their time and energy?

In the novel, we don’t get much of a glimpse of this side of the equation, but we do see how the wealthy live. After a long wait, just as the store manager seems ready to give up on selling her, Klara is chosen by a 14-year-old girl named Josie, the daughter of a woman who wears “high-rank clothes” and lives in a large, sunny home outside the city. Cheerful and kind, Josie suffers from an unspecified illness that periodically flares up and leaves her confined to her bed for days at a time.

Her life seems somewhat bleak, the need for an AF clear. In this future world, the children of the wealthy no longer go to school together, instead studying alone at home on their digital devices. “Interaction meetings” are set up for them to learn to socialize, their parents carefully eavesdropping from the next room and trying not to intervene when there’s conflict or hurt feelings.

Klara does her best to be a friend, aide, and confidante to Josie while continuing to learn about the world around her and decode the mysteries of human behavior. We surmise that she was programmed with a basic ability to understand emotions, which evolves along with her other types of intelligence. “I believe I have many feelings. The more I observe, the more feelings become available to me,” she explains to one character.

Ishiguro does an excellent job of representing Klara’s mind: a blend of pre-determined programming, observation, and continuous learning. Her narration has qualities both robotic and human; we can tell when something has been programmed in—she “Gives Privacy” to the humans around her when that’s appropriate, for example—and when she’s figured something out for herself.

But the author maintains some mystery around Klara’s inner emotional life. “Does she actually understand human emotions, or is she just observing human emotions and simulating them within herself?” he said. “I suppose the question comes back to, what are our emotions as human beings? What do they amount to?”

Klara is particularly attuned to human loneliness, since she essentially was made to help prevent it. It is, in her view, people's biggest fear, and something they'll go to great lengths to avoid, yet can never fully escape. “Perhaps all humans are lonely,” she says.

Warding off loneliness through technology isn’t a futuristic idea, it’s something we’ve been doing for a long time, with the technologies at hand growing more and more sophisticated. Products like AFs already exist. There’s XiaoIce, a chatbot that uses “sentiment analysis” to keep its 660 million users engaged, and Azuma Hikari, a character-based AI designed to “bring comfort” to users whose lives lack emotional connection with other humans.

The mere existence of these tools might seem sinister enough on its own, but it's their widespread adoption that raises the deeper questions; when millions of people use AIs to fill a void in their lives, we have to ask whether technology is building up our ability to connect with each other or tearing it down.

This isn’t the only big question the novel tackles. An overarching theme is one we’ve been increasingly contemplating as computers start to acquire more complex capabilities, like the beginnings of creativity or emotional awareness: What is it that truly makes us human?

“Do you believe in the human heart?” one character asks. “I don’t mean simply the organ, obviously. I’m speaking in the poetic sense. The human heart. Do you think there is such a thing? Something that makes each of us special and individual?”

The alternative, at least in the story, is that people don’t have a unique essence, but rather we’re all a blend of traits and personalities that can be reduced to strings of code. Our understanding of the brain is still elementary, but at some level, doesn’t all human experience boil down to the firing of billions of neurons between our ears? Will we one day—in a future beyond that painted by Ishiguro, but certainly foreshadowed by it—be able to “decode” our humanity to the point that there’s nothing mysterious left about it? “A human heart is bound to be complex,” Klara says. “But it must be limited.”

Whether or not you agree, Klara and the Sun is worth the read. It’s both a marvelous, engaging story about what it means to love and be human, and a prescient warning to approach technological change with caution and nuance. We’re already living in a world where AI keeps us company, influences our behavior, and is wreaking various forms of havoc. Ishiguro’s novel is a snapshot of one of our possible futures, told through the eyes of a robot who keeps you rooting for her to the end.

Image Credit: Marion Wellmann from Pixabay


#438998 Foam Sword Fencing With a PR2 Is the ...

Most of what we cover in the Human Robot Interaction (HRI) space involves collaboration, because collaborative interactions tend to be productive, positive, and happy. Yay! But sometimes, collaboration is not what you want. Sometimes, you want competition.

Competition between humans and robots doesn’t have to be a bad thing, in the same way that competition between humans and humans doesn’t have to be a bad thing. There are all kinds of scenarios in which humans respond favorably to competition, and exercise is an obvious example.

Studies have shown that humans can perform significantly better when they’re exercising competitively as opposed to when they’re exercising individually. And while researchers have looked at whether robots can be effective exercise coaches (they can be), there hasn’t been a lot of exploration of physical robots actually competing directly with humans. Roboticists from the University of Washington decided to put adversarial exercise robots to the test, and they did it by giving a PR2 a giant foam sword. Awesome.

This exercise game matches a PR2 with a human in a zero-sum competitive fencing game with foam swords. Expecting the PR2 to actually be a competitive fencer isn’t realistic because, like, it’s a PR2. Instead, the objective of the game is for the human to keep their foam sword within a target area near the PR2 while also avoiding the PR2’s low-key sword-waving. A VR system allows the user to see the target area, while also giving the system a way to track the user’s location and pose.

Looks like fun, right? It’s also exercise, at least in the sense that the user’s heart rate nearly doubled over their resting heart rate during the highest scoring game. This is super preliminary research, though, and there’s still a lot of work to do. It’ll be important to figure out how skilled a competitive robot should be in order to keep providing a reasonable challenge to a human who gradually improves over time, while also being careful to avoid generating any negative reactions. For example, the robot should probably not beat you over the head with its foam sword, even if that’s a highly effective strategy for getting your heart rate up.
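The scoring mechanic itself is straightforward to sketch: the human earns points while their tracked sword tip stays inside the target region near the PR2, and loses ground whenever the robot's foam sword reaches them. Here's a toy version (the geometry, radii, and time step are assumptions for illustration, not the UW implementation):

```python
# Toy per-timestep scoring for a zero-sum foam-sword fencing game.
import math

def score_step(human_tip, target_center, robot_tip,
               target_radius=0.3, hit_radius=0.15, dt=0.02):
    """Return (human_points, robot_points) earned during one time step."""
    human_pts = dt if math.dist(human_tip, target_center) < target_radius else 0.0
    robot_pts = dt if math.dist(robot_tip, human_tip) < hit_radius else 0.0
    return human_pts, robot_pts

# One 20 ms step: the human is inside the target zone, the robot's sword is far away.
print(score_step(human_tip=(0.5, 0.1, 1.0),
                 target_center=(0.6, 0.0, 1.0),
                 robot_tip=(1.5, 0.5, 1.2)))   # -> (0.02, 0.0)
```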

Competitive Physical Human-Robot Game Play, by Boling Yang, Xiangyu Xie, Golnaz Habibi, and Joshua R. Smith from the University of Washington and MIT, was presented as a late-breaking report at the ACM/IEEE International Conference on Human-Robot Interaction.


#438982 Quantum Computing and Reinforcement ...

Deep reinforcement learning is having a superstar moment.

Powering smarter robots. Simulating human neural networks. Trouncing physicians at medical diagnoses and crushing humanity’s best gamers at Go and Atari. While far from achieving the flexible, quick thinking that comes naturally to humans, this powerful machine learning idea seems unstoppable as a harbinger of better thinking machines.

Except there's a massive roadblock: reinforcement learning algorithms take forever to run. Because the concept behind these algorithms is based on trial and error, a reinforcement learning AI “agent” only learns after being rewarded for its correct decisions. For complex problems, the time it takes an AI agent to try and fail to learn a solution can quickly become untenable.

But what if you could try multiple solutions at once?

This week, an international collaboration led by Dr. Philip Walther at the University of Vienna took the “classic” concept of reinforcement learning and gave it a quantum spin. They designed a hybrid AI that relies on both quantum and run-of-the-mill classic computing, and showed that—thanks to quantum quirkiness—it could simultaneously screen a handful of different ways to solve a problem.

The result is a reinforcement learning AI that learned over 60 percent faster than its non-quantum-enabled peers. This is one of the first tests that shows adding quantum computing can speed up the actual learning process of an AI agent, the authors explained.

Although only challenged with a “toy problem” in the study, the hybrid AI, once scaled, could impact real-world problems such as building an efficient quantum internet. The setup “could readily be integrated within future large-scale quantum communication networks,” the authors wrote.

The Bottleneck
Learning from trial and error comes intuitively to our brains.

Say you’re trying to navigate a new convoluted campground without a map. The goal is to get from the communal bathroom back to your campsite. Dead ends and confusing loops abound. We tackle the problem by deciding to turn either left or right at every branch in the road. One will get us closer to the goal; the other leads to a half hour of walking in circles. Eventually, our brain chemistry rewards correct decisions, so we gradually learn the correct route. (If you’re wondering…yeah, true story.)

Reinforcement learning AI agents operate in a similar trial-and-error way. As a problem becomes more complex, the number—and time—of each trial also skyrockets.
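For a concrete sense of that trial-and-error loop, here's a bare-bones tabular Q-learning agent on a toy "campground" modeled as a corridor of states, with a reward only at the campsite. It's a sketch of textbook reinforcement learning, not the study's agent; the point is how much aimless wandering the first episodes take before the reward ever propagates back, and how that cost balloons with problem size.

```python
# Tabular Q-learning on a corridor: bathroom at state 0, campsite at the end.
import random

N_STATES = 8
ACTIONS = (-1, +1)      # step left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def run_episode(max_steps=500):
    s, steps = 0, 0
    while s != N_STATES - 1 and steps < max_steps:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)  # explore
        else:
            a = max(ACTIONS, key=lambda x: (q[(s, x)], random.random()))  # exploit, random tie-break
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        q[(s, a)] += alpha * (reward + gamma * max(q[(s_next, b)] for b in ACTIONS) - q[(s, a)])
        s, steps = s_next, steps + 1
    return steps

print("first episode:", run_episode(), "steps of wandering")
for _ in range(48):
    run_episode()
print("50th episode:", run_episode(), "steps")   # typically close to the optimal 7 steps
```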

“Even in a moderately realistic environment, it may simply take too long to rationally respond to a given situation,” explained study author Dr. Hans Briegel at the Universität Innsbruck in Austria, who previously led efforts to speed up AI decision-making using quantum mechanics. If there’s pressure that allows “only a certain time for a response, an agent may then be unable to cope with the situation and to learn at all,” he wrote.

There have been many attempts to speed up reinforcement learning: giving the AI agent a short-term “memory,” or tapping into neuromorphic computing, which better resembles the brain. In 2014, Briegel and colleagues showed that a “quantum brain” of sorts can help propel an AI agent's decision-making process after learning. But speeding up the learning process itself has eluded our best attempts.

The Hybrid AI
The new study went straight for that previously untenable jugular.

The team’s key insight was to tap into the best of both worlds—quantum and classical computing. Rather than building an entire reinforcement learning system using quantum mechanics, they turned to a hybrid approach that could prove to be more practical. Here, the AI agent uses quantum weirdness as it’s trying out new approaches—the “trial” in trial and error. The system then passes the baton to a classical computer to give the AI its reward—or not—based on its performance.

At the heart of the quantum “trial” process is a quirk called superposition. Stay with me. Our computers are powered by electrons, which can represent only two states—0 or 1. Quantum mechanics is far weirder, in that photons (particles of light) can simultaneously be both 0 and 1, with a slightly different probability of “leaning towards” one or the other.

This noncommittal oddity is part of what makes quantum computing so powerful. Take our reinforcement learning example of navigating a new campsite. In our classic world, we—and our AI—need to decide between turning left or right at an intersection. In a quantum setup, however, the AI can (in a sense) turn left and right at the same time. So when searching for the correct path back to home base, the quantum system has a leg up in that it can simultaneously explore multiple routes, making it far faster than conventional, consecutive trial and error.

“As a consequence, an agent that can explore its environment in superposition will learn significantly faster than its classical counterpart,” said Briegel.
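Here's a classical numpy toy (emphatically not the photonic experiment, just a sketch of the underlying trick, amplitude amplification) showing why a superposed "trial" can pay off: once the classical feedback marks which of four actions was rewarded, a single oracle-plus-diffusion step concentrates all the probability on that action, so the next sample gets it right every time instead of one time in four.

```python
# Classical simulation of one round of Grover-style amplitude amplification.
import numpy as np

n_actions = 4
amplitudes = np.full(n_actions, 1 / np.sqrt(n_actions))  # uniform superposition over actions
rewarded_action = 2                                       # known only to the classical "environment"

# Oracle: classical feedback flips the sign of the rewarded action's amplitude.
amplitudes[rewarded_action] *= -1
# Diffusion: reflect all amplitudes about their mean.
amplitudes = 2 * amplitudes.mean() - amplitudes

probabilities = amplitudes ** 2
print(probabilities)   # [0. 0. 1. 0.] -> the rewarded action is now sampled every time
```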

It’s not all theory. To test out their idea, the team turned to a programmable chip called a nanophotonic processor. Think of it as a CPU-like computer chip, but it processes particles of light—photons—rather than electricity. These light-powered chips have been a long time in the making. Back in 2017, for example, a team from MIT built a fully optical neural network into an optical chip to bolster deep learning.

The chips aren’t all that exotic. Nanophotonic processors act kind of like our eyeglasses, which can carry out complex calculations that transform light that passes through them. In the glasses case, they let people see better. For a light-based computer chip, it allows computation. Rather than using electrical cables, the chips use “wave guides” to shuttle photons and perform calculations based on their interactions.

The “error” or “reward” part of the new hardware comes from a classical computer. The nanophotonic processor is coupled to a traditional computer, where the latter provides the quantum circuit with feedback—that is, whether to reward a solution or not. This setup, the team explains, allows them to more objectively judge any speed-ups in learning in real time.

In this way, a hybrid reinforcement learning agent alternates between quantum and classical computing, trying out ideas in wibbly-wobbly “multiverse” land while obtaining feedback in grounded, classic physics “normality.”

A Quantum Boost
In simulations using 10,000 AI agents and actual experimental data from 165 trials, the hybrid approach, when challenged with a more complex problem, showed a clear leg up.

The key word is “complex.” The team found that if an AI agent has a high chance of figuring out the solution anyway—as for a simple problem—then classical computing works pretty well. The quantum advantage blossoms when the task becomes more complex or difficult, allowing quantum mechanics to fully flex its superposition muscles. For these problems, the hybrid AI was 63 percent faster at learning a solution compared to traditional reinforcement learning, decreasing its learning effort from 270 guesses to 100.

Now that scientists have shown a quantum boost for reinforcement learning speeds, the race for next-generation computing is even more lit. Photonics hardware required for long-range light-based communications is rapidly shrinking, while improving signal quality. The partial-quantum setup could “aid specifically in problems where frequent search is needed, for example, network routing problems,” which are central to a smooth-running internet, the authors wrote. With a quantum boost, reinforcement learning may be able to tackle far more complex problems—those in the real world—than currently possible.

“We are just at the beginning of understanding the possibilities of quantum artificial intelligence,” said lead author Walther.

Image Credit: Oleg Gamulinskiy from Pixabay
