
#438982 Quantum Computing and Reinforcement ...

Deep reinforcement learning is having a superstar moment.

Powering smarter robots. Simulating human neural networks. Trouncing physicians at medical diagnoses and crushing humanity’s best gamers at Go and Atari. While far from achieving the flexible, quick thinking that comes naturally to humans, this powerful machine learning idea seems unstoppable as a harbinger of better thinking machines.

Except there’s a massive roadblock: these algorithms take forever to run. Because they’re based on trial and error, a reinforcement learning AI “agent” only learns after being rewarded for its correct decisions. For complex problems, the time it takes an agent to try, fail, and eventually learn a solution can quickly become untenable.

But what if you could try multiple solutions at once?

This week, an international collaboration led by Dr. Philip Walther at the University of Vienna took the “classic” concept of reinforcement learning and gave it a quantum spin. They designed a hybrid AI that relies on both quantum and run-of-the-mill classic computing, and showed that—thanks to quantum quirkiness—it could simultaneously screen a handful of different ways to solve a problem.

The result is a reinforcement learning AI that learned over 60 percent faster than its non-quantum-enabled peers. This is one of the first experiments to show that adding quantum computing can speed up the actual learning process of an AI agent, the authors explained.

Although only challenged with a “toy problem” in the study, the hybrid AI, once scaled, could impact real-world problems such as building an efficient quantum internet. The setup “could readily be integrated within future large-scale quantum communication networks,” the authors wrote.

The Bottleneck
Learning from trial and error comes intuitively to our brains.

Say you’re trying to navigate a new, convoluted campground without a map. The goal is to get from the communal bathroom back to your campsite. Dead ends and confusing loops abound. We tackle the problem by deciding to turn either left or right at every branch in the road. One choice will get us closer to the goal; the other leads to a half hour of walking in circles. Eventually, our brain chemistry rewards correct decisions, so we gradually learn the correct route. (If you’re wondering…yeah, true story.)

Reinforcement learning AI agents operate in a similar trial-and-error way. As a problem becomes more complex, the number of trials—and the time each one takes—skyrockets.
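For readers who like to see the idea in code, here’s a minimal sketch of that reward-driven loop, written in Python against a made-up three-junction campground. The route, reward values, and learning rate are illustrative assumptions, not details from the study.

```python
import random

# Toy version of the campground problem: at each of three junctions the agent
# turns "L" or "R"; exactly one sequence of turns leads back to the campsite.
GOAL = ("L", "R", "L")          # illustrative "correct route," not from the study
ACTIONS = ("L", "R")

# One learned preference per (junction, action); higher means "pick me more often."
values = {(j, a): 0.0 for j in range(3) for a in ACTIONS}

def pick(junction, epsilon=0.2):
    """Mostly exploit the best-known turn, occasionally explore the other one."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: values[(junction, a)])

trials = 0
while True:
    trials += 1
    route = tuple(pick(j) for j in range(3))   # the "trial"
    reward = 1.0 if route == GOAL else 0.0     # the reward (or lack of it)
    for j, a in enumerate(route):              # nudge tried actions toward the outcome
        values[(j, a)] += 0.1 * (reward - values[(j, a)])
    if all(values[(j, GOAL[j])] > 0.5 for j in range(3)):
        break                                  # the correct route is now reliably preferred

print(f"Learned route {GOAL} after {trials} trials")
```

Even this toy agent typically needs a pile of trials to settle on three turns; add more junctions and the count balloons—exactly the bottleneck described here.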

“Even in a moderately realistic environment, it may simply take too long to rationally respond to a given situation,” explained study author Dr. Hans Briegel at the Universität Innsbruck in Austria, who previously led efforts to speed up AI decision-making using quantum mechanics. If there’s pressure that allows “only a certain time for a response, an agent may then be unable to cope with the situation and to learn at all,” he wrote.

Many attempts have been made to speed up reinforcement learning: giving the AI agent a short-term “memory,” or tapping into neuromorphic computing, which better resembles the brain. In 2014, Briegel and colleagues showed that a “quantum brain” of sorts can help propel an AI agent’s decision-making process after learning. But speeding up the learning process itself has eluded our best attempts.

The Hybrid AI
The new study went straight for that jugular: the learning process itself.

The team’s key insight was to tap into the best of both worlds—quantum and classical computing. Rather than building an entire reinforcement learning system using quantum mechanics, they turned to a hybrid approach that could prove to be more practical. Here, the AI agent uses quantum weirdness as it’s trying out new approaches—the “trial” in trial and error. The system then passes the baton to a classical computer to give the AI its reward—or not—based on its performance.

At the heart of the quantum “trial” process is a quirk called superposition. Stay with me. Classical computers store information in bits, which can represent only one of two states—0 or 1. Quantum mechanics is far weirder: a photon (a particle of light) can simultaneously be both 0 and 1, with different probabilities of “leaning towards” one or the other.

This noncommittal oddity is part of what makes quantum computing so powerful. Take our reinforcement learning example of navigating a new campsite. In our classic world, we—and our AI—need to decide between turning left or right at an intersection. In a quantum setup, however, the AI can (in a sense) turn left and right at the same time. So when searching for the correct path back to home base, the quantum system has a leg up in that it can simultaneously explore multiple routes, making it far faster than conventional, one-route-at-a-time trial and error.
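As a loose illustration—this isn’t the team’s photonic hardware, just a sketch of the math—a single quantum-style choice between “left” and “right” can be written as a pair of amplitudes; squaring them gives the odds of each outcome when the photon is finally measured. The 70/30 weighting below is an arbitrary example.

```python
import numpy as np

# A classical bit commits to one turn: left OR right.
# A qubit-like photon can hold both at once, weighted by amplitudes a and b
# with |a|^2 + |b|^2 = 1.
amp_left, amp_right = np.sqrt(0.7), np.sqrt(0.3)   # illustrative weights
state = np.array([amp_left, amp_right])            # superposition of "left" and "right"

probs = np.abs(state) ** 2                         # squared amplitudes give probabilities
print(f"P(left) = {probs[0]:.2f}, P(right) = {probs[1]:.2f}")

# One "trial" is one measurement: the superposition collapses to a single turn.
rng = np.random.default_rng()
print("Measured turn:", rng.choice(["left", "right"], p=probs))
```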

“As a consequence, an agent that can explore its environment in superposition will learn significantly faster than its classical counterpart,” said Briegel.

It’s not all theory. To test out their idea, the team turned to a programmable chip called a nanophotonic processor. Think of it as a CPU-like computer chip, but it processes particles of light—photons—rather than electricity. These light-powered chips have been a long time in the making. Back in 2017, for example, a team from MIT built a fully optical neural network into an optical chip to bolster deep learning.

The chips aren’t all that exotic. A nanophotonic processor acts a bit like a pair of eyeglasses, which in effect perform a complex transformation on the light passing through them. In the case of glasses, that transformation helps people see better; in a light-based computer chip, it performs computation. Rather than using electrical cables, the chips use “waveguides” to shuttle photons and perform calculations based on their interactions.

The “error” or “reward” part of the new hardware comes from a classical computer. The nanophotonic processor is coupled to a traditional computer, where the latter provides the quantum circuit with feedback—that is, whether to reward a solution or not. This setup, the team explains, allows them to more objectively judge any speed-ups in learning in real time.

In this way, the hybrid reinforcement learning agent alternates between quantum and classical computing, trying out ideas in wibbly-wobbly “multiverse” land while obtaining feedback in grounded, classical-physics “normality.”
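Sketched as a loop, that handoff might look something like the snippet below. Both sample_from_quantum_circuit and classical_reward are hypothetical stand-ins—placeholders for the nanophotonic trial step and the classical feedback step described above—and the weight update is an illustrative guess, not the authors’ actual learning rule.

```python
import random

def sample_from_quantum_circuit(weights):
    """Placeholder for the photonic 'trial' step: in the real setup a photon in
    superposition explores candidate actions; here we simply sample one action
    in proportion to the agent's current preferences."""
    total = sum(weights.values())
    return random.choices(list(weights), [w / total for w in weights.values()])[0]

def classical_reward(action, correct_action):
    """Placeholder for the classical computer's feedback on the tried action."""
    return 1.0 if action == correct_action else 0.0

weights = {"left": 1.0, "right": 1.0}    # hypothetical two-action problem
correct = "left"

for episode in range(200):
    action = sample_from_quantum_circuit(weights)   # quantum-flavored trial
    reward = classical_reward(action, correct)      # classical error/reward check
    weights[action] += reward                       # reinforce rewarded choices

print(weights)   # the rewarded action ends up with most of the weight
```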

A Quantum Boost
In simulations using 10,000 AI agents and actual experimental data from 165 trials, the hybrid approach, when challenged with a more complex problem, showed a clear leg up.

The key word is “complex.” The team found that if an AI agent has a high chance of figuring out the solution anyway—as for a simple problem—then classical computing works pretty well. The quantum advantage blossoms when the task becomes more complex or difficult, allowing quantum mechanics to fully flex its superposition muscles. For these problems, the hybrid AI was 63 percent faster at learning a solution than traditional reinforcement learning, cutting its learning effort from 270 guesses to 100—a drop of (270 − 100)/270, or roughly 63 percent.

Now that scientists have shown a quantum boost for reinforcement learning speeds, the race for next-generation computing is even more lit. Photonics hardware required for long-range light-based communications is rapidly shrinking, while improving signal quality. The partial-quantum setup could “aid specifically in problems where frequent search is needed, for example, network routing problems” that are central to a smooth-running internet, the authors wrote. With a quantum boost, reinforcement learning may be able to tackle far more complex problems—those in the real world—than currently possible.

“We are just at the beginning of understanding the possibilities of quantum artificial intelligence,” said lead author Walther.

Image Credit: Oleg Gamulinskiy from Pixabay


#438779 Meet Catfish Charlie, the CIA’s ...

Photo: CIA Museum

CIA roboticists designed Catfish Charlie to take water samples undetected. Why they wanted a spy fish for such a purpose remains classified.

In 1961, Tom Rogers of the Leo Burnett Agency created Charlie the Tuna, a jive-talking cartoon mascot and spokesfish for the StarKist brand. The popular ad campaign ran for several decades, and its catchphrase “Sorry, Charlie” quickly hooked itself into the American lexicon.

When the CIA’s Office of Advanced Technologies and Programs started conducting some fish-focused research in the 1990s, Charlie must have seemed like the perfect code name. Except that the CIA’s Charlie was a catfish. And it was a robot.

More precisely, Charlie was an unmanned underwater vehicle (UUV) designed to surreptitiously collect water samples. Its handler controlled the fish via a line-of-sight radio handset. Not much has been revealed about the fish’s construction except that its body contained a pressure hull, ballast system, and communications system, while its tail housed the propulsion. At 61 centimeters long, Charlie wouldn’t set any biggest-fish records. (Some species of catfish can grow to 2 meters.) Whether Charlie reeled in any useful intel is unknown, as details of its missions are still classified.

For exploring watery environments, nothing beats a robot
The CIA was far from alone in its pursuit of UUVs, nor was it the first agency to take up the challenge. In the United States, such research began in earnest in the 1950s, with the U.S. Navy’s funding of technology for deep-sea rescue and salvage operations. Other projects looked at sea drones for surveillance and scientific data collection.

Aaron Marburg, a principal electrical and computer engineer who works on UUVs at the University of Washington’s Applied Physics Laboratory, notes that the world’s oceans are largely off-limits to crewed vessels. “The nature of the oceans is that we can only go there with robots,” he told me in a recent Zoom call. To explore those uncharted regions, he said, “we are forced to solve the technical problems and make the robots work.”

Image: Thomas Wells/Applied Physics Laboratory/University of Washington

An oil painting commemorates SPURV, a series of underwater research robots built by the University of Washington’s Applied Physics Lab. In nearly 400 deployments, no SPURVs were lost.

One of the earliest UUVs happens to sit in the hall outside Marburg’s office: the Self-Propelled Underwater Research Vehicle, or SPURV, developed at the applied physics lab beginning in the late ’50s. SPURV’s original purpose was to gather data on the physical properties of the sea, in particular temperature and sound velocity. Unlike Charlie, with its fishy exterior, SPURV had a utilitarian torpedo shape that was more in line with its mission. Just over 3 meters long, it could dive to 3,600 meters, had a top speed of 2.5 m/s, and operated for 5.5 hours on a battery pack. Data was recorded to magnetic tape and later transferred to a photosensitive paper strip recorder or other computer-compatible media and then plotted using an IBM 1130.

Over time, SPURV’s instrumentation grew more capable, and the scope of the project expanded. In one study, for example, SPURV carried a fluorometer to measure the dispersion of dye in the water, to support wake studies. The project was so successful that additional SPURVs were developed, eventually completing nearly 400 missions by the time it ended in 1979.

Working on underwater robots, Marburg says, means balancing technical risks and mission objectives against constraints on funding and other resources. Support for purely speculative research in this area is rare. The goal, then, is to build UUVs that are simple, effective, and reliable. “No one wants to write a report to their funders saying, ‘Sorry, the batteries died, and we lost our million-dollar robot fish in a current,’ ” Marburg says.

A robot fish called SoFi
Since SPURV, there have been many other unmanned underwater vehicles, of various shapes and sizes and for various missions, developed in the United States and elsewhere. UUVs and their autonomous cousins, AUVs, are now routinely used for scientific research, education, and surveillance.

At least a few of these robots have been fish-inspired. In the mid-1990s, for instance, engineers at MIT worked on a RoboTuna, also nicknamed Charlie. Modeled loosely on a bluefin tuna, it had a propulsion system that mimicked the tail fin of a real fish. This was a big departure from the screws or propellers used on UUVs like SPURV. But this Charlie never swam on its own; it was always tethered to a bank of instruments. The MIT group’s next effort, a RoboPike called Wanda, overcame this limitation and swam freely, but never learned to avoid running into the sides of its tank.

Fast-forward 25 years, and a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled SoFi, a decidedly more fishy robot designed to swim next to real fish without disturbing them. Controlled by a retrofitted Super Nintendo handset, SoFi could dive more than 15 meters, control its own buoyancy, and swim around for up to 40 minutes between battery charges. Noting that SoFi’s creators tested their robot fish in the gorgeous waters off Fiji, IEEE Spectrum’s Evan Ackerman wrote, “Part of me is convinced that roboticists take on projects like these…because it’s a great way to justify a trip somewhere exotic.”

SoFi, Wanda, and both Charlies are all examples of biomimetics, a term coined in 1974 to describe the study of biological mechanisms, processes, structures, and substances. Biomimetics looks to nature to inspire design.

Sometimes, the resulting technology proves to be more efficient than its natural counterpart, as Richard James Clapham discovered while researching robotic fish for his Ph.D. at the University of Essex, in England. Under the supervision of robotics expert Huosheng Hu, Clapham studied the swimming motion of Cyprinus carpio, the common carp. He then developed four robots that incorporated carplike swimming, the most capable of which was iSplash-II. When tested under ideal conditions—that is, a tank 5 meters long, 2 meters wide, and 1.5 meters deep—iSplash-II achieved a maximum velocity of 11.6 body lengths per second (or about 3.7 m/s). That’s faster than a real carp, which averages a top velocity of 10 body lengths per second. But iSplash-II fell short of the peak performance of a fish darting quickly to avoid a predator.

Of course, swimming in a test pool or placid lake is one thing; surviving the rough and tumble of a breaking wave is another matter. The latter is something that roboticist Kathryn Daltorio has explored in depth.

Daltorio, an assistant professor at Case Western Reserve University and codirector of the Center for Biologically Inspired Robotics Research there, has studied the movements of cockroaches, earthworms, and crabs for clues on how to build better robots. After watching a crab navigate from the sandy beach to shallow water without being thrown off course by a wave, she was inspired to create an amphibious robot with tapered, curved feet that could dig into the sand. This design allowed her robot to withstand forces up to 138 percent of its body weight.

Photo: Nicole Graf

This robotic crab created by Case Western’s Kathryn Daltorio imitates how real crabs grab the sand to avoid being toppled by waves.

In her designs, Daltorio is following architect Louis Sullivan’s famous maxim: Form follows function. She isn’t trying to imitate the aesthetics of nature—her robot bears only a passing resemblance to a crab—but rather the best functionality. She looks at how animals interact with their environments and steals evolution’s best ideas.

And yet, Daltorio admits, there is also a place for realistic-looking robotic fish, because they can capture the imagination and spark interest in robotics as well as nature. And unlike a hyperrealistic humanoid, a robotic fish is unlikely to fall into the creepiness of the uncanny valley.

In writing this column, I was delighted to come across plenty of recent examples of such robotic fish. Ryomei Engineering, a subsidiary of Mitsubishi Heavy Industries, has developed several: a robo-coelacanth, a robotic gold koi, and a robotic carp. The coelacanth was designed as an educational tool for aquariums, to present a lifelike specimen of a rarely seen fish that is often only known by its fossil record. Meanwhile, engineers at the University of Kitakyushu in Japan created Tai-robot-kun, a credible-looking sea bream. And a team at Evologics, based in Berlin, came up with the BOSS manta ray.

Whatever their official purpose, these nature-inspired robocreatures can inspire us in return. UUVs that open up new and wondrous vistas on the world’s oceans can extend humankind’s ability to explore. We create them, and they enhance us, and that strikes me as a very fair and worthy exchange.

This article appears in the March 2021 print issue as “Catfish, Robot, Swimmer, Spy.”

About the Author
Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.


#438749 Folding Drone Can Drop Into Inaccessible ...

Inspecting old mines is a dangerous business. For humans, mines can be lethal: prone to rockfalls and filled with noxious gases. Robots can go where humans might suffocate, but even robots can only do so much when mines are inaccessible from the surface.

Now, researchers in the UK, led by Headlight AI, have developed a drone that could cast a light in the darkness. Named Prometheus, this drone can enter a mine through a borehole not much larger than a football, before unfurling its arms and flying around the void. Once down there, it can use its payload of scanning equipment to map mines where neither humans nor robots can presently go. This, the researchers hope, could make mine inspection quicker and easier. The team behind Prometheus published its design in November in the journal Robotics.

Mine inspection might seem like a peculiarly specific task to fret about, but old mines can collapse, causing the ground to sink and damaging nearby buildings. It’s a far-reaching threat: the geotechnical engineering firm Geoinvestigate, based in Northeast England, estimates that around 8 percent of all buildings in the UK are at risk from any of the thousands of abandoned coal mines near the country’s surface. It’s also a threat to transport, such as road and rail. Indeed, Prometheus is backed by Network Rail, which operates Britain’s railway infrastructure.

Such grave dangers mean that old mines need periodic check-ups. To enter depths that are forbidden to traditional wheeled robots—such as those featured in the DARPA SubT Challenge—inspectors today drill boreholes down into the mine and lower scanners into the darkness.

But that can be an arduous and often fruitless process. Inspecting the entirety of a mine can take multiple boreholes, and that still might not be enough to chart a complete picture. Mines are jagged, labyrinthine places, and much of the void might lie out of sight. Furthermore, many old mines aren’t well-mapped, so it’s hard to tell where best to enter them.

Prometheus can fly around some of those challenges. Inspectors can lower Prometheus, tethered to a docking apparatus, down a single borehole. Once inside the mine, the drone can undock and fly around, using LIDAR scanners—common in mine inspection today—to generate a 3D map of the unknown void. Prometheus can fly through the mine autonomously, using infrared data to plot out its own course.

Other drones exist that can fly underground, but they’re either too small to carry a relatively heavy payload of scanning equipment, or too large to easily fit down a borehole. What makes Prometheus unique is its ability to fold its arms, allowing it to squeeze down spaces its counterparts cannot.

It’s that ability to fold and enter a borehole that makes Prometheus remarkable, says Jason Gross, a professor of mechanical and aerospace engineering at West Virginia University. Gross calls Prometheus “an exciting idea,” but he does note that it has a relatively short flight window and few abilities beyond scanning.

The researchers have conducted a number of successful test flights, both in a basement and in an old mine near Shrewsbury, England. Not only was Prometheus able to map out its space, but it was also able to plot its own course through an unknown area.

The researchers’ next steps, according to Puneet Chhabra, co-founder of Headlight AI, will be to test Prometheus’s ability to unfold in an actual mine. Following that, researchers plan to conduct full-scale test flights by the end of 2021.


#438747 The appearance of robots affects our ...

'Moralities of Intelligent Machines' is a project that investigates people's attitudes towards moral choices made by artificial intelligence. In the latest study completed under the project, participants read short narratives in which either a robot, a somewhat humanoid robot known as iRobot, a robot with a strongly humanoid appearance called iClooney, or a human being encounters a moral problem along the lines of the trolley dilemma and makes a specific decision. The participants were also shown images of these agents, after which they assessed the morality of their decisions. The study was funded by the Jane and Aatos Erkko Foundation and the Academy of Finland.


#438731 Video Friday: Perseverance Lands on Mars

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30–June 5, 2021 – Xi'an, China
Let us know if you have suggestions for next week, and enjoy today's videos.

Hmm, did anything interesting happen in robotics yesterday?

Obviously, we're going to have tons more on the Mars Rover and Mars Helicopter over the next days, weeks, months, years, and (if JPL's track record has anything to say about it) decades. Meantime, here's what's going to happen over the next day or two:

[ Mars 2020 ]

PLEN hopes you had a happy Valentine's Day!

[ PLEN ]

Unitree dressed up a whole bunch of Laikago quadrupeds to take part in the 2021 Spring Festival Gala in China.

[ Unitree ]

Thanks Xingxing!

Marine iguanas compete for the best nesting sites on the Galapagos Islands. Meanwhile, RoboSpy Iguana gets involved in a snot-sneezing competition after the marine iguanas return from the sea.

[ Spy in the Wild ]

Tails, it turns out, are useful for almost everything.

[ DART Lab ]

Partnered with MD-TEC, this video demonstrates the use of teleoperated robotic arms and a virtual reality interface to perform closed suction for self-ventilating tracheostomy patients during the COVID-19 outbreak. Use of closed suction is recommended to minimise aerosols generated during this procedure. This robotic method avoids staff exposure to the virus, further protecting NHS workers.

[ Extend Robotics ]

Fotokite is a safe, practical way to do local surveillance with a drone.

I just wish they still had a consumer version 🙁

[ Fotokite ]

How to confuse fish.

[ Harvard ]

Army researchers recently expanded their research area for robotics to a site just north of Baltimore. Earlier this year, Army researchers performed the first fully autonomous tests onsite using an unmanned ground vehicle test bed platform, which serves as the standard baseline configuration for multiple programmatic efforts within the laboratory. As a means to transition from simulation-based testing, the primary purpose of this test event was to capture relevant data in a live, operationally relevant environment.

[ Army ]

Flexiv's new RIZON 10 robot hopes you had a happy Valentine's Day!

[ Flexiv ]

Thanks Yunfan!

An inchworm-inspired crawling robot (iCrawl) is a 5-DOF robot with two legs, each with an electromagnetic foot for crawling on metal pipe surfaces. The robot uses a passive foot-cap underneath each electromagnetic foot, enabling it to be a versatile pipe-crawler. It can crawl on metal pipes of various curvatures in horizontal and vertical directions, and can serve as a new robotic solution for close inspection of the outside of pipelines, thus minimizing downtime in the oil and gas industry.

[ Paper ]

Thanks Poramate!

A short film about Robot Wars from Blender Magazine in 1995.

[ YouTube ]

While modern cameras provide machines with a very well-developed sense of vision, robots still lack such a comprehensive solution for their sense of touch. The talk will present examples of why the sense of touch can prove crucial for a wide range of robotic applications, and a tech demo will introduce a novel sensing technology targeting the next generation of soft robotic skins. The prototype of the tactile sensor developed at ETH Zurich exploits the advances in camera technology to reconstruct the forces applied to a soft membrane. This technology has the potential to revolutionize robotic manipulation, human-robot interaction, and prosthetics.

[ ETHZ ]

Thanks Markus!

Quadrupedal robotics has reached a level of performance and maturity that enables some of the most advanced real-world applications with autonomous mobile robots. Driven by excellent research in academia and industry all around the world, a growing number of platforms with different skills target different applications and markets. We have invited a selection of experts with long-standing experience in this vibrant research area.

[ IFRR ]

Thanks Fan!

Since January 2020, more than 300 different robots in over 40 countries have been used to cope with some aspect of the impact of the coronavirus pandemic on society. The majority of these robots have been used to support clinical care and public safety, allowing responders to work safely and to handle the surge in infections. This panel will discuss how robots have been successfully used and what is needed, both in terms of fundamental research and policy, for robotics to be prepared for future emergencies.

[ IFRR ]

At Skydio, we ship autonomous robots that are flown at scale in complex, unknown environments every day. We’ve invested six years of R&D into handling extreme visual scenarios not typically considered by academia nor encountered by cars, ground robots, or AR applications. Drones are commonly in scenes with few or no semantic priors on the environment and must deftly navigate thin objects, extreme lighting, camera artifacts, motion blur, textureless surfaces, vibrations, dirt, smudges, and fog. These challenges are daunting for classical vision, because photometric signals are simply inconsistent. And yet, there is no ground truth for direct supervision of deep networks. We’ll take a detailed look at these issues and how we’ve tackled them to push the state of the art in visual-inertial navigation, obstacle avoidance, and rapid trajectory planning. We will also cover the new capabilities on top of our core navigation engine to autonomously map complex scenes and capture all surfaces, by performing real-time 3D reconstruction across multiple flights.

[ UPenn ]
