
#439100 Video Friday: Robotic Eyeball Camera

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30 – June 5, 2021 – Xi'an, China
RoboCup 2021 – June 22-28, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
Let us know if you have suggestions for next week, and enjoy today's videos.

What if seeing devices looked like us? Eyecam is a prototype exploring the potential future design of sensing devices. Eyecam is a webcam shaped like a human eye that can see, blink, look around and observe us.

And it's open source, so you can build your own!

[ Eyecam ]

Looks like Festo will be turning some of its bionic robots into educational kits, which is a pretty cool idea.

[ Bionics4Education ]

Underwater soft robots are challenging to model and control because of their high degrees of freedom and their intricate coupling with water. In this paper, we present a method that leverages recent developments in differentiable simulation, coupled with a differentiable, analytical hydrodynamic model, to assist with the modeling and control of an underwater soft robot. We apply this method to Starfish, a customized soft robot design that is easy to fabricate and intuitive to manipulate.

[ MIT CSAIL ]
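
The paper itself is far more sophisticated, but the core idea (differentiating through a simulated dynamics model to optimize a controller) can be sketched in a few lines of Python. The toy below hand-derives the gradient of a 1-D "swimmer" with quadratic drag and runs gradient descent on its thrust schedule; the dynamics, constants, and names are hypothetical stand-ins, not the authors' model.

# Illustrative sketch only: gradient-based control through a hand-differentiated
# dynamics model with quadratic (hydrodynamic-style) drag. Not the paper's code.
DT, STEPS, DRAG_C, TARGET = 0.05, 60, 0.8, 1.0

def rollout(u):
    """Simulate forward; return the final position and the velocity trajectory."""
    x, v, vs = 0.0, 0.0, []
    for t in range(STEPS):
        vs.append(v)
        v = v + DT * (u[t] - DRAG_C * v * abs(v))  # thrust minus quadratic drag
        x = x + DT * v
    return x, vs

def grad(u):
    """Reverse-mode (adjoint) gradient of (x_T - TARGET)^2 with respect to u."""
    x_final, vs = rollout(u)
    dx, dv = 2.0 * (x_final - TARGET), 0.0
    g = [0.0] * STEPS
    for t in reversed(range(STEPS)):
        dv += DT * dx                                # x_{t+1} = x_t + DT * v_{t+1}
        g[t] = DT * dv                               # v_{t+1} depends on u_t via DT
        dv *= 1.0 - DT * DRAG_C * 2.0 * abs(vs[t])   # d v_{t+1} / d v_t
    return g

u = [0.0] * STEPS
for _ in range(200):                                 # plain gradient descent on the controls
    u = [ui - 0.5 * gi for ui, gi in zip(u, grad(u))]

print("final position:", round(rollout(u)[0], 3), "target:", TARGET)

A real differentiable simulator replaces the hand-written grad() with automatic differentiation, which is what makes the approach tractable for soft bodies with many coupled degrees of freedom.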

Rainbow Robotics, the company that made HUBO, has a new collaborative robot arm.

[ Rainbow Robotics ]

Thanks Fan!

We develop an integrated robotic platform for advanced collaborative robots and demonstrate an application of multiple robots collaboratively transporting an object to different positions in a factory environment. The proposed platform integrates a drone, a mobile manipulator robot, and a dual-arm robot to work autonomously, while also collaborating with a human worker. The platform also demonstrates the potential of a novel manufacturing process, which incorporates adaptive and collaborative intelligence to improve the efficiency of mass customization for the factory of the future.

[ Paper ]

Thanks Poramate!

At Sevastopol State University, a team from the Laboratory of Underwater Robotics and Control Systems, together with the Research and Production Association “Android Technika,” performed tests of an underwater anthropomorphic manipulator robot.

[ Sevastopol State ]

Thanks Fan!

Taiwanese company TCI Gene created a COVID test system based on their fully automated and enclosed gene testing machine QVS-96S. The system includes two ABB robots and carries out 1,800 tests per day, operating 24/7. Every hour, 96 virus sample tests are made with an accuracy of 99.99%.

[ ABB ]

A short video showing how a Halodi Robotics robot can be used in a commercial guarding application.

[ Halodi ]

During the past five years, under the NASA Early Space Innovations program, we have been developing new design optimization methods for underactuated robot hands, aiming to achieve versatile manipulation in highly constrained environments. We have prototyped hands for NASA’s Astrobee robot, an in-orbit assistive free flyer for the International Space Station.

[ ROAM Lab ]

The new, improved OTTO 1500 is a workhorse AMR designed to move heavy payloads through demanding environments faster than any other AMR on the market, with zero compromise to safety.

[ OTTO Motors ]

Very, very high performance sensing and actuation to pull this off.

[ Ishikawa Group ]

We introduce a conversational social robot designed for long-term in-home use to help with loneliness. We present a novel robot behavior design to have simple self-reflection conversations with people to improve wellness, while still being feasible, deployable, and safe.

[ HCI Lab ]

We are one of the 5 winners of the Start-up Challenge. This video illustrates what we achieved during the Swisscom 5G exploration week. Our proof-of-concept tele-excavation system is composed of a Menzi Muck M545 walking excavator, automated and customized by the Robotic Systems Lab, and an IBEX motion platform as the operator station. The operator and remote machine are connected for the first time via a 5G network infrastructure, which was brought to our test field by Swisscom.

[ RSL ]

This video shows LOLA balancing on different terrain when being pushed in different directions. The robot is technically blind, not using any camera-based or prior information on the terrain (hard ground is assumed).

[ TUM ]

Autonomous driving when you cannot see the road at all because it's buried in snow is some serious autonomous driving.

[ Norlab ]

A hierarchical and robust framework for learning bipedal locomotion is presented and successfully implemented on the 3D biped robot Digit. The feasibility of the method is demonstrated by successfully transferring the learned policy in simulation to the Digit robot hardware, realizing sustained walking gaits under external force disturbances and challenging terrains not included during the training process.

[ OSU ]

This is a video summary of the Center for Robot-Assisted Search and Rescue's deployments under the direction of emergency response agencies to more than 30 disasters in five countries from 2001 (9/11 World Trade Center) to 2018 (Hurricane Michael). It includes the first use of ground robots for a disaster (WTC, 2001), the first use of small unmanned aerial systems (Hurricane Katrina, 2005), and the first use of water surface vehicles (Hurricane Wilma, 2005).

[ CRASAR ]

In March, a team from the Oxford Robotics Institute collected a week of epic off-road driving data, as part of the Sense-Assess-eXplain (SAX) project.

[ Oxford Robotics ]

As a part of the AAAI 2021 Spring Symposium Series, HEBI Robotics was invited to present an Industry Talk on the symposium's topic: Machine Learning for Mobile Robot Navigation in the Wild. Included in this presentation was a short case study on one of our upcoming mobile robots that is being designed to successfully navigate unstructured environments where today's robots struggle.

[ HEBI Robotics ]

Thanks Hardik!

This Lockheed Martin Robotics Seminar is from Chad Jenkins at the University of Michigan, on “Semantic Robot Programming… and Maybe Making the World a Better Place.”

I will present our efforts towards accessible and general methods of robot programming from the demonstrations of human users. Our recent work has focused on Semantic Robot Programming (SRP), a declarative paradigm for robot programming by demonstration that builds on semantic mapping. In contrast to procedural methods for motion imitation in configuration space, SRP is suited to generalize user demonstrations of goal scenes in workspace, such as for manipulation in cluttered environments. SRP extends our efforts to crowdsource robot learning from demonstration at scale through messaging protocols suited to web/cloud robotics. With such scaling of robotics in mind, prospects for cultivating both equal opportunity and technological excellence will be discussed in the context of broadening and strengthening Title IX and Title VI.
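
As a rough illustration of the declarative-versus-procedural distinction Jenkins draws (this is not his SRP implementation; every name and data structure below is hypothetical), a goal scene can be expressed as data about where objects should end up, leaving perception and planning to figure out how to get there:

# Illustrative sketch only: contrast between a procedural demonstration (a recorded
# joint-space trajectory to replay) and a declarative goal scene (desired object
# poses in the workspace that any planner may satisfy). All names are hypothetical.

procedural_demo = [                      # configuration-space waypoints; brittle if
    (0.10, -0.52, 1.31, 0.00, 0.77, 0.0),    # the scene differs from the demonstration
    (0.32, -0.40, 1.10, 0.05, 0.70, 0.0),
]

goal_scene = {                           # workspace goal: where objects should end up
    "mug":   {"on": "shelf", "pose_xyz": (0.40, 0.10, 0.90)},
    "plate": {"on": "rack",  "pose_xyz": (0.55, -0.20, 0.85)},
}

def objects_to_move(current_scene, goal_scene, tol=0.02):
    """Return the objects whose observed pose differs from the goal scene."""
    moves = []
    for name, goal in goal_scene.items():
        cur = current_scene.get(name)
        if cur is None:
            continue
        err = max(abs(c - g) for c, g in zip(cur["pose_xyz"], goal["pose_xyz"]))
        if err > tol or cur.get("on") != goal.get("on"):
            moves.append(name)
    return moves

# A semantic map built from perception would supply current_scene at run time;
# a motion planner would then be called once per object in objects_to_move().
current_scene = {"mug": {"on": "table", "pose_xyz": (0.10, 0.30, 0.75)},
                 "plate": {"on": "rack", "pose_xyz": (0.55, -0.20, 0.85)}}
print(objects_to_move(current_scene, goal_scene))   # -> ['mug']

The appeal of the declarative form is that the same goal scene remains valid even when the initial clutter differs from the demonstration.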

[ UMD ]


#439081 Classify This Robot-Woven Sneaker With ...

For athletes trying to run fast, the right shoe can be essential to achieving peak performance. For athletes trying to run as fast as humanly possible, a runner’s shoe can also become a work of individually customized engineering.

This is why Adidas has married 3D printing with robotic automation in a mass-market footwear project it calls Futurecraft.Strung, expected to be available for purchase as soon as later this year. Using a customized, 3D-printed sole, a Futurecraft.Strung manufacturing robot can place some 2,000 threads from up to 10 different sneaker yarns in one upper section of the shoe.

Skylar Tibbits, founder and co-director of the Self-Assembly Lab and associate professor in MIT's Department of Architecture, says that because of its small scale, footwear has been an area of focus for 3D printing and additive manufacturing, which involves adding material bit by bit.

“There are really interesting complex geometry problems,” he says. “It’s pretty well suited.”

Photo: Adidas

Beginning with a 3D-printed sole, Adidas robots weave together some 2,000 threads from up to 10 different sneaker yarns to make one Futurecraft.Strung shoe—expected on the marketplace later this year or sometime in 2022.

Adidas began working on the Futurecraft.Strung project in 2016. Then two years later, Adidas Futurecraft, the company’s innovation incubator, began collaborating with digital design studio Kram/Weisshaar. In less than a year the team built the software and hardware for the upper part of the shoe, called Strung uppers.

“Most 3D printing in the footwear space has been focused on the midsole or outsole, like the bottom of the shoe,” Tibbits explains. But now, he says, Adidas is bringing robotics and a threaded design to the upper part of the shoe. The company bases its Futurecraft.Strung design on high-resolution scans of how runners’ feet move as they travel.

This more flexible design can benefit athletes in multiple sports, according to an Adidas blog post. It will be able to use motion capture of an athlete’s foot and feedback from the athlete to tailor the design to the athlete’s specific gait. Adidas customizes the weaving of the shoe’s “fabric” (really more like an elaborate woven string figure, a cat’s cradle to fit the foot) to achieve a close and comfortable fit, the company says.

What they call their “4D sole” consists of a design combining 3D printing with materials that can change their shape and properties over time. In fact, Tibbits coined the term 4D printing to describe this process in 2013. The company takes customized data from the Adidas Athlete Intelligent Engine to make the shoe, according to Kram/Weisshaar’s website.

Photo: Adidas

Closeup of the weaving process behind a Futurecraft.Strung shoe

“With Strung for the first time, we can program single threads in any direction, where each thread has a different property or strength,” Fionn Corcoran-Tadd, an innovation designer at Adidas’ Futurecraft lab, said in a company video. Each thread serves a purpose, the video noted. “This is like customized string art for your feet,” Tibbits says.

Although the robotics technology the company uses has been around for many years, what Adidas’s robotic weavers can achieve with thread is a matter of elaborate geometry. “It’s more just like a really elegant way to build up material combining robotics and the fibers and yarns into these intricate and complex patterns,” he says.

Robots can of course create patterns with more precision than someone winding thread by hand, and they can rapidly and reliably change the yarn and color of the fabric pattern. Adidas says it can make a single upper in 45 minutes and a pair of sneakers in 1 hour and 30 minutes. It plans to reduce this time to minutes in the months ahead, the company said.

An Adidas spokesperson says sneakers incorporating the Futurecraft.Strung uppers design are a prototype, but the company plans to bring a Strung shoe to market in late 2021 or 2022. However, Adidas Futurecraft sneakers are currently available with a 3D-printed midsole.

Adidas plans to continue gathering data from athletes to customize the uppers of sneakers. “We’re building up a library of knowledge and it will get more interesting as we aggregate data of testing and from different athletes and sports,” the Adidas Futurecraft team writes in a blog post. “The more we understand about how data can become design code, the more we can take that and apply it to new Strung textiles. It’s a continuous evolution.”


#439055 Stretch Is Boston Dynamics’ Take on a ...

Today, Boston Dynamics is announcing Stretch, a mobile robot designed to autonomously move boxes around warehouses. At first glance, you might be wondering why the heck this is a Boston Dynamics robot at all, since the dynamic mobility that we associate with most of their platforms is notably absent. The combination of strength and speed in Stretch’s arm is something we haven’t seen before in a mobile robot, and it’s what makes this a unique and potentially exciting entry into the warehouse robotics space.

Useful mobile manipulation in any environment that’s not almost entirely structured is still a significant challenge in robotics, and it requires a very difficult combination of sensing, intelligence, and dynamic motion, all of which are classic Boston Dynamics. But also classic Boston Dynamics is building really cool platforms, and only later trying to figure out a way of making them commercially viable. So why Stretch, why boxes, why now, and (the real question) why not Handle? We talk with Boston Dynamics’ Vice President of Product Engineering Kevin Blankespoor to find out.

Stretch is very explicitly a box-handling mobile robot for relatively well structured warehouses. It’s in no way designed to be the kind of generalist that many of Boston Dynamics’ other robots are. And to be fair, this is absolutely how to make a robot that’s practical and cost effective right out of the crate: Identify a task that is dull or dirty or dangerous for humans, design a robot to do that task safely and efficiently, and deploy it with the expectation that it’ll be really good at that task but not necessarily much else. This is a very different approach than a robot like Spot, where the platform came first and the practical applications came later—with Stretch, it’s all about that specific task in a specific environment.

There are already robotic solutions for truck unloading, palletizing, and depalletizing, but Stretch seems to be uniquely capable. For truck unloading, the highest performance systems that I’m aware of are monstrous things (here’s one example from Honeywell) that use a ton of custom hardware to just sort of ingest the cargo within a trailer all at once. In a highly structured and predictable warehouse, this sort of thing may pay off over the long term, but it’s going to be extremely expensive and not very versatile at all.

Palletizing and depalletizing robots are much more common in warehouses today. They’re almost always large industrial arms surrounded by a network of custom conveyor belts and whatnot, suffering from the same sorts of constraints as a truck unloader—very capable in some situations, but generally high cost and low flexibility.

Photo: Boston Dynamics

Stretch is probably not going to be able to compete with either of these types of dedicated systems when it comes to sheer speed, but it offers lots of other critical advantages: It’s fast and easy to deploy, easy to use, and adaptable to a variety of different tasks without costly infrastructure changes. It’s also very much not Handle, which was Boston Dynamics’ earlier (although not that much earlier) attempt at a box-handling robot for warehouses, and (let’s be honest here) a much more Boston Dynamics-y thing than Stretch seems to be. To learn more about why the answer is Stretch rather than Handle, and how Stretch will fit into the warehouse of the very near future, we spoke with Kevin Blankespoor, Boston Dynamics’ VP of Product Engineering and chief engineer for both Handle and Stretch.

IEEE Spectrum: Tell me about Stretch!

Kevin Blankespoor: Stretch is the first mobile robot that we’ve designed specifically for the warehouse. It’s all about moving boxes. Stretch is a flexible robot that can move throughout the warehouse and do different tasks. During a typical day in the life of Stretch in the future, it might spend the morning on the inbound side of the warehouse unloading boxes from trucks. It might spend the afternoon in the aisles of the warehouse building up pallets to go to retailers and e-commerce facilities, and it might spend the evening on the outbound side of the warehouse loading boxes into the trucks. So, it really goes to where the work is.

There are already other robots that include truck unloading robots, palletizing and depalletizing robots, and mobile bases with arms on them. What makes Boston Dynamics the right company to introduce a new robot in this space?

We definitely thought through this, because there are already autonomous mobile robots [AMRs] out there. Most of them, though, are more like pallet movers or tote movers—they don't have an arm, and most of them are really just about moving something from point A to point B without manipulation capability. We've seen some experiments where people put arms on AMRs, but nothing that's made it very far in the market. And so when we started looking at Stretch, we realized we really needed to make a custom robot, and that it was something we could do quickly.

“We got a lot of interest from people who wanted to put Atlas to work in the warehouse, but we knew that we could build a simpler robot to do some of those same tasks.”

Stretch is built with pieces from Spot and Atlas and that gave us a big head start. For example, if you look at Stretch’s vision system, it's 2D cameras, depth sensors, and software that allows it to do obstacle detection, box detection, and localization. Those are all the same sensors and software that we've been using for years on our legged robots. And if you look closely at Stretch’s wrist joints, they're actually the same as Spot’s hips. They use the same electric motors, the same gearboxes, the same sensors, and they even have the same closed-loop controller controlling the joints.

If you were to buy an existing industrial robot arm with this kind of performance, it would be about four times heavier than the arm we built, and it's really hard to make that into a mobile robot. A lot of this came from our leg technology because it’s so important for our leg designs to be lightweight for the robots to balance. We took that same strength to weight advantage that we have, and built it into this arm. We're able to rapidly piece together things from our other robots to get us out of the gate quickly, so even though this looks like a totally different robot, we think we have a good head start going into this market.

At what point did you decide to go with an arm on a statically stable base on Stretch, rather than something more, you know, dynamic-y?

Stretch looks really different than the robots that Boston Dynamics has done in the past. But you'd be surprised how much similarity there is between our legged robots and Stretch under the hood. Looking back, we actually got our start on moving boxes with Atlas, and at that point it was just research and development. We were really trying to do force control for box grasping. We were picking up heavy boxes and maintaining balance and working on those fundamentals. We released a video of that as our first next-gen Atlas video, and it was interesting. We got a lot of interest from people who wanted to put Atlas to work in the warehouse, but we knew that we could build a simpler robot to do some of those same tasks.

So at this point we actually came up with Handle. The intent of Handle was to do a couple things—one was, we thought we could build a simpler robot that had Atlas’ attributes. Handle has a small footprint so it can fit in tight spaces, but it can pick up heavy boxes. And in addition to that, we had always really wanted to combine wheels and legs. We’d been talking about doing that for a decade and so Handle was a chance for us to try it.

We built a couple versions of Handle, and the first one was really just a prototype to kind of explore the morphology. But the second one was more purpose-built for warehouse tasks, and we started building pallets with that one and it looked pretty good. And then we started doing truck unloading with Handle, which was the pivotal moment. Handle could do it, but it took too long. Every time Handle grasped a box, it would have to roll back and then get to a place where it could spin itself to face forward and place the box, and trucks are very tight for a robot this size, so there's not a lot of room to maneuver. We knew the whole time that there was a robot like Stretch that was another alternative, but that's really when it became clear that Stretch would have a lot of advantages, and we started working on it about a year ago.

Stretch is certainly impressive in a practical way, but I’ll admit to really hoping that something like Handle could have turned out to be a viable warehouse robot.

I love the Handle project as well, and I’m very passionate about that robot. And there was a stage before we built Stretch where we thought, “this would be pretty standard looking compared to Handle, is it going to capture enough of the Boston Dynamics secret sauce?” But when you actually dissect all the problems within Stretch that you have to tackle, there are a lot of cool robotics problems left in there—the vision system, the planning, the manipulation, the grasping of the boxes—it's a lot harder to solve than it looks, and we're excited that we're actually getting fairly far down that road now.

What happens to Handle now?

Stretch has really taken over our team as far as warehouse products go. Handle we still use occasionally as a research robot, but it’s not actively under development. Stretch is really Handle’s descendant. Handle’s not retired, exactly, but we’re just using it for things like the dance video.

There’s still potential to do cool stuff with Handle. I do think that combining wheels with legs is very cool, and largely unexplored compared to its potential. So I still think that you're gonna see versions of robots combining wheels and legs like Handle, and maybe a version of Handle in the future that does more of that. But because we're switching this thread from research into product, Stretch is really the main focus now.

How autonomous is Stretch?

Stretch is semi-autonomous, and that means it really needs to work with people to tap into its full potential. With truck unloading, for example, a person will drive Stretch into the back of the truck and then basically point Stretch in the right direction and say go. And from that point on, everything’s autonomous. Stretch has its vision system and its mobility and it can detect all the boxes, grasp all boxes, and move them onto a conveyor all autonomously. This is something that takes people hours to do manually, and Stretch can go all the way until it gets to the last box, and the truck is empty. There are some parts of the truck unloading task that do require people, like verifying that the truck is in the right place and opening the doors. But this takes a person just a few minutes, and then the robot can spend hours or as long as it takes to do its job autonomously.

There are also other tasks in the warehouse where the autonomy will increase in the future. After truck unloading, the second thing we’ll take on is order building, which will be more in the aisles of a warehouse. For that, Stretch will be navigating around the warehouse, finding the right pallet it needs to take a box from, and loading it onto a new pallet. This will be a different model with more autonomy; you’ll still have people involved to some degree, but the robot will have a higher percentage of the time where it can work independently.

What kinds of constraints is Stretch operating under? Do the boxes all have to be stacked neatly in the back of the truck, do they have to be the same size, the same color, etc?

“This will be a different model with more autonomy. You’ll still have people involved to some degree, but the robot will have a higher percentage of the time where it can work independently.”

If you think about manufacturing, where there's been automation for decades, you can go into a modern manufacturing facility and there are robot arms and conveyors and other machines. But if you look at the actual warehouse space, 90+ percent is manually operated, and that's because of what you just asked about—things that are less structured, where there’s more variety, and it's more challenging for a robot. But this is starting to change. This is really, really early days, and you’re going to be seeing a lot more robots in the warehouse space.

The warehouse robotics industry is going to grow a lot over the next decade, and a lot of that boils down to vision—the ability for robots to navigate and to understand what they’re seeing. Actually seeing boxes in real world scenarios is challenging, especially when there's a lot of variety. We've been testing our machine learning-based box detection system on Pick for a few years now, and it's gotten far enough that we know it’s one of the technical hurdles you need to overcome to succeed in the warehouse.

Can you compare the performance of Stretch to the performance of a human in a box-unloading task?

Stretch can move cases up to 50 pounds, which is the OSHA limit for how much a single person's allowed to move. The peak case rate for Stretch is 800 cases per hour. You really need to keep up with the flow of goods throughout the warehouse, and 800 cases per hour should be enough for most applications. This is similar to a really good human; most humans are probably slower, and it’s hard for a human to sustain that rate, and one of the big issues with people doing these jobs is injury rates. Imagine moving really heavy boxes all day, and having to reach up high or bend down to get them—injuries are really common in this area. Truck unloading is one of the hardest jobs in a warehouse, and that’s one of the reasons we’re starting there with Stretch.

Is Stretch safe for humans to be around?

We looked at using collaborative robot arms for Stretch, but they don’t have the combination of strength and speed and reach to do this task. That’s partially just due to the laws of physics—if you want to move a 50-pound box really fast, that’s a lot of energy there. So, Stretch does need to maintain separation from humans, but it’s pretty safe when it’s operating in the back of a truck.

In the middle of a warehouse, Stretch will have a couple different modes. When it's traveling around, it'll be kind of like an AMR, using a safety-rated lidar to make sure that it slows down or stops as people get closer. If it's parked and the arm is moving, it'll do the same thing, monitoring anyone getting close and either slowing down or stopping.
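
Blankespoor doesn't give numbers, but the behavior he describes, slowing and then stopping as people approach, is a standard distance-based speed-scaling pattern. A minimal sketch, with invented thresholds that are not Boston Dynamics' actual safety parameters:

# Hypothetical sketch of distance-based speed scaling; the zone thresholds and
# speed caps are invented for illustration only.
SLOW_ZONE_M = 3.0    # within this range of a person, cap the speed
STOP_ZONE_M = 1.0    # within this range, stop entirely

def speed_limit(nearest_person_m: float, nominal_speed: float) -> float:
    """Return the allowed speed given the closest person detected by the lidar."""
    if nearest_person_m <= STOP_ZONE_M:
        return 0.0
    if nearest_person_m <= SLOW_ZONE_M:
        # scale linearly from 0 at the stop zone up to nominal at the slow-zone edge
        frac = (nearest_person_m - STOP_ZONE_M) / (SLOW_ZONE_M - STOP_ZONE_M)
        return nominal_speed * frac
    return nominal_speed

for d in (5.0, 2.0, 0.5):
    print(d, "m ->", round(speed_limit(d, 2.0), 2), "m/s")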

How do you see Stretch interacting with other warehouse robots?

For building pallet orders, we can do that in a couple of different ways, and we’re experimenting with partners in the AMR space. So you might have an AMR that moves the pallet around and then rendezvous with Stretch, and Stretch does the manipulation part and moves boxes onto the pallet, and then the AMR scuttles off to the next rendezvous point where maybe a different Stretch meets it. We’re developing prototypes of that behavior now with a few partners. Another way to do it is Stretch can actually pull the pallet around itself and do both tasks. There are two fundamental things that happen in the warehouse: there's movement of goods, and there's manipulation of goods, and Stretch can do both.

You’re aware that Hello Robot has a mobile manipulator called Stretch, right?

Great minds think alike! We know Aaron [Edsinger] from the Google days; we all used to be in the same company, and he’s a great guy. We’re in very different applications and spaces, though—Aaron’s robot is going into research and maybe a little bit into the consumer space, while this robot is on a much bigger scale aimed at industrial applications, so I think there’s actually a lot of space between our robots, in terms of how they’ll be used.

Editor’s Note: We did check in with Aaron Edsinger at Hello Robot, and he sees things a little bit differently. “We're disappointed they chose our name for their robot,” Edsinger told us. “We're seriously concerned about it and considering our options.” We sincerely hope that Boston Dynamics and Hello Robot can come to an amicable solution on this.

What’s the timeline for commercial deployment of Stretch?

This is a prototype of the Stretch robot, and anytime we design a new robot, we always like to build a prototype as quickly as possible so we can figure out what works and what doesn't work. We did that with our bipeds and quadrupeds as well. So, we get an early look at what we need to iterate, because any time you build the first thing, it's not the right thing, and you always need to make changes to get to the final version. We've got about six of those Stretch prototypes operating now. In parallel, our hardware team is finishing up the design of the productized version of Stretch. That version of Stretch looks a lot like the prototype, but every component has been redesigned from the ground up to be manufacturable, to be reliable, and to be higher performance.

For the productized version of Stretch, we’ll build up the first units this summer, and then it’ll go on sale next year. So this is kind of a sneak peek into what the final product will be.

How much does it cost, and will you be selling Stretch, or offering it as a service?

We’re not quite ready to talk about cost yet, but it’ll be cost effective, and similar in cost to existing systems if you were to combine an industrial robot arm, custom gripper, and mobile base. We’re considering both selling and leasing as a service, but we’re not quite ready to narrow it down yet.

Photo: Boston Dynamics

As with all mobile manipulators, what Stretch can do long-term is constrained far more by software than by hardware. With a fast and powerful arm, a mobile base, a solid perception system, and 16 hours of battery life, you can imagine how different grippers could enable all kinds of different capabilities. But we’re getting ahead of ourselves, because it’s a long, long way from getting a prototype to work pretty well to getting robots into warehouses in a way that’s commercially viable long-term, even when the use case is as clear as it seems to be for Stretch.

Stretch also could signal a significant shift in focus for Boston Dynamics. While Blankespoor’s comments about Stretch leveraging Boston Dynamics’ expertise with robots like Spot and Atlas are well taken, Stretch is arguably the most traditional robot that the company has designed, and they’ve done so specifically to be able to sell robots into industry. This is what you do if you’re a robotics company that wants to make money by selling robots commercially, which (historically) has not been what Boston Dynamics is all about. Despite its bonkers valuation, Boston Dynamics ultimately needs to make money, and robots like Stretch are a good way to do it. With that in mind, I wouldn’t be surprised to see more robots like this from Boston Dynamics—robots that leverage the company’s unique technology, but that are designed to do commercially useful tasks in a somewhat less flashy way. And if this strategy keeps Boston Dynamics around (while funding some occasional creative craziness), then I’m all for it.


#438982 Quantum Computing and Reinforcement ...

Deep reinforcement learning is having a superstar moment.

Powering smarter robots. Simulating human neural networks. Trouncing physicians at medical diagnoses and crushing humanity’s best gamers at Go and Atari. While far from achieving the flexible, quick thinking that comes naturally to humans, this powerful machine learning idea seems unstoppable as a harbinger of better thinking machines.

Except there’s a massive roadblock: these algorithms take forever to run. Because they are based on trial and error, a reinforcement learning AI “agent” only learns after being rewarded for its correct decisions. For complex problems, the time it takes an AI agent to try and fail its way to a solution can quickly become untenable.
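
To see why, consider a toy classical trial-and-error learner: an epsilon-greedy agent on a k-armed bandit (a textbook setup, not the paper's experiment). The agent can only update its estimates after acting and receiving a reward, and the number of trials it needs balloons as the number of options grows:

import random

# Toy illustration of classical trial-and-error learning; all parameters are arbitrary.
def train(n_actions=20, steps=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    true_reward = [rng.random() for _ in range(n_actions)]    # hidden payoff of each action
    estimates, counts = [0.0] * n_actions, [0] * n_actions
    for _ in range(steps):
        if rng.random() < epsilon:                            # explore: try something new
            a = rng.randrange(n_actions)
        else:                                                 # exploit: best guess so far
            a = max(range(n_actions), key=lambda i: estimates[i])
        reward = 1.0 if rng.random() < true_reward[a] else 0.0
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]   # incremental average
    picked = max(range(n_actions), key=lambda i: estimates[i])
    return true_reward[picked], max(true_reward)

# After thousands of trials, the agent's favorite action is usually close to the best one.
print(train())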

But what if you could try multiple solutions at once?

This week, an international collaboration led by Dr. Philip Walther at the University of Vienna took the “classic” concept of reinforcement learning and gave it a quantum spin. They designed a hybrid AI that relies on both quantum and run-of-the-mill classic computing, and showed that—thanks to quantum quirkiness—it could simultaneously screen a handful of different ways to solve a problem.

The result is a reinforcement learning AI that learned over 60 percent faster than its non-quantum-enabled peers. This is one of the first tests that shows adding quantum computing can speed up the actual learning process of an AI agent, the authors explained.

Although only challenged with a “toy problem” in the study, the hybrid AI, once scaled, could impact real-world problems such as building an efficient quantum internet. The setup “could readily be integrated within future large-scale quantum communication networks,” the authors wrote.

The Bottleneck
Learning from trial and error comes intuitively to our brains.

Say you’re trying to navigate a new convoluted campground without a map. The goal is to get from the communal bathroom back to your campsite. Dead ends and confusing loops abound. We tackle the problem by deciding to turn either left or right at every branch in the road. One will get us closer to the goal; the other leads to a half hour of walking in circles. Eventually, our brain chemistry rewards correct decisions, so we gradually learn the correct route. (If you’re wondering…yeah, true story.)

Reinforcement learning AI agents operate in a similar trial-and-error way. As a problem becomes more complex, the number of trials—and the time each one takes—also skyrockets.

“Even in a moderately realistic environment, it may simply take too long to rationally respond to a given situation,” explained study author Dr. Hans Briegel at the Universität Innsbruck in Austria, who previously led efforts to speed up AI decision-making using quantum mechanics. If there’s pressure that allows “only a certain time for a response, an agent may then be unable to cope with the situation and to learn at all,” he wrote.

Many attempts have been made to speed up reinforcement learning. Giving the AI agent a short-term “memory.” Tapping into neuromorphic computing, which better resembles the brain. In 2014, Briegel and colleagues showed that a “quantum brain” of sorts can help propel an AI agent’s decision-making process after learning. But speeding up the learning process itself has eluded our best attempts.

The Hybrid AI
The new study went straight for that previously untenable jugular.

The team’s key insight was to tap into the best of both worlds—quantum and classical computing. Rather than building an entire reinforcement learning system using quantum mechanics, they turned to a hybrid approach that could prove to be more practical. Here, the AI agent uses quantum weirdness as it’s trying out new approaches—the “trial” in trial and error. The system then passes the baton to a classical computer to give the AI its reward—or not—based on its performance.

At the heart of the quantum “trial” process is a quirk called superposition. Stay with me. Our computers are powered by electrons, which can represent only two states—0 or 1. Quantum mechanics is far weirder, in that photons (particles of light) can simultaneously be both 0 and 1, with a slightly different probability of “leaning towards” one or the other.
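
In standard quantum notation (textbook material, not anything specific to this study), that “leaning” is captured by the amplitudes of a superposition state |ψ⟩ = α|0⟩ + β|1⟩, where |α|² + |β|² = 1 and |α|² and |β|² are the probabilities of reading out 0 or 1 when the photon is finally measured.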

This noncommittal oddity is part of what makes quantum computing so powerful. Take our reinforcement learning example of navigating a new campsite. In our classic world, we—and our AI—need to decide between turning left or right at an intersection. In a quantum setup, however, the AI can (in a sense) turn left and right at the same time. So when searching for the correct path back to home base, the quantum system has a leg up in that it can simultaneously explore multiple routes, making it far faster than conventional, consecutive trial and error.

“As a consequence, an agent that can explore its environment in superposition will learn significantly faster than its classical counterpart,” said Briegel.

It’s not all theory. To test out their idea, the team turned to a programmable chip called a nanophotonic processor. Think of it as a CPU-like computer chip, but it processes particles of light—photons—rather than electricity. These light-powered chips have been a long time in the making. Back in 2017, for example, a team from MIT built a fully optical neural network into an optical chip to bolster deep learning.

The chips aren’t all that exotic. Nanophotonic processors act kind of like our eyeglasses, which can carry out complex calculations that transform light that passes through them. In the glasses case, they let people see better. For a light-based computer chip, it allows computation. Rather than using electrical cables, the chips use “wave guides” to shuttle photons and perform calculations based on their interactions.

The “error” or “reward” part of the new hardware comes from a classical computer. The nanophotonic processor is coupled to a traditional computer, where the latter provides the quantum circuit with feedback—that is, whether to reward a solution or not. This setup, the team explains, allows them to more objectively judge any speed-ups in learning in real time.

In this way, a hybrid reinforcement learning agent alternates between quantum and classical computing, trying out ideas in wibbly-wobbly “multiverse” land while obtaining feedback in grounded, classic physics “normality.”

A Quantum Boost
In simulations using 10,000 AI agents and actual experimental data from 165 trials, the hybrid approach, when challenged with a more complex problem, showed a clear leg up.

The key word is “complex.” The team found that if an AI agent has a high chance of figuring out the solution anyway—as for a simple problem—then classical computing works pretty well. The quantum advantage blossoms when the task becomes more complex or difficult, allowing quantum mechanics to fully flex its superposition muscles. For these problems, the hybrid AI was 63 percent faster at learning a solution compared to traditional reinforcement learning, decreasing its learning effort from 270 guesses to 100.

Now that scientists have shown a quantum boost for reinforcement learning speeds, the race for next-generation computing is even more lit. Photonics hardware required for long-range light-based communications is rapidly shrinking, while improving signal quality. The partial-quantum setup could “aid specifically in problems where frequent search is needed, for example, network routing problems” that are prevalent in a smooth-running internet, the authors wrote. With a quantum boost, reinforcement learning may be able to tackle far more complex problems—those in the real world—than currently possible.

“We are just at the beginning of understanding the possibilities of quantum artificial intelligence,” said lead author Walther.

Image Credit: Oleg Gamulinskiy from Pixabay


#438738 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
A New Artificial Intelligence Makes Mistakes—on Purpose
Will Knight | Wired
“It took about 50 years for computers to eviscerate humans in the venerable game of chess. A standard smartphone can now play the kind of moves that make a grandmaster’s head spin. But one artificial intelligence program is taking a few steps backward, to appreciate how average humans play—blunders and all.”

CRYPTOCURRENCY
Bitcoin’s Price Rises to $50,000 as Mainstream Institutions Hop On
Timothy B. Lee | Ars Technica
“Bitcoin’s price is now far above the previous peak of $19,500 reached in December 2017. Bitcoin’s value has risen by almost 70 percent since the start of 2021. No single factor seems to be driving the cryptocurrency’s rise. Instead, the price is rising as more and more mainstream organizations are deciding to treat it as an ordinary investment asset.”

SCIENCE
Million-Year-Old Mammoth Teeth Contain Oldest DNA Ever Found
Jeanne Timmons | Gizmodo
“An international team of scientists has sequenced DNA from mammoth teeth that is at least a million years old, if not older. This research, published today in Nature, not only provides exciting new insight into mammoth evolutionary history, it reveals an entirely unknown lineage of ancient mammoth.”

SCIENCE
Scientists Accidentally Discover Strange Creatures Under a Half Mile of Ice
Matt Simon | Wired
“‘It’s like, bloody hell!’ Smith says. ‘It’s just one big boulder in the middle of a relatively flat seafloor. It’s not as if the seafloor is littered with these things.’ Just his luck to drill in the only wrong place. Wrong place for collecting seafloor muck, but the absolute right place for a one-in-a-million shot at finding life in an environment that scientists didn’t reckon could support much of it.”

BIOTECH
Highest-Resolution Images of DNA Reveal It’s Surprisingly Jiggly
George Dvorsky | Gizmodo
“Scientists have captured the highest-resolution images ever taken of DNA, revealing previously unseen twisting and squirming behaviors. …These hidden movements were revealed by computer simulations fed with the highest-resolution images ever taken of a single molecule of DNA. The new study is exposing previously unseen behaviors in the self-replicating molecule, and this research could eventually lead to the development of powerful new genetic therapies.”

TRANSPORTATION
The First Battery-Powered Tanker Is Coming to Tokyo
Maria Gallucci | IEEE Spectrum
“The Japanese tanker is Corvus’s first fully-electric coastal freighter project; the company hopes the e5 will be the first of hundreds more just like it. ‘We see it [as] a beachhead for the coastal shipping market globally,’ Puchalski said. ‘There are many other coastal freighter types that are similar in size and energy demand.’ The number of battery-powered ships has ballooned from virtually zero a decade ago to hundreds worldwide.”

SPACE
Report: NASA’s Only Realistic Path for Humans on Mars Is Nuclear Propulsion
Eric Berger | Ars Technica
“Conducted at the request of NASA, a broad-based committee of experts assessed the viability of two means of propulsion—nuclear thermal and nuclear electric—for a human mission launching to Mars in 2039. ‘One of the primary takeaways of the report is that if we want to send humans to Mars, and we want to do so repeatedly and in a sustainable way, nuclear space propulsion is on the path,’ said [JPL’s] Bobby Braun.”

NASA’s Perseverance Rover Successfully Lands on Mars
Joey Roulette | The Verge
“Perseverance hit Mars’ atmosphere on time at 3:48PM ET at speeds of about 12,100 miles per hour, diving toward the surface in an infamously challenging sequence engineers call the ‘seven minutes of terror.’ With an 11-minute comms delay between Mars and Earth, the spacecraft had to carry out its seven-minute plunge all by itself with a wickedly complex set of pre-programmed instructions.”

ENVIRONMENT
A First-of-Its-Kind Geoengineering Experiment Is About to Take Its First Step
James Temple | MIT Technology Review
“When I visited Frank Keutsch in the fall of 2019, he walked me down to the lab, where the tube, wrapped in gray insulation, ran the length of a bench in the back corner. By filling it with the right combination of gases, at particular temperatures and pressures, Keutsch and his colleagues had simulated the conditions some 20 kilometers above Earth’s surface. In testing how various chemicals react in this rarefied air, the team hoped to conduct a crude test of a controversial scheme known as solar geoengineering.”

Image Credit: Garcia / Unsplash
