
#439804 How Quantum Computers Can Be Used to ...

Using computer simulations to design new chips played a crucial role in the rapid improvements in processor performance we’ve experienced in recent decades. Now Chinese researchers have extended the approach to the quantum world.

Electronic design automation tools started to become commonplace in the early 1980s as the complexity of processors rose exponentially, and today they are an indispensable tool for chip designers.

More recently, Google has been turbocharging the approach by using artificial intelligence to design the next generation of its AI chips. This holds the promise of setting off a process of recursive self-improvement that could lead to rapid performance gains for AI.

Now, New Scientist has reported on a team from the University of Science and Technology of China in Shanghai that has applied the same ideas to another emerging field of computing: quantum processors. In a paper posted to the arXiv pre-print server, the researchers describe how they used a quantum computer to design a new type of qubit that significantly outperformed their previous design.

“Simulations of high-complexity quantum systems, which are intractable for classical computers, can be efficiently done with quantum computers,” the authors wrote. “Our work opens the way to designing advanced quantum processors using existing quantum computing resources.”

At the heart of the idea is the fact that the complexity of quantum systems grows exponentially as they increase in size. As a result, even the most powerful supercomputers struggle to simulate fairly small quantum systems.
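To get a rough feel for that scaling, here is a back-of-the-envelope sketch (an illustration, not a figure from the paper): an n-qubit state vector holds 2^n complex amplitudes, so simply storing it exhausts classical memory long before you reach the size of today's quantum processors.

```python
# Memory needed to store a full n-qubit state vector:
# 2**n complex amplitudes at 16 bytes (complex128) each.
for n in (20, 30, 40, 53):
    nbytes = (2 ** n) * 16
    print(f"{n:2d} qubits: {nbytes / 1e9:,.4g} GB")

# 53 qubits -- the size of Google's Sycamore chip -- already needs
# roughly 144 petabytes just to hold the state vector.
```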

This was the basis for Google’s groundbreaking display of “quantum supremacy” in 2019. The company’s researchers used a 53-qubit processor to run a random quantum circuit a million times and showed that it would take roughly 10,000 years to simulate the experiment on the world’s fastest supercomputer.

This means that using classical computers to help in the design of new quantum computers is likely to hit fundamental limits pretty quickly. Using a quantum computer, however, sidesteps the problem because it can exploit the same oddities of the quantum world that make the problem complex in the first place.

This is exactly what the Chinese researchers did. They used an algorithm called a variational quantum eigensolver to simulate the kind of superconducting electronic circuit found at the heart of a quantum computer. This was used to explore what happens when certain energy levels in the circuit are altered.
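The circuit Hamiltonian in the paper is far more involved, but the algorithmic loop is easy to sketch. Below is a minimal, self-contained variational quantum eigensolver on a toy two-level Hamiltonian (a sketch for illustration, not the authors' code): a classical optimizer tunes the parameters of a trial quantum state to minimize its energy. On real hardware, the expectation value would be estimated from repeated measurements of the device rather than computed with exact linear algebra.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the superconducting-circuit Hamiltonian: H = Z + 0.5 * X.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = Z + 0.5 * X

def ansatz(theta):
    """Variational trial state |psi(theta)> = Ry(theta)|0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(params):
    """Energy <psi|H|psi>; a quantum processor would estimate this
    number from measurement statistics instead of exact algebra."""
    psi = ansatz(params[0])
    return float(np.real(psi.conj() @ H @ psi))

# The classical outer loop of VQE: adjust parameters to minimize energy.
result = minimize(energy, x0=[0.1], method="COBYLA")
print("VQE estimate:      ", result.fun)
print("exact ground state:", np.linalg.eigvalsh(H)[0])
```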

Normally this kind of experiment would require them to build large numbers of physical prototypes and test them, but instead the team was able to rapidly model the impact of the changes. The upshot was that the researchers discovered a new type of qubit that was more powerful than the one they were already using.

Any two-level quantum system can act as a qubit, but most superconducting quantum computers use transmons, which encode quantum states into the oscillations of electrons. By tweaking the energy levels of their simulated quantum circuit, the researchers were able to discover a new qubit design they dubbed a plasonium.

It is less than half the size of a transmon, and when the researchers fabricated it they found that it holds its quantum state for longer and is less prone to errors. It still works on similar principles to the transmon, so it’s possible to manipulate it using the same control technologies.

The researchers point out that this is only a first prototype; with further optimization, plus the integration of recent progress in new superconducting materials and surface treatment methods, they expect performance to improve even more.

But the new qubit the researchers have designed is probably not their most significant contribution. By demonstrating that even today’s rudimentary quantum computers can help design future devices, they’ve opened the door to a virtuous cycle that could significantly speed innovation in this field.

Image Credit: Pete Linforth from Pixabay


#439127 Cobots Act Like Puppies to Better ...

Human-robot interaction goes both ways. You’ve got robots understanding (or attempting to understand) humans, as well as humans understanding (or attempting to understand) robots. Humans, in my experience, are virtually impossible to understand even under the best of circumstances. But going the other way, robots have all kinds of communication tools at their disposal. Lights, sounds, screens, haptics—there are lots of options. That doesn’t mean that robot to human (RtH) communication is easy, though, because the ideal communication modality is something that is low cost and low complexity while also being understandable to almost anyone.

One good option for something like a collaborative robot arm can be to use human-inspired gestures (since they don't require any additional hardware), although it's important to be careful when you start having robots do human stuff, because it can set unreasonable expectations if people think of the robot in human terms. To get around this, roboticists from RWTH Aachen University are experimenting with animal-like gestures for cobots instead, modeled after the behavior of puppies. Puppies!

For robots that are low-cost and appearance-constrained, animal-inspired (zoomorphic) gestures can be highly effective at state communication. We know this because of tails on Roombas:

While this is an adorable experiment, adding tails to industrial cobots is probably not going to happen. That’s too bad, because humans have an intuitive understanding of dog gestures, and this extends even to people who aren’t dog owners. But tails aren’t necessary for something to display dog gestures; it turns out that you can do it with a standard robot arm:

In a recent paper accepted to IEEE Robotics and Automation Letters (RA-L), first author Vanessa Sauer used puppies to inspire a series of communicative gestures for a Franka Emika Panda arm. Specifically, the arm was to be used in a collaborative assembly task and needed to communicate five states to the human user: greeting the user, prompting the user to take a part, waiting for a new command, signaling an error when a container was empty of parts, and shutting down. From the paper:

For each use case, we mirrored the intention of the robot (e.g., prompting the user to take a part) to an intention a dog may have (e.g., encouraging the owner to play). In a second step, we collected gestures that dogs use to express the respective intention by leveraging real-life interaction with dogs, online videos, and literature. We then translated the dog gestures into three distinct zoomorphic gestures by jointly applying the following guidelines:

Mimicry. We mimic specific dog behavior and body language to communicate robot states.
Exploiting structural similarities. Although the cobot is functionally designed, we exploit certain components to make the gestures more “dog-like,” e.g., the camera corresponds to the dog’s eyes, or the end-effector corresponds to the dog’s snout.
Natural flow. We use kinesthetic teaching and record a full trajectory to allow natural and flowing movements with increased animacy.
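The third guideline, kinesthetic teaching, is conceptually simple: a person physically guides the gravity-compensated arm through the gesture while joint positions are logged, and the robot later replays the log at the original timing. A minimal sketch of that record-and-replay loop is below; the `robot` methods are hypothetical placeholders, not the real Franka control API.

```python
import time

RATE_HZ = 50  # sampling rate for kinesthetic teaching

def record_gesture(robot, duration_s):
    """Log a joint-space trajectory while a person physically guides
    the arm in gravity-compensated free-drive mode."""
    robot.enter_free_drive()  # hypothetical API
    trajectory = []
    for _ in range(int(duration_s * RATE_HZ)):
        trajectory.append(robot.read_joint_positions())  # hypothetical API
        time.sleep(1.0 / RATE_HZ)
    robot.exit_free_drive()
    return trajectory

def play_gesture(robot, trajectory):
    """Replay the logged trajectory at its original timing, preserving
    the natural, flowing quality of the demonstrated motion."""
    for joints in trajectory:
        robot.command_joint_positions(joints)  # hypothetical API
        time.sleep(1.0 / RATE_HZ)
```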

A user study comparing the zoomorphic gestures to a more conventional light display for state communication during the assembly task showed that the zoomorphic gestures were easily recognized by participants as dog-like, even if the participants weren’t dog people. And the zoomorphic gestures were also more intuitively understood than the light displays, although the classification of each gesture wasn’t perfect. People also preferred the zoomorphic gestures over more abstract gestures designed to communicate the same concept. Or as the paper puts it, “Zoomorphic gestures are significantly more attractive and intuitive and provide more joy when using.” An online version of the study is here, so give it a try and provide yourself with some joy.

While zoomorphic gestures (at least in this very preliminary research) aren’t nearly as accurate at state communication as using something like a screen, they’re appealing because they’re compelling, easy to understand, inexpensive to implement, and less restrictive than sounds or screens. And there’s no reason why you can’t use both!

For a few more details, we spoke with the first author on this paper, Vanessa Sauer.

IEEE Spectrum: Where did you get the idea for this research from, and why do you think it hasn't been more widely studied or applied in the context of practical cobots?

Vanessa Sauer: I'm a total dog person. During a conversation about dogs and how their ways of communicating with their owners have evolved over time (e.g., more expressive faces, easy to understand even without owning a dog), I got the rough idea for my research. I was curious to see if this intuitive understanding many people have of dog behavior could also be applied to cobots that communicate in a similar way. Especially in social robotics, approaches utilizing zoomorphic gestures have been explored. I guess that, due to their playful nature, less research and fewer applications have been done in the context of industrial robots, as they often have a stronger focus on efficiency.

How complex of a concept can be communicated in this way?

In our “proof-of-concept” style approach, we used rather basic robot states to be communicated. The challenge with more complex robot states would be to find intuitive parallels in dog behavior. Nonetheless, I believe that more complex states can also be communicated with dog-inspired gestures.

How would you like to see your research be put into practice?

I would enjoy seeing zoomorphic gestures offered as a modality option on cobots, especially cobots used in industry. I think that could have the potential to reduce inhibitions toward collaborating with robots and make the interaction more fun.

Photos, Robots: Franka Emika; Dogs: iStockphoto

Zoomorphic Gestures for Communicating Cobot States, by Vanessa Sauer, Axel Sauer, and Alexander Mertens from RWTH Aachen University and TUM, will be published in RA-L.


#438285 Untethered robots that are better than ...

“Atlas” and “Handle” are just two of the amazing AI robots in the arsenal of Boston Dynamics. Atlas is an untethered whole-body humanoid with human-level dexterity. Handle is the guy for moving boxes in the warehouse. It can also unload …


#439100 Video Friday: Robotic Eyeball Camera

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
RoboCup 2021 – June 22-28, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
Let us know if you have suggestions for next week, and enjoy today's videos.

What if seeing devices looked like us? Eyecam is a prototype exploring the potential future design of sensing devices. Eyecam is a webcam shaped like a human eye that can see, blink, look around and observe us.

And it's open source, so you can build your own!

[ Eyecam ]

Looks like Festo will be turning some of its bionic robots into educational kits, which is a pretty cool idea.

[ Bionics4Education ]

Underwater soft robots are challenging to model and control because of their high degrees of freedom and their intricate coupling with water. In this paper, we present a method that leverages the recent development in differentiable simulation coupled with a differentiable, analytical hydrodynamic model to assist with the modeling and control of an underwater soft robot. We apply this method to Starfish, a customized soft robot design that is easy to fabricate and intuitive to manipulate.

[ MIT CSAIL ]
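The core trick of differentiable simulation is that you can backpropagate through the physics itself. This is not the CSAIL code, just a minimal sketch of the idea in JAX: simulate a 1-D point mass with velocity-proportional drag (a crude stand-in for a real hydrodynamic model), then optimize a control input by taking gradients through the entire rollout.

```python
import jax

DT, STEPS = 0.01, 200
DRAG = 0.8  # crude stand-in for hydrodynamic drag, not the paper's model

def rollout(thrust):
    """Simulate a 1-D point mass with velocity-proportional drag and
    return its final position. Written entirely in JAX primitives, so
    gradients flow through every timestep of the simulation."""
    def step(state, _):
        pos, vel = state
        accel = thrust - DRAG * vel
        return (pos + DT * vel, vel + DT * accel), None
    (pos, _), _ = jax.lax.scan(step, (0.0, 0.0), None, length=STEPS)
    return pos

# Optimize the control input by descending the gradient of a loss
# computed through the entire physics rollout.
target = 1.0

def loss(thrust):
    return (rollout(thrust) - target) ** 2

grad_fn = jax.grad(loss)
thrust = 0.5
for _ in range(100):
    thrust = thrust - 0.1 * grad_fn(thrust)
print("optimized thrust:", thrust, "final position:", rollout(thrust))
```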

Rainbow Robotics, the company that made HUBO, has a new collaborative robot arm.

[ Rainbow Robotics ]

Thanks Fan!

We develop an integrated robotic platform for advanced collaborative robots and demonstrate an application of multiple robots collaboratively transporting an object to different positions in a factory environment. The proposed platform integrates a drone, a mobile manipulator robot, and a dual-arm robot to work autonomously, while also collaborating with a human worker. The platform also demonstrates the potential of a novel manufacturing process, which incorporates adaptive and collaborative intelligence to improve the efficiency of mass customization for the factory of the future.

[ Paper ]

Thanks Poramate!

At Sevastopol State University, a team from the Laboratory of Underwater Robotics and Control Systems, together with the Research and Production Association “Android Technika,” performed tests of an underwater anthropomorphic manipulator robot.

[ Sevastopol State ]

Thanks Fan!

Taiwanese company TCI Gene created a COVID test system based on its fully automated and enclosed gene testing machine, the QVS-96S. The system includes two ABB robots and carries out 1,800 tests per day, operating 24/7. Every hour, 96 virus sample tests are performed with an accuracy of 99.99%.

[ ABB ]

A short video showing how a Halodi Robotics robot can be used in a commercial guarding application.

[ Halodi ]

During the past five years, under the NASA Early Space Innovations program, we have been developing new design optimization methods for underactuated robot hands, aiming to achieve versatile manipulation in highly constrained environments. We have prototyped hands for NASA’s Astrobee robot, an in-orbit assistive free flyer for the International Space Station.

[ ROAM Lab ]

The new, improved OTTO 1500 is a workhorse AMR designed to move heavy payloads through demanding environments faster than any other AMR on the market, with zero compromise to safety.

[ OTTO Motors ]

Very, very high performance sensing and actuation to pull this off.

[ Ishikawa Group ]

We introduce a conversational social robot designed for long-term in-home use to help with loneliness. We present a novel robot behavior design to have simple self-reflection conversations with people to improve wellness, while still being feasible, deployable, and safe.

[ HCI Lab ]

We are one of the 5 winners of the Start-up Challenge. This video illustrates what we achieved during the Swisscom 5G exploration week. Our proof-of-concept tele-excavation system is composed of a Menzi Muck M545 walking excavator automated & customized by Robotic Systems Lab and IBEX motion platform as the operator station. The operator and remote machine are connected for the first time via a 5G network infrastructure which was brought to our test field by Swisscom.

[ RSL ]

This video shows LOLA balancing on different terrain when being pushed in different directions. The robot is technically blind, not using any camera-based or prior information on the terrain (hard ground is assumed).

[ TUM ]

Autonomous driving when you cannot see the road at all because it's buried in snow is some serious autonomous driving.

[ Norlab ]

A hierarchical and robust framework for learning bipedal locomotion is presented and successfully implemented on the 3D biped robot Digit. The feasibility of the method is demonstrated by successfully transferring the learned policy in simulation to the Digit robot hardware, realizing sustained walking gaits under external force disturbances and challenging terrains not included during the training process.

[ OSU ]

This is a video summary of the Center for Robot-Assisted Search and Rescue's deployments under the direction of emergency response agencies to more than 30 disasters in five countries from 2001 (9/11 World Trade Center) to 2018 (Hurricane Michael). It includes the first use of ground robots for a disaster (WTC, 2001), the first use of small unmanned aerial systems (Hurricane Katrina, 2005), and the first use of water surface vehicles (Hurricane Wilma, 2005).

[ CRASAR ]

In March, a team from the Oxford Robotics Institute collected a week of epic off-road driving data, as part of the Sense-Assess-eXplain (SAX) project.

[ Oxford Robotics ]

As a part of the AAAI 2021 Spring Symposium Series, HEBI Robotics was invited to present an Industry Talk on the symposium's topic: Machine Learning for Mobile Robot Navigation in the Wild. Included in this presentation was a short case study on one of our upcoming mobile robots that is being designed to successfully navigate unstructured environments where today's robots struggle.

[ HEBI Robotics ]

Thanks Hardik!

This Lockheed Martin Robotics Seminar is from Chad Jenkins at the University of Michigan, on “Semantic Robot Programming… and Maybe Making the World a Better Place.”

I will present our efforts towards accessible and general methods of robot programming from the demonstrations of human users. Our recent work has focused on Semantic Robot Programming (SRP), a declarative paradigm for robot programming by demonstration that builds on semantic mapping. In contrast to procedural methods for motion imitation in configuration space, SRP is suited to generalize user demonstrations of goal scenes in workspace, such as for manipulation in cluttered environments. SRP extends our efforts to crowdsource robot learning from demonstration at scale through messaging protocols suited to web/cloud robotics. With such scaling of robotics in mind, prospects for cultivating both equal opportunity and technological excellence will be discussed in the context of broadening and strengthening Title IX and Title VI.

[ UMD ]


#439095 DARPA Prepares for the Subterranean ...

The DARPA Subterranean Challenge Final Event is scheduled to take place at the Louisville Mega Cavern in Louisville, Kentucky, from September 21 to 23. We’ve followed SubT teams as they’ve explored their way through abandoned mines, unfinished nuclear reactors, and a variety of caves, and now everything comes together in one final course where the winner of the Systems Track will take home the $2 million first prize.

It’s a fitting reward for teams that have been solving some of the hardest problems in robotics, but winning isn’t going to be easy, and we’ll talk with SubT Program Manager Tim Chung about what we have to look forward to.

Since we haven’t talked about SubT in a little while (what with the unfortunate covid-related cancellation of the Systems Track Cave Circuit), here’s a quick refresher of where we are: the teams have made it through the Tunnel Circuit, the Urban Circuit, and a virtual version of the Cave Circuit, and some of them have been testing in caves of their own. The Final Event will include all of these environments, and the teams of robots will have 60 minutes to autonomously map the course, locating artifacts to score points. Since I’m not sure where on Earth there’s an underground location that combines tunnels and caves with urban structures, DARPA is going to have to get creative, and the location in which they’ve chosen to do that is Louisville, Kentucky.

The Louisville Mega Cavern is a former limestone mine, most of which is under the Louisville Zoo. It’s not all that deep, mostly less than 30 meters under the surface, but it’s enormous: with 370,000 square meters of rooms and passages, the cavern currently hosts (among other things) a business park, a zipline course, and mountain bike trails, because why not. While DARPA is keeping pretty quiet on the details, I’m guessing that they’ll be taking over a chunk of the cavern and filling it with features representing as many of the environmental challenges as they can.

To learn more about how the SubT Final Event is going to go, we spoke with SubT Program Manager Tim Chung. But first, we talked about Tim’s perspective on the success of the Urban Circuit, and how teams have been managing without an in-person Cave Circuit.

IEEE Spectrum: How did the SubT Urban Circuit go?

Tim Chung: On a couple fronts, Urban Circuit was really exciting. We were in this unfinished nuclear power plant—I’d be surprised if any of the competitors had prior experience in such a facility, or anything like it. I think that was illuminating both from an experiential point of view for the competitors, but also from a technology point of view, too.

One thing that I thought was really interesting was that we, DARPA, didn't need to make the venue more challenging. The real world is really that hard. There are places that were just really heinous for these robots to have to navigate through in order to look in every nook and cranny for artifacts. There were corners and doorways and small corridors and all these kind of things that really forced the teams to have to work hard, and the feedback was, why did DARPA have to make it so hard? But we didn’t, and in fact there were places that for the safety of the robots and personnel, we had to ensure the robots couldn’t go.

It sounds like some teams thought this course was on the more difficult side—do you think you tuned it to just the right amount of DARPA-hard?

Our calibration worked quite well. We were able to tease out and help refine and better understand what technologies are both useful and critical and also those technologies that might not necessarily get you the leap ahead capability. So as an example, the Urban Circuit really emphasized verticality, where you have to be able to sense, understand, and maneuver in three dimensions. Being able to capitalize on their robot technologies to address that verticality really stratified the teams, and showed how critical those capabilities are.

We saw teams that brought a lot of those capabilities do very well, and teams that brought baseline capabilities do what they could on the single floor that they were able to operate on. And so I think we got the Goldilocks solution for Urban Circuit that combined both difficulty and ambition.

Photos: Evan Ackerman/IEEE Spectrum

Two SubT Teams embedded networking equipment in balls that they could throw onto the course.

One of the things that I found interesting was that two teams independently came up with throwable network nodes. What was DARPA’s reaction to this? Is any solution a good solution, or was it more like the teams were trying to game the system?

You mean, do we want teams to game the rules in any way so as to get a competitive advantage? I don't think that's what the teams were doing. I think they were operating not only within the bounds of the rules, which permitted such a thing as throwable sensors where you could stand at the line and see how far you could chuck these things—not only was that acceptable by the rules, but anticipated. Behind the scenes, we tried to do exactly what these teams are doing and think through different approaches, so we explicitly didn't forbid such things in our rules because we thought it's important to have as wide an aperture as possible.

With these comms nodes specifically, I think they’re pretty clever. They were in some cases hacked together with a variety of different sports paraphernalia to see what would provide the best cushioning. You know, a lot of that happens in the field, and what it captured was that sometimes you just need to be up at two in the morning and thinking about things in a slightly different way, and that's when some nuggets of innovation can arise, and we see this all the time with operators in the field as well. They might only have duct tape or Styrofoam or whatever the case may be and that's when they come up with different ways to solve these problems. I think from DARPA’s perspective, and certainly from my perspective, wherever innovation can strike, we want to try to encourage and inspire those opportunities. I thought it was great, and it’s all part of the challenge.

Is there anything you can tell us about what your original plan had been for the Cave Circuit?

I can say that we’ve had the opportunity to go through a number of these caves scattered all throughout the country, and engage with caving communities—cavers clubs, speleologists that conduct research, and then of course the cave rescue community. The single biggest takeaway is that every cave (and there are tens of thousands of them in the US alone) has its own personality, and a lot of that personality is quite hidden from humans, because we can’t explore or access all of the cave. This led us to a number of different caves that were intriguing from a DARPA perspective but also inspirational for our Cave Circuit Virtual Competition.

How do you feel like the tuning was for the Virtual Cave Circuit?

The Virtual Competition, as you well know, was exciting in the sense that we could basically combine eight worlds into one competition, whereas the Systems Track competition really didn’t give us that opportunity. Even if we had been able to hold the Cave Circuit Systems Competition in person, it would have been at one site, and it would have been challenging to represent the level of diversity that we could with the Virtual Competition. So I think from that perspective, it’s clearly an advantage in terms of calibration—diversity gets you the ability to aggregate results to capture those that excel across all worlds as well as those that do well in one world or some worlds and not the others. I think the calibration was great in the sense that we were able to see the gamut of performance. Those that did well did quite well, and those that have room to grow showed where those opportunities are for them as well.

We had to find ways to capture that diversity and that representativeness, and I think one of the fun ways we did that was with the different cave world tiles that we were able to combine in a variety of different ways. We also made use of a real world data set that we were able to take from a laser scan. Across the board, we had a really great chance to illustrate why virtual testing and simulation still plays such a dominant role in robotics technology development, and why I think it will continue to play an increasing role for developing these types of autonomy solutions.

Photo: Team CSIRO Data61

Considering the diversity of caves, how can Systems Track teams learn from testing in whatever cave is local to them and effectively apply that to whatever cave environment is part of the final?

I think that hits the nail on the head for what we as technologists are trying to discover—what are the transferable generalizable insights and how does that inform our technology development? As roboticists we want to optimize our systems to perform well at the tasks that they were designed to do, and oftentimes that means specialization because we get increased performance at the expense of being a generalist robot. I think in the case of SubT, we want to have our cake and eat it too—we want robots that perform well and reliably, but we want them to do so not just in one environment, which is how we tend to think about robot performance, but we want them to operate well in many environments, many of which have yet to be faced.

And I think that's kind of the nuance here: we want robot systems to be generalists for the sake of being able to handle the unknown, namely the real world, but still achieve a high level of performance, and perhaps they do that through their combined use of different technologies or advances in autonomy or perception approaches or novel mechanisms or mobility, but somehow they're still able, at least in aggregate, to achieve high performance.

We know these teams eagerly await any type of clue that DARPA can provide about the SubT environments. From the environment previews for Tunnel, Urban, and even Cave, the teams were pivoting and thinking a little bit differently. The takeaway, however, was that they didn't go to a clean-sheet design—their systems were flexible enough that they could incorporate some of those specialist trends while still maintaining the notion of a generalist framework.

Looking ahead to the SubT Final, what can you tell us about the Louisville Mega Cavern?

As always, I’ll keep you in suspense until we get you there, but I can say that from the beginning of the SubT Challenge we had always envisioned teams of robots that are able to address not only the uncertainty of what's right in front of them, but also the uncertainty of what comes next. So I think the teams will be advantaged by thinking through subdomain awareness, or domain awareness if you want to generalize it, whether that means tuning multi-purpose robots, or deploying different robots, or employing your team of robots differently. Knowing which subdomain you are in is likely to be helpful, because then you can take advantage of the unique lessons learned through all those previous experiences and capitalize on them.

As far as specifics, I think the Mega Cavern offers many of the features important to what it means to be underground, while giving DARPA a pretty blank canvas to realize our vision of the SubT Challenge.

The SubT Final will be different from the earlier circuits in that there’s just one 60-minute run, rather than two. This is going to make things a lot more stressful for teams who have experienced bad robot days—why do it this way?

The preliminary round has two 30-minute runs, and those two runs are very similar to how we have done it during the circuits: a single run per configuration per course. Teams will have the opportunity to show that their systems can face the obstacles in the final course, and it's the sum of those scores, much like during the circuits, that helps mitigate some of the concerns you mentioned of having one robot somehow ruin their chances at a prize.

The prize round does give DARPA as well as the community a chance to focus on the top six teams from the preliminary round, and allows us to understand how they came to be at the top of the pack while emphasizing their technological contributions. The prize round will be one and done, but all of these teams we anticipate will be putting their best robot forward and will show the world why they deserve to win the SubT Challenge.

We’ve always thought that when called upon these robots need to operate in really challenging environments, and in the context of real world operations, there is no second chance. I don't think it's actually that much of a departure from our interests and insistence on bringing reliable technologies to the field, and those teams that might have something break here and there, that's all part of the challenge, of being resilient. Many teams struggled with robots that were debilitated on the course, and they still found ways to succeed and overcome that in the field, so maybe the rules emphasize that desire for showing up and working on game day which is consistent, I think, with how we've always envisioned it. This isn’t to say that these systems have to work perfectly, they just have to work in a way such that the team is resilient enough to tackle anything that they face.

It’s not too late for teams to enter for both the Virtual Track and the Systems Track to compete in the SubT Final, right?

Yes, that's absolutely right. Qualifications are still open, we are eager to welcome new teams to join in along with our existing competitors. I think any dark horse competitors coming into the Finals may be able to bring something that we haven't seen before, and that would be really exciting. I think it'll really make for an incredibly vibrant and illuminating final event.

The final event qualification deadline for the Systems Competition is April 21, and the qualification deadline for the Virtual Competition is June 29. More details here.
