Tag Archives: robots

#440369 Legged Robots Learn to Hike Harsh ...

Robots, like humans, generally use two different sensory modalities when interacting with the world. There’s exteroceptive perception (or exteroception), which comes from external sensing systems like lidar, cameras, and eyeballs. And then there’s proprioceptive perception (or proprioception), which is internal sensing, involving things like touch and force sensing. Generally, we humans use both of these sensing modalities at once to move around, with exteroception helping us plan ahead and proprioception kicking in when things get tricky. You use proprioception in the dark, for example, where movement is still totally possible—you just do it slowly and carefully, relying on balance and feeling your way around.
For legged robots, exteroception is what enables them to do all the cool stuff—with really good external sensing and the time (and compute) to do some awesome motion planning, robots can move dynamically and fast. Legged robots are much less comfortable in the dark, however, or really under any circumstances where the exteroception they need either doesn’t come through (because a sensor is not functional for whatever reason) or just totally sucks because of robot-unfriendly things like reflective surfaces or thick undergrowth or whatever. This is a problem because the real world is frustratingly full of robot-unfriendly things.
The research that the Robotic Systems Lab at ETH Zürich has published in Science Robotics showcases a control system that allows a legged robot to evaluate how reliable the exteroceptive information that it’s getting is. When the data are good, the robot plans ahead and moves quickly. But when the data set seems to be incomplete, noisy, or misleading, the controller gracefully degrades to proprioceptive locomotion instead. This means that the robot keeps moving—maybe more slowly and carefully, but it keeps moving—and eventually, it’ll get to the point where it can rely on exteroceptive sensing again. It’s a technique that humans and animals use, and now robots can use it too, combining speed and efficiency with safety and reliability to handle almost any kind of challenging terrain.
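The paper implements this behavior with a learned neural-network controller, but the underlying idea can be illustrated with a toy sketch: estimate how well the elevation map agrees with what the feet actually feel, and down-weight the exteroceptive input accordingly. Everything below (the function names, the exponential confidence model, the feature dimensions) is a hypothetical illustration, not the authors' implementation.

```python
import numpy as np

def extero_confidence(map_heights, contact_heights, scale=0.03):
    """Toy reliability estimate: compare the ground height predicted by the
    elevation map with the height at which each foot actually made contact.
    Large, consistent mismatch means low confidence in exteroception."""
    error = np.abs(np.asarray(map_heights) - np.asarray(contact_heights))
    return float(np.exp(-error.mean() / scale))  # 1.0 = map trusted, ~0.0 = map ignored

def blended_features(extero_feat, proprio_feat, confidence):
    """Gate the exteroceptive features by confidence before feeding the policy.
    At low confidence the controller effectively walks 'blind', relying on
    proprioception alone (a slower, more conservative gait)."""
    return np.concatenate([confidence * extero_feat, proprio_feat])

# Example: the map claims flat ground at 0.0 m, but the feet sink about 10 cm
# into snow, so the controller should stop trusting the map.
conf = extero_confidence(map_heights=[0.0, 0.0, 0.0, 0.0],
                         contact_heights=[-0.10, -0.09, -0.11, -0.10])
obs = blended_features(np.random.randn(52), np.random.randn(33), conf)
print(f"exteroception confidence: {conf:.3f}, observation dim: {obs.size}")
```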
We got a compelling preview of this technique during the DARPA SubT Final Event last fall, when it was being used by Team Cerberus’s ANYmal legged robots to help them achieve victory. I’m honestly not sure whether the SubT final course was more or less challenging than some mountain climbing in Switzerland, but the performance in the video below is quite impressive, especially since ANYmal managed to complete the uphill portion of the hike 4 minutes faster than the suggested time for an average human.

Video: “Learning robust perceptive locomotion for quadrupedal robots in the wild” (YouTube)

Those clips of ANYmal walking through dense vegetation and deep snow do a great job of illustrating how well the system functions. While the exteroceptive data show obstacles all over the place and wildly inaccurate ground height, the robot knows where its feet are and relies on that proprioceptive data to keep walking forward safely, without falling. The video above includes other examples of common sensor-data problems that ANYmal is able to power through.

Other legged robots do use proprioception for reliable locomotion, but what’s unique here is this seamless combination of speed and robustness, with the controller moving between exteroception and proprioception based on how confident it is about what it's seeing. And ANYmal’s performance on this hike, as well as during the SubT Final, is ample evidence of how well this approach works.
For more details, we spoke with Takahiro Miki, a Ph.D. student in the Robotic Systems Lab at ETH Zürich and first author on the paper.
The paper’s intro says, “Until now, legged robots could not match the performance of animals in traversing challenging real-world terrain.” Suggesting that legged robots can now “match the performance of animals” seems very optimistic. What makes you comfortable with that statement?
Takahiro Miki: Achieving a level of mobility similar to animals is probably the goal for many of us researchers in this area. However, robots are still far behind nature and this paper is only a tiny step in this direction.
Your controller enables robust traversal of “harsh natural terrain.” What does “harsh” mean, and can you describe the kind of terrain that would be in the next level of difficulty beyond “harsh”?
Miki: We aim to send robots to places that are too dangerous or difficult for humans to reach. In this work, by “harsh” we mean places that are hard for us, not only for robots: for example, steep hiking trails or snow-covered trails that are tricky to traverse. With our approach, the robot traversed steep and wet rocky surfaces, dense vegetation, and rough terrain in underground tunnels and natural caves with loose gravel, all at human walking speed.
We think the next level would be somewhere which requires precise motion with careful planning such as stepping-stones, or some obstacles that require more dynamic motion, such as jumping over a gap.
How much do you think having a human choose the path during the hike helped the robot be successful?
Miki: The intuition of the human operator choosing a feasible path for the robot certainly helped the robot’s success. Even though the robot is robust, it cannot walk over obstacles which are physically impossible, e.g., obstacles bigger than the robot or cliffs. In other scenarios such as during the DARPA SubT Challenge however, a high-level exploration and path planning algorithm guides the robot. This planner is aware of the capabilities of the locomotion controller and uses geometric cues to guide the robot safely. Achieving this for an autonomous hike in a mountainous environment, where a more semantic environment understanding is necessary, is our future work.
What impressed you the most in terms of what the robot was able to handle?
Miki: The snow stairs were the very first experiment we conducted outdoors with the current controller, and I was surprised that the robot could handle the slippery snowy stairs. Also during the hike, the terrain was quite steep and challenging. When I first checked the terrain, I thought it might be too difficult for the robot, but it could just handle all of them. The open stairs were also challenging due to the difficulty of mapping. Because the lidar scan passes through the steps, the robot couldn’t see the stairs properly. But the robot was robust enough to traverse them.
At what point does the robot fall back to proprioceptive locomotion? How does it know if the data its sensors are getting are false or misleading? And how much does proprioceptive locomotion impact performance or capabilities?
Miki: We think the robot detects whether the exteroception matches the proprioception through its foot contacts or foot positions. If the map is correct, the feet make contact where the map suggests. Then the controller recognizes that the exteroception is correct and makes use of it. Once it experiences foot contact that doesn’t match the ground on the map, or the feet go below the map, it recognizes that exteroception is unreliable and relies more on proprioception. We showed this in this supplementary video experiment:

Video: “Supplementary Robustness Evaluation” (YouTube)

However, since we trained the neural network in an end-to-end manner, where the student policy just tries to follow the teacher’s action by trying to capture the necessary information in its belief state, we can only guess how it knows. In the initial approach, we were just directly inputting exteroception into the control policy. In this setup, the robot could walk over obstacles and stairs in the lab environment, but once we went outside, it failed due to mapping failures. Therefore, combining with proprioception was critical to achieve robustness.
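For context, the paper trains this controller with a privileged teacher-student setup: a teacher policy sees clean, ground-truth terrain in simulation, and a student policy that only receives proprioception plus noisy exteroception learns to imitate it through a recurrent belief state. The sketch below shows the distillation idea only; the network sizes, feature dimensions, and random stand-in data are assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class StudentPolicy(nn.Module):
    """Recurrent belief encoder that fuses proprioception with noisy
    exteroception, followed by a small action head (a toy stand-in
    for the paper's student policy)."""
    def __init__(self, proprio_dim=33, extero_dim=52, belief_dim=64, act_dim=12):
        super().__init__()
        self.gru = nn.GRU(proprio_dim + extero_dim, belief_dim, batch_first=True)
        self.action_head = nn.Linear(belief_dim, act_dim)

    def forward(self, proprio, noisy_extero, hidden=None):
        belief, hidden = self.gru(torch.cat([proprio, noisy_extero], dim=-1), hidden)
        return self.action_head(belief), belief, hidden

def distillation_step(student, teacher_actions, proprio, noisy_extero, optimizer):
    """One imitation step: the student, seeing only unreliable sensing,
    tries to reproduce the actions of a teacher that saw perfect terrain."""
    pred_actions, _, _ = student(proprio, noisy_extero)
    loss = nn.functional.mse_loss(pred_actions, teacher_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage with random stand-in data (8 rollouts of 100 steps each).
student = StudentPolicy()
opt = torch.optim.Adam(student.parameters(), lr=3e-4)
proprio = torch.randn(8, 100, 33)
noisy_extero = torch.randn(8, 100, 52)
teacher_actions = torch.randn(8, 100, 12)  # would come from the privileged teacher
print("imitation loss:", distillation_step(student, teacher_actions, proprio, noisy_extero, opt))
```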
How much are you constrained by the physical performance of the robot itself? If the robot were stronger or faster, would you be able to take advantage of that?
Miki: When we use reinforcement learning, the policy usually tries to use as much torque and speed as it is allowed to use. Therefore if the robot was stronger or faster, we think we could increase robustness further and overcome more challenging obstacles with faster speed.
What remains challenging, and what are you working on next?
Miki: So far, we have steered the robot manually for most of the experiments (except the DARPA SubT Challenge). Adding more levels of autonomy is the next goal. As mentioned above, we want the robot to complete a difficult hike without human operators. Furthermore, there is still much room for improvement in the locomotion capability of the robot. For “harsher” terrains, we want the robot to perceive the world in 3D and manifest richer behaviors, such as jumping over stepping-stones or crawling under overhanging obstacles, which is not possible with the current 2.5D elevation map. Continue reading

Posted in Human Robots

#440364 Sensor-Packed ‘Electronic Skin’ ...

Being able to beam yourself into a robotic body has all kinds of applications, from the practical to the fanciful. Existing interfaces that could make this possible tend to be bulky, but a wireless electronic skin made by Chinese researchers promises far more natural control.

While intelligent robots may one day be able to match humans’ dexterity and adaptability, they still struggle to carry out many of the tasks we’d like them to be able to do. In the meantime, many believe that creating ways for humans to teleoperate robotic bodies could be a useful halfway house.

The approach could be particularly useful for scenarios that are hazardous for humans yet still beyond the capabilities of autonomous robots: for instance, bomb disposal, radioactive waste cleanup, or, more topically, letting medical professionals treat highly infectious patients remotely.

While remote-controlled robots already exist, being able to control them through natural body movements could make the experience far more intuitive. It could also be crucial for developing practical robotic exoskeletons and better prosthetics, and even make it possible to create immersive entertainment experiences where users take control of a robotic body.

While solutions exist for translating human movement into signals for robots, they typically involve cumbersome equipment that the user has to wear or complicated computer-vision systems.

Now, a team of researchers from China has created a flexible electronic skin packed with sensors, wireless transmitters, and tiny vibrating magnets that can provide haptic feedback to the user. By attaching these patches to various parts of the body like the hand, forearm, or knee, the system can record the user’s movements and transmit them to robotic devices.

The research, described in a paper published in Science Advances, builds on rapid advances in flexible electronics in recent years, but its major contribution is packing many components into a compact, powerful, and user-friendly package.

The system’s sensors rely on piezoresistive materials, whose electrical resistance changes when subjected to mechanical stress. This allows them to act as bending sensors, so when the patches are attached to a user’s joint the change in resistance corresponds to the angle at which it is bent.

These sensors are connected to a central microcontroller via wiggly copper wires that wave up and down in a snake-like fashion. This zigzag pattern allows the wires to easily expand when stretched or bent, preventing them from breaking under stress. The voltage signals from the sensors are then processed and transmitted via Bluetooth, either directly to a nearby robotic device or a computer, which can then pass them on via a local network or the internet.
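To make the signal chain concrete, here is a minimal sketch that reads one piezoresistive bending sensor through a resistive voltage divider, converts the reading to a joint angle, and packs it into a small message for wireless transmission. The divider layout, calibration constants, and packet format are assumptions for illustration; the article and paper do not specify the hardware at this level.

```python
import struct

# Hypothetical calibration for one piezoresistive bending sensor wired as the
# high-side resistor of a voltage divider. None of these constants come from
# the paper; they just make the arithmetic concrete.
V_SUPPLY = 3.3           # volts across the divider
R_FIXED = 10_000.0       # ohms, fixed low-side divider resistor
R_FLAT = 15_000.0        # sensor resistance when the joint is straight
OHMS_PER_DEGREE = 150.0  # assumed linear resistance change with bend angle

def sensor_resistance(v_out):
    """Recover the sensor's resistance from the divider's output voltage."""
    return R_FIXED * (V_SUPPLY - v_out) / v_out

def bend_angle(v_out):
    """Convert a voltage reading into an estimated joint angle in degrees."""
    return (sensor_resistance(v_out) - R_FLAT) / OHMS_PER_DEGREE

def make_packet(joint_id, v_out):
    """Pack one reading as a compact binary message for Bluetooth transmission."""
    return struct.pack("<Bf", joint_id, bend_angle(v_out))

# Example: a 1.1 V reading corresponds to roughly a 33-degree bend under
# these made-up calibration constants.
print(bend_angle(1.1), make_packet(joint_id=3, v_out=1.1))
```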

Crucially, the researchers have also built in a feedback system. The same piezoresistive sensors can be attached to parts of the robotic device, for instance on the fingertips where they can act as pressure sensors.

Signals from these sensors are transmitted to the electronic skin, where they are used to control tiny magnets that vibrate at different frequencies depending on how much pressure was applied. The researchers showed that humans controlling a robotic hand could use the feedback to distinguish between cubes of rubber with varying levels of hardness.
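A minimal sketch of that feedback mapping, assuming a simple linear relationship between measured fingertip pressure and actuator vibration frequency (the frequency range and pressure scale here are illustrative, not values reported in the paper):

```python
def haptic_frequency(pressure_kpa, f_min=50.0, f_max=300.0, p_max=60.0):
    """Toy mapping from fingertip pressure to vibration frequency for the
    skin's magnetic actuators: harder contact means faster vibration.
    The frequency range and pressure scale are assumptions."""
    p = min(max(pressure_kpa, 0.0), p_max)        # clamp to the sensing range
    return f_min + (f_max - f_min) * (p / p_max)  # linear interpolation

# A soft rubber cube and a hard one should feel clearly different:
for label, pressure in [("soft cube", 8.0), ("hard cube", 45.0)]:
    print(f"{label}: {haptic_frequency(pressure):.0f} Hz")
```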

Importantly, the response time for the feedback signals was as low as 4 microseconds while operating directly over Bluetooth and just 350 microseconds operating over a local Wi-Fi network, which is below the 550 microseconds it takes for humans to react to tactile stimuli. Transmitting the signals over the internet led to considerably longer response times, though—between 30 and 50 milliseconds.

Nonetheless, the researchers showed that by combining different configurations of patches with visual feedback from VR goggles, human users could control a remote-controlled car with their fingers, use a robotic arm to carry out a COVID swab test, and even get a basic humanoid robot to walk, squat, clean a room, and help nurse a patient.

The patches are powered by an onboard lithium-ion battery that provides enough juice for all of its haptic feedback devices to operate continuously at full power for more than an hour. In standby mode it can last for nearly two weeks, and the device’s copper wires can even act as an antenna to wirelessly recharge the battery.

Inevitably, the system will still require considerable finessing before it can be used in real-world settings. But its impressive capabilities and neat design suggest that unobtrusive flexible sensors that could let us remotely control robots might not be too far away.

Image Credit: geralt Continue reading

Posted in Human Robots

#440348 Why Multi-Functional Robots Will Take ...

This is a sponsored article brought to you by Avidbots.
The days of having single-purpose robots for specific tasks are behind us. A robot must be multi-functional to solve today’s challenges, be cost-effective, and increase the productivity of an organization.
Yet, most indoor autonomous mobile robots (AMRs) today are specialized, often addressing a single application, service, or market. These robots are highly effective at completing the task at hand; however, they are limited to addressing a single use case. While this approach manages development costs and complexity for the developer, it may not be in the best interest of the customer.
To set the stage for increased growth, the commercial AMR market must evolve and challenge the status quo. A focus on integrating multiple applications and processes will increase overall productivity and efficiency of AMRs.

The market for autonomous mobile robots is expected to grow massively, and at Avidbots we see a unique opportunity to offer multi-application, highly effective robotic solutions.

Today, there are many application-specific AMRs solving problems for businesses. Common applications include indoor parcel delivery, security, inventory management, cleaning, and disinfection, to name a few.
The market for these types of AMRs is expected to grow into the tens of billions of dollars by 2025, as projected by Verified Market Research. This is a massive opportunity for growth for the AMR industry. It is also interesting to note that the sensor sets and autonomous navigation capabilities of the various single-application indoor AMRs today are similar.
Hence, there is an opportunity to combine useful functionalities into a single multi-application robot, and yet the industry as a whole has been slow to make such advancement.
Today's Robots Focus on Single Tasks

There’s never been a better time for the AMR industry to take strategic steps given the changes we’ve had to embrace as a result of the COVID-19 pandemic. In fact, there have been many robots brought to market recently that look to address disinfection, the majority of which have been single-purpose, including UVC robots.
With heightened standards of cleanliness in mind, let’s consider the potential of extending a cleaning robot from its single-use to performing both floor cleaning and high-touch surface disinfection.
In September 2021, Avidbots launched the Disinfection Add-On, expanding the functionality of the company’s fully autonomous floor-scrubbing robot, Neo. By simply adding a piece of hardware and pushing a software update, Neo now serves multiple purposes.

The Future: Multi-Purpose Robots

Multi-application robots like this will not only provide more value through added convenience for end customers; compared with single-application robots, much of their value also comes from the economic impact.

The economics of multi-application robots are simple. Combining two applications on one robot can deliver significant cost savings versus running two full single-use robots. For example, the price to rent a disinfection-only robot or a cleaning-only robot is in the neighborhood of US $2,000–3,000 per month per robot.
But Neo with its Disinfection Add-On extends beyond its primary function of floor cleaning to disinfect for a few hundred dollars per month. Disinfection is available at a cost that is around one-tenth of the price of a single-purpose disinfection robot or manual disinfection.

These savings can only be realized since the main cleaning function already pays for the AMR itself and the disinfection is merely a hardware and software extension.
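As a quick back-of-the-envelope check on that comparison (the rental prices are the figures quoted above; the add-on price is an assumption standing in for “a few hundred dollars”):

```python
# Illustrative monthly cost comparison using the figures quoted in the article.
single_purpose_cleaning = 2500      # USD/month, midpoint of the $2,000-3,000 range
single_purpose_disinfection = 2500  # USD/month, assuming the same rental range
disinfection_add_on = 300           # USD/month, "a few hundred dollars"

two_robots = single_purpose_cleaning + single_purpose_disinfection
one_multi_purpose = single_purpose_cleaning + disinfection_add_on

print(f"two single-purpose robots: ${two_robots}/month")
print(f"one multi-purpose robot:   ${one_multi_purpose}/month")
print(f"monthly savings:           ${two_robots - one_multi_purpose}")
```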
There are other OEMs following this trend; Brain Corp. combines cleaning with shelf-scanning, leveraging existing autonomous floor-scrubbing robots as the platform. Similarly, Badger combines hazard analysis (spill detection, etc.) with a shelf-scanning robot as the platform.
Meet Neo 2, a Fully Autonomous Robotic Floor Cleaner
This video presents an overview of Neo 2, Avidbots' advanced robotic platform optimized for autonomous cleaning and sanitization. Neo is equipped with the Avidbots AI Platform featuring 10 sensors, resulting in 360° visibility and intelligent obstacle avoidance. Combined with integrated diagnostics and Avidbots Remote Assistance, Neo offers advanced autonomy, navigation, and safety capabilities.
Video: Avidbots

There are a few parallels between the current state of robotics today and the early computer industry of the 1970s. In the early '70s, when mainframes still dominated computer system sales, several manufacturers released low-cost desktop computers that were designed to support multiple applications, peripherals, and programming languages.
The low cost of desktop computers, the key “killer-apps,” and the large number of potential applications resulted in large growth and the proliferation of desktop computers worldwide, which eventually overtook mainframe sales in 1984.
As sales of AMRs increase and the cost of processing systems continues to drop, OEMs mass-producing AMRs will likely be capable of delivering them at a significantly lower price in the coming years. Computer systems like the NVIDIA Xavier NX, which are designed specifically for leading-edge robotic perception applications, paint a promising picture of the evolution of computer systems for indoor AMRs.
We look forward to a day in the near future when indoor AMRs will be sold at much less than US $10,000. Lowering the cost of AMRs is certainly a key to enabling larger and faster growth in the industry.
About Avidbots
Avidbots is a robotics company with a vision to bring robotic solutions into everyday life to increase organizational productivity and to do that better than any other company in the world.
Our groundbreaking product, the Neo autonomous floor-scrubbing robot, is deployed around the world and trusted by leading facilities and building service companies. Headquartered in Kitchener, ON, Canada, Avidbots offers comprehensive service and support to customers on five continents.
Learn more about Avidbots →

There is the open question of the “killer-app” in AMRs for commercial spaces. What application can best serve as a platform for multi-application robots?
Cleaning is certainly a candidate given that it's a service needed in most indoor spaces and saves two to four hours per night of manual labor. However, there are other industries such as the hospitality and food-service industry where parcel delivery has seen large growth and success since it saves many hours daily. In the examples above, customers will still likely benefit from having multiple potential applications in their AMRs.
While only time will tell how the industry will evolve, it's clear that delivering several applications with a single robot and at a much lower cost than multiple robots (or manual counterparts) has the potential to make AMRs more attractive. We can take the industry to new heights by continuing to push the boundaries, including developing multi-application robots that can be used across industries and allow organizations to focus on revenue-generating activities.
Our industry-leading multi-application solution is growing, and so is our team of Avidbotters, including robotics engineers. If you’re interested in learning more about Avidbots or exploring career opportunities, visit Avidbots. Continue reading

Posted in Human Robots

#440335 Creepy meets cool in humanoid robots at ...

A lifelike, child-size doll writhed and cried before slightly shocked onlookers snapping smartphone pictures Wednesday at the CES tech show—where the line between cool and slightly disturbing robots can be thin. Continue reading

Posted in Human Robots

#440276 Algorithm Uses Evolution To Design ...

Imagine you’re running a race. To complete it, your body needs to be strong, and your brain needs to keep track of the route, control your pace, and prevent you from tripping.
The same is true for robots. To accomplish tasks, they need both a well-designed body and a “brain,” or controller. Engineers can use various simulations to improve a robot’s control and make it smarter. But there are few ways to optimize a robot’s design at the same time.
Unless the designer is an algorithm.
Thanks to advances in computing, it’s finally possible to write software programs that optimize both design and control simultaneously, an approach known as co-design. Though there are established platforms to optimize control or design, most co-design researchers have had to devise their own testing platforms, and these are usually very computationally intensive and time-consuming.

To help solve this problem, Jagdeep Bhatia, an undergraduate researcher at MIT, and other researchers created a 2D co-design simulation system for soft robots called Evolution Gym. They presented the system, which is detailed in a new paper, at this year’s Conference on Neural Information Processing Systems.
“Basically, we tried to make like a really simple and fast simulator,” said Bhatia, the lead author of the paper. “And on top of that, we built like a bunch of tasks for these robots to do.”
In Evolution Gym, 2D soft robots are made up of colored cells, or voxels. Different colors represent different types of simple components: either soft or rigid material, or horizontal or vertical actuators. The result is robots that are patchworks of colored squares moving through video game-like environments. Because the environments are 2D and the program is simply designed, it doesn’t need much computational power.
As the name suggests, the researchers structured the system to mimic the biological process of evolution. Rather than generate individual robots, it generates populations of robots with slightly different designs. The system uses bi-level optimization, with an outer loop and an inner loop. The outer loop is the design optimization: The system generates a number of different designs for a given task, such as walking, jumping, climbing, or catching things. The inner loop is the control optimization.
“It'll take each of those designs, it'll optimize the controller for it in Evolution Gym on a particular task,” said Bhatia. “And then it'll return a score back for each of those designs back to the design optimization algorithm and say, this is how well the robot performed with the optimal controller.”
In this way, the system generates multiple generations of robots based on a task-specific “reward” score, keeping elements that maintain and increase this reward. The researchers developed more than 30 tasks for the robots to attempt to perform, rated easy, medium, or hard.
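Evolution Gym itself is open source, but without tying this to its actual API, the outer/inner loop described above can be sketched in a few lines. All of the helper functions and numbers below are illustrative stand-ins, not the researchers' code; in the real system the inner loop trains a controller with reinforcement learning rather than returning a placeholder score.

```python
import random

# Hypothetical stand-ins: a "design" is a 5x5 grid of voxel types
# (0 = empty, 1 = rigid, 2 = soft, 3 = horizontal actuator, 4 = vertical actuator).
def random_design(size=5):
    return [[random.randint(0, 4) for _ in range(size)] for _ in range(size)]

def mutate(design, rate=0.1):
    return [[random.randint(0, 4) if random.random() < rate else v for v in row]
            for row in design]

def optimize_controller(design, task):
    """Stand-in for the inner loop: in Evolution Gym this would train a
    controller for the given design on the task and return its reward.
    Here it returns a placeholder score so the outer loop runs end to end."""
    return random.random() + 0.1 * sum(v in (3, 4) for row in design for v in row)

def co_design(task="walking", generations=5, population_size=16, survival_fraction=0.5):
    """Toy bi-level co-design loop: the outer loop evolves designs, the inner
    loop scores each design with its best controller, and the fittest designs
    survive and are mutated to refill the population."""
    population = [random_design() for _ in range(population_size)]
    for gen in range(generations):
        scored = sorted(((optimize_controller(d, task), d) for d in population),
                        key=lambda pair: pair[0], reverse=True)
        keep = max(1, int(population_size * survival_fraction))
        survivors = [d for _, d in scored[:keep]]
        children = [mutate(random.choice(survivors))
                    for _ in range(population_size - keep)]
        population = survivors + children
        print(f"generation {gen}: best reward = {scored[0][0]:.2f}")
    return scored[0][1]  # best design found

best_design = co_design()
```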
“If your task is walking, in this case, you would like the robot to move as fast as possible within the amount of time,” said Wojciech Matusik, a professor of electrical engineering and computer science at MIT and senior author of the paper.
The researchers found that the system was highly effective for many of the tasks, and that the algorithm-designed robots worked better than human-designed ones. The system came up with designs that humans never could, generating complex patchworks of materials and actuators that were highly effective. The system also independently came up with some animal-like designs, though it had no previous knowledge of animals or biology.
On the other hand, no robot design could effectively complete the hardest tasks, such as lifting and catching items. There could be a number of reasons for this, including that the populations the program selected to evolve were not diverse enough, said Wolfgang Fink, an associate professor of engineering at the University of Arizona who was not involved in the project.
“Diversity is the key,” he said. “If you don't have the diversity, then you get rapidly nice successes, but you level off very likely sub-optimally.” In the MIT researchers’ most effective algorithm, the percentage of robots that “survived” each generation ranged from 60 percent down to zero, decreasing gradually over time.
Evolution Gym’s simplistic, 2D designs also do not lend themselves well to being adapted into real-life robots. Nevertheless, Bhatia hopes that Evolution Gym can be a resource for researchers and can enable them to develop new and exciting algorithms for co-design. The program is open-source and free to use.
“I think you can still gain a lot of valuable insights from using Evolution Gym and proposing new algorithms and creating new algorithms within it,” he said. Continue reading

Posted in Human Robots