#439187 Video Friday: Good Robots for Bad Knees
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
ICRA 2021 – May 30-June 5, 2021 – [Online Event]
RoboCup 2021 – June 22-28, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27-October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA
Let us know if you have suggestions for next week, and enjoy today's videos.
Ascend is a smart knee orthosis designed to improve mobility and relieve knee pain. The customized, lightweight, and comfortable design reduces burden on the knee and intuitively adjusts support as needed. Ascend provides a safe and non-surgical solution for patients with osteoarthritis, knee instability, and/or weak quadriceps.
Each one of these is custom-built, and you can pre-order one now.
[ Roam Robotics ]
Ingenuity’s third flight achieved a longer flight time and more sideways movement than previously attempted. During the 80-second flight, the helicopter climbed to 16 feet (5 meters) and flew 164 feet (50 meters) downrange and back, for a total distance of 328 feet (100 meters). The third flight test took place at “Wright Brothers Field” in Jezero Crater, Mars, on April 25, 2021.
[ NASA ]
This right here, the future of remote work.
The robot will run you about $3,000 USD.
[ VStone ] via [ Robotstart ]
Texas-based aerospace robotics company, Wilder Systems, enhanced their existing automation capabilities to aid in the fight against COVID-19. Their recent development of a robotic testing system is both increasing capacity for COVID-19 testing and delivering faster results to individuals. The system conducts saliva-based PCR tests, which are considered the gold standard for COVID testing. Based on a protocol developed by Yale and authorized by the FDA, the system does not need additional approvals. This flexible, modular system can run up to 2,000 test samples per day, and can be deployed anywhere standard electric power is available.
[ ARM Institute ]
Tests show that people do not like being nearly hit by drones.
But seriously, this research has resulted in some useful potential lessons for deploying drones in areas where they have a chance of interacting with humans.
[ Paper ]
The Ingenuity helicopter made history on April 19, 2021, with the first powered, controlled flight of an aircraft on another planet. How do engineers talk to a helicopter all the way out on Mars? We’ll hear about it from Nacer Chahat of NASA’s Jet Propulsion Laboratory, who worked on the helicopter’s antenna and telecommunication system.
[ NASA ]
A team of scientists from the Max Planck Institute for Intelligent Systems has developed a system for fabricating miniature robots building block by building block, so that each robot functions exactly as required.
[ Max Planck Institute ]
Well this was inevitable, wasn't it?
The pilot regained control and the drone was fine, though.
[ PetaPixel ]
NASA’s Ingenuity Mars Helicopter takes off and lands in this video captured on April 25, 2021, by Mastcam-Z, an imager aboard NASA’s Perseverance Mars rover. As expected, the helicopter flew out of its field of vision while completing a flight plan that took it 164 feet (50 meters) downrange of the landing spot. Keep watching: the helicopter returns to stick the landing. Top speed for today's flight was about 2 meters per second, or about 4.5 miles per hour.
[ NASA ]
U.S. Naval Research Laboratory engineers recently demonstrated Hybrid Tiger, an electric unmanned aerial vehicle (UAV) with multi-day endurance flight capability, at Aberdeen Proving Ground, Maryland.
[ NRL ]
This week's CMU RI Seminar is by Avik De from Ghost Robotics, on “Design and control of insect-scale bees and dog-scale quadrupeds.”
Did you watch the Q&A? If not, you should watch the Q&A.
[ CMU ]
Autonomous quadrotors will soon play a major role in search-and-rescue, delivery, and inspection missions, where a fast response is crucial. However, their speed and maneuverability are still far from those of birds and human pilots. What does it take to make drones navigate as well as, or even better than, human pilots?
[ GRASP Lab ]
With the current pandemic accelerating the revolution of AI in healthcare, where is the industry heading in the next 5-10 years? What are the key challenges and most exciting opportunities? These questions will be answered by HAI’s Co-Director Fei-Fei Li and DeepLearning.AI founder Andrew Ng in this fireside chat virtual event.
[ Stanford HAI ]
Autonomous robots have the potential to serve as versatile caregivers that improve quality of life for millions of people with disabilities worldwide. Yet, physical robotic assistance presents several challenges, including risks associated with physical human-robot interaction, difficulty sensing the human body, and a lack of tools for benchmarking and training physically assistive robots. In this talk, I will present techniques towards addressing each of these core challenges in robotic caregiving.
[ GRASP Lab ]
What does it take to empower persons with disabilities, and why is educating ourselves on this topic the first step towards better inclusion? Why is developing assistive technologies for people with disabilities important in order to contribute to their integration in society? How do we implement the policies and actions required to enable everyone to live their lives fully? ETH Zurich and the Global Shapers Zurich Hub hosted an online dialogue on the topic “For a World without Barriers: Removing Obstacles in Daily Life for People with Disabilities.”
[ Cybathlon ]
#439053 Bipedal Robots Are Learning To Move With ...
Most humans are bipeds, but even the best of us are really only bipeds until things get tricky. While our legs may be our primary mobility system, there are lots of situations in which we leverage our arms as well, either passively to keep balance or actively when we put out a hand to steady ourselves on a nearby object. And despite how unstable bipedal robots tend to be, using anything besides legs for mobility has been a challenge in both software and hardware, a significant limitation in highly unstructured environments.
Roboticists from TUM in Germany (with support from the German Research Foundation) have recently given their humanoid robot LOLA some major upgrades to make this kind of multi-contact locomotion possible. While it’s still in the early stages, it’s already some of the most human-like bipedal locomotion we’ve seen.
It’s certainly possible for bipedal robots to walk over challenging terrain without using limbs for support, but I’m sure you can think of lots of times when using your arms to assist with your own bipedal mobility was a requirement. That’s not necessarily because your leg strength, coordination, or sense of balance is lacking. It’s just that sometimes you might find yourself walking across something that’s highly unstable, or in a situation where the consequences of a stumble are exceptionally high. And it may not even matter how much sensing you do or how careful you are with your footstep planning: there are limits to how much you can know about your environment in advance, and that can result in having a really bad time of it. This is why multi-contact locomotion, whether it’s planned in advance or not, is a useful skill for humans, and should be for robots, too.
As the video notes (and props for being explicit up front about it), this isn’t yet fully autonomous behavior, with foot positions and arm contact points set by hand in advance. But it’s not much of a stretch to see how everything could be done autonomously, since one of the really hard parts (using multiple contact points to dynamically balance a moving robot) is being done onboard and in real time.
Getting LOLA to be able to do this required a major overhaul in hardware as well as software. And Philipp Seiwald, who works with LOLA at TUM, was able to tell us more about it.
IEEE Spectrum: Can you summarize the changes to LOLA’s hardware that are required for multi-contact locomotion?
Philipp Seiwald: The original version of LOLA has been designed for fast biped walking. Although it had two arms, they were not meant to get into contact with the environment but rather to compensate for the dynamic effects of the feet during fast walking. Also, the torso had a relatively simple design that was fine for its original purpose; however, it was not conceived to withstand the high loads coming from the hands during multi-contact maneuvers. Thus, we redesigned the complete upper body of LOLA from scratch. Starting from the pelvis, the strength and stiffness of the torso have been increased. We used the finite element method to optimize critical parts to obtain maximum strength at minimum weight. Moreover, we added additional degrees of freedom to the arms to increase the hands' reachable workspace. The kinematic topology of the arms, i.e., the arrangement of joints and link lengths, has been obtained from an optimization that takes typical multi-contact scenarios into account.
Why is this an important problem for bipedal humanoid robots?
Maintaining balance during locomotion can be considered the primary goal of legged robots. Naturally, this task is more challenging for bipeds when compared to robots with four or even more legs. Although current high-end prototypes show impressive progress, humanoid robots still do not have the robustness and versatility they need for most real-world applications. With our research, we try to contribute to this field and help to push the limits further. Recently, we showed our latest work on walking over uneven terrain without multi-contact support. Although the robustness is already high, there still exist scenarios, such as walking on loose objects, where the robot's stabilization fails when using only foot contacts. The use of additional hand-environment support during this (comparatively) fast walking allows a further significant increase in robustness, i.e., the robot's capability to compensate for disturbances, modeling errors, or inaccurate sensor input. Besides stabilization on uneven terrain, multi-contact locomotion also enables more complex motions, e.g., stepping over a tall obstacle or toe-only contacts, as shown in our latest multi-contact video.
How can LOLA decide whether a surface is suitable for multi-contact locomotion?
LOLA’s visual perception system is currently developed by our project partners from the Chair for Computer Aided Medical Procedures & Augmented Reality at the TUM. This system relies on a novel semantic Simultaneous Localization and Mapping (SLAM) pipeline that can robustly extract the scene's semantic components (like floor, walls, and objects therein) by merging multiple observations from different viewpoints and by inferring therefrom the underlying scene graph. This provides a reliable estimate of which scene parts can be used to support the locomotion, based on the assumption that certain structural elements such as walls are fixed, while chairs, for example, are not.
Also, the team plans to develop a specific dataset with annotations further describing the attributes of the object (such as roughness of the surface or its softness) and that will be used to master multi-contact locomotion in even more complex scenes. As of today, the vision and navigation system is not finished yet; thus, in our latest video, we used pre-defined footholds and contact points for the hands. However, within our collaboration, we are working towards a fully integrated and autonomous system.
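The support-selection idea Seiwald describes can be sketched very roughly in a few lines: the semantic SLAM pipeline labels scene elements, and a rigidity assumption per class decides which ones may bear hand contact. The class names, the scene-graph format, and the `is_valid_support` helper below are all illustrative placeholders, not LOLA's actual software.

```python
# Semantic classes assumed fixed (load-bearing) vs. movable.
# These sets are assumptions for illustration, not the project's taxonomy.
FIXED_CLASSES = {"floor", "wall", "handrail", "counter"}
MOVABLE_CLASSES = {"chair", "box", "door"}

def is_valid_support(label: str) -> bool:
    """Return True if a scene element of this semantic class may be
    used as a hand-contact support during multi-contact locomotion."""
    return label in FIXED_CLASSES

def candidate_supports(scene_graph: dict) -> list:
    """Filter a {element_id: semantic_label} scene graph down to the
    elements the planner may place hand contacts on."""
    return [eid for eid, label in scene_graph.items()
            if is_valid_support(label)]

scene = {"w1": "wall", "c1": "chair", "f1": "floor", "b1": "box"}
print(candidate_supports(scene))  # ['w1', 'f1']
```

The annotated dataset the team plans would refine exactly this kind of lookup, replacing a binary fixed/movable split with per-object attributes like surface roughness or softness.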
Is LOLA capable of both proactive and reactive multi-contact locomotion?
The software framework of LOLA has a hierarchical structure. On the highest level, the vision system generates an environment model and estimates the 6D pose of the robot in the scene. The walking pattern generator then uses this information to plan a dynamically feasible future motion that will lead LOLA to a target position defined by the user. On a lower level, the stabilization module modifies this plan to compensate for model errors or any kind of disturbance and keep overall balance. So our approach currently focuses on proactive multi-contact locomotion. However, we also plan to work on more reactive behavior, such that additional hand support can be triggered by an unexpected disturbance instead of being planned in advance.
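The three-level hierarchy Seiwald lays out can be summarized as a simple control cycle: vision perceives and localizes, a pattern generator plans a feasible motion toward the target, and a stabilizer corrects that plan to keep balance. Every name, signature, and data structure below is an illustrative placeholder, not LOLA's actual interface.

```python
def control_cycle(perceive, generate_plan, stabilize, target):
    # Highest level: environment model plus the robot's 6D pose.
    env_model, pose = perceive()
    # Mid level: a dynamically feasible plan toward the user-set target,
    # including any pre-planned (proactive) hand-contact points.
    plan = generate_plan(env_model, pose, target)
    # Lowest level: modify the plan online to compensate for model
    # errors and disturbances, producing joint commands.
    return stabilize(plan, pose)

# Trivial stand-ins so the cycle can be exercised:
perceive = lambda: ({"terrain": "flat"}, (0.0, 0.0, 0.0, 0.0, 0.0, 0.0))
generate_plan = lambda env, pose, target: {"steps": [target]}
stabilize = lambda plan, pose: {"commands": plan["steps"]}

print(control_cycle(perceive, generate_plan, stabilize, (1.0, 0.0)))
# {'commands': [(1.0, 0.0)]}
```

The reactive extension the team mentions would amount to the stabilizer being allowed to inject new hand contacts itself when a disturbance exceeds what foot contacts alone can absorb, rather than only executing contacts the planner scheduled.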
What are some examples of unique capabilities that you are working towards with LOLA?
One of the main goals for the research with LOLA remains fast, autonomous, and robust locomotion on complex, uneven terrain. We aim to reach a walking speed similar to humans. Currently, LOLA can do multi-contact locomotion and cross uneven terrain at a speed of 1.8 km/h, which is comparatively fast for a biped robot but still slow for a human. On flat ground, LOLA's high-end hardware allows it to walk at a relatively high maximum speed of 3.38 km/h.
Fully autonomous multi-contact locomotion for a life-sized humanoid robot is a tough task. As algorithms get more complex, computation time increases, which often results in offline motion planning methods. For LOLA, we restrict ourselves to gaited multi-contact locomotion, which means that we try to preserve the core characteristics of bipedal gait and use the arms only for assistance. This allows us to use simplified models of the robot which lead to very efficient algorithms running in real-time and fully onboard.
A long-term scientific goal with LOLA is to understand essential components and control policies of human walking. LOLA's leg kinematics are relatively similar to those of the human body. Together with scientists from kinesiology, we try to identify similarities and differences between observed human walking and LOLA’s “engineered” walking gait. We hope this research leads, on the one hand, to new ideas for the control of bipeds, and on the other hand, shows via experiments on bipeds whether biomechanical models of human gait are correctly understood. For a comparison of control policies on uneven terrain, LOLA must be able to walk at comparable speeds, which also motivates our research on fast and robust walking.
While it makes sense why the researchers are using LOLA’s arms primarily to assist with a conventional biped gait, looking ahead a bit it’s interesting to think about how robots that we typically consider to be bipeds could potentially leverage their limbs for mobility in decidedly non-human ways.
We’re used to legged robots being one particular morphology, I guess because associating them with either humans or dogs or whatever is just a comfortable way to do it, but there’s no particular reason why a robot with four limbs has to choose between being a quadruped and being a biped with arms, or some hybrid between the two, depending on what its task is. The research being done with LOLA could be a step in that direction, and maybe a hand on the wall in that direction, too.
#438998 Foam Sword Fencing With a PR2 Is the ...
Most of what we cover in the Human Robot Interaction (HRI) space involves collaboration, because collaborative interactions tend to be productive, positive, and happy. Yay! But sometimes, collaboration is not what you want. Sometimes, you want competition.
Competition between humans and robots doesn’t have to be a bad thing, in the same way that competition between humans and humans doesn’t have to be a bad thing. There are all kinds of scenarios in which humans respond favorably to competition, and exercise is an obvious example.
Studies have shown that humans can perform significantly better when they’re exercising competitively as opposed to when they’re exercising individually. And while researchers have looked at whether robots can be effective exercise coaches (they can be), there hasn’t been a lot of exploration of physical robots actually competing directly with humans. Roboticists from the University of Washington decided to put adversarial exercise robots to the test, and they did it by giving a PR2 a giant foam sword. Awesome.
This exercise game matches a PR2 with a human in a zero-sum competitive fencing game with foam swords. Expecting the PR2 to actually be a competitive fencer isn’t realistic because, like, it’s a PR2. Instead, the objective of the game is for the human to keep their foam sword within a target area near the PR2 while also avoiding the PR2’s low-key sword-waving. A VR system allows the user to see the target area, while also giving the system a way to track the user’s location and pose.
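The zero-sum structure of the game can be sketched with a per-timestep score: the human earns a point whenever their sword tip is inside the target region and clear of the robot's sword, and the robot earns it otherwise. The thresholds, the Euclidean distance metric, and the `step_score` helper are assumptions for illustration, not the study's actual scoring rules.

```python
import math

def dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def step_score(human_tip, target_center, robot_tip,
               target_radius=0.3, hit_radius=0.1):
    """Return +1 if the human scores this timestep, -1 if the robot
    does. Zero-sum: one player's gain is the other's loss."""
    in_target = dist(human_tip, target_center) <= target_radius
    hit = dist(human_tip, robot_tip) <= hit_radius
    return 1 if (in_target and not hit) else -1

# Human inside the target, robot's sword far away: human scores.
print(step_score((0.1, 0.0, 1.0), (0.0, 0.0, 1.0), (1.0, 1.0, 1.0)))  # 1
```

Summing this score over a match is what makes the game genuinely adversarial: the robot's sword-waving directly subtracts from the human's total, which is the competitive pressure the researchers are after.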
Looks like fun, right? It’s also exercise, at least in the sense that the user’s heart rate nearly doubled over their resting heart rate during the highest scoring game. This is super preliminary research, though, and there’s still a lot of work to do. It’ll be important to figure out how skilled a competitive robot should be in order to keep providing a reasonable challenge to a human who gradually improves over time, while also being careful to avoid generating any negative reactions. For example, the robot should probably not beat you over the head with its foam sword, even if that’s a highly effective strategy for getting your heart rate up.
Competitive Physical Human-Robot Game Play, by Boling Yang, Xiangyu Xie, Golnaz Habibi, and Joshua R. Smith from the University of Washington and MIT, was presented as a late-breaking report at the ACM/IEEE International Conference on Human-Robot Interaction.