Category Archives: Human Robots
#439247 Drones and Sensors Could Spot Fires ...
The speed at which a wildfire can rip through an area and wreak havoc is nothing short of awe-inspiring and terrifying. Early detection of these events is critical for fire management efforts, whether that involves calling in firefighters or evacuating nearby communities.
Currently, early fire detection in remote areas is typically done by satellite—but this approach can be hindered by cloud cover. What’s more, even the most advanced satellite systems detect fires only once the burning area reaches an average size of 18.4 km2 (7.1 square miles).
To detect wildfires earlier, some researchers are proposing a novel solution that harnesses a network of Internet of Things (IoT) sensors and a fleet of drones, or unmanned aerial vehicles (UAVs). The researchers tested their approach through simulations, described in a study published May 5 in IEEE Internet of Things Journal, finding that it can detect fires covering just 2.5 km2 (just under one square mile) with near-perfect accuracy.
Their idea is timely, as climate change is driving an increase in wildfires in many regions of the world, as seen recently in California and Australia.
“In the last few years, the number, frequency, and severity of wildfires have increased dramatically worldwide, significantly impacting countries’ economies, ecosystems, and communities. Wildfire management presents a significant challenge in which early fire detection is key,” emphasizes Osama Bushnaq, a senior researcher at the Autonomous Robotics Research Center of the Technology Innovation Institute in Abu Dhabi, who was involved in the study.
The approach that Bushnaq and his colleagues are proposing involves a network of IoT sensors scattered throughout regions of concern, such as a national park or forests situated near communities. If a fire ignites, IoT devices deployed in the area will detect it and wait until a patrolling UAV is within transmission range to report their measurements. If a UAV receives multiple positive detections by the IoT devices, it will notify the nearby firefighting department that a wildfire has been verified.
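To make that reporting logic concrete, here is a minimal sketch of how a patrolling UAV might aggregate sensor reports before raising an alarm; the class names, transmission range, and confirmation threshold are illustrative assumptions rather than details from the paper.

```python
from dataclasses import dataclass

@dataclass
class SensorReport:
    sensor_id: int
    position: tuple          # (x, y) coordinates in meters
    fire_detected: bool

def within_range(uav_pos, sensor_pos, max_range_m=200.0):
    """Check whether a ground sensor can reach the UAV (illustrative range)."""
    dx = uav_pos[0] - sensor_pos[0]
    dy = uav_pos[1] - sensor_pos[1]
    return (dx * dx + dy * dy) ** 0.5 <= max_range_m

def uav_collects_and_verifies(uav_pos, reports, min_positives=3):
    """Collect reports from in-range sensors; alert only on multiple positive detections."""
    positives = [
        r for r in reports
        if within_range(uav_pos, r.position) and r.fire_detected
    ]
    if len(positives) >= min_positives:
        return {"alert": True, "confirming_sensors": [r.sensor_id for r in positives]}
    return {"alert": False, "confirming_sensors": []}
```

In this sketch, a single false positive from one sensor is not enough to notify the fire department; only agreement among several in-range sensors triggers an alert.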
The researchers evaluated a number of different UAVs and IoT sensors based on cost and features to determine the optimal combinations. Next, they tested their UAV-IoT approach through simulations, whereby 420 IoT sensors were deployed and 18 UAVs patrolled per square kilometer of simulated forest. The system could detect fires covering 2.5 km2 with greater than 99 percent accuracy. For smaller fires covering 0.5 km2, the approach yielded 69 percent accuracy.
These results suggest that, if an optimal number of UAVs and IoT devices are present, wildfires can be detected in a much shorter time than with satellite imaging. But Bushnaq acknowledges that this approach has its limitations. “UAV-IoT networks can only cover relatively smaller areas,” he explains. “Therefore, the UAV-IoT network would be particularly suitable for wildfire detection at high-risk regions.”
For these reasons, the researchers are proposing that the UAV-IoT approach be used alongside satellite imaging, which can cover vast areas but detects wildfires more slowly and less reliably.
Moving forward, the team plans to explore ways of further improving upon this approach, for example by optimizing the trajectory of the UAVs or addressing issues related to the battery life of UAVs.
Bushnaq envisions such UAV-IoT systems having much broader applications, too. “Although the system is designed for wildfire detection, it can be used for monitoring different forest parameters, such as wind speed, moisture content, or temperature estimation,” he says, noting that such a system could also be extended beyond the forest setting, for example by monitoring oil spills in bodies of water.
#439243 Scientists Added a Sense of Touch to a ...
Most people probably underestimate how much our sense of touch helps us navigate the world around us. New research makes that crystal clear: a robotic arm with the ability to feel halved the time its user needed to complete tasks.
In recent years, rapid advances in both robotics and neural interfaces have brought the dream of bionic limbs (like the one sported by Luke Skywalker in the Star Wars movies) within touching distance. In 2019, researchers even unveiled a robotic prosthetic arm with a sense of touch that the user could control with their thoughts alone.
But so far, these devices have typically relied on connecting to nerves and muscles in the patient’s residual upper arm. That has meant the devices don’t work for those who have been paralyzed or whose injuries have caused too much damage to those tissues.
That may be about to change, though. For the first time, researchers have allowed a patient to control a robotic arm using a direct connection to their brain while simultaneously receiving sensory information from the device. And by closing the loop, the patient was able to complete tasks in half the time compared to controlling the arm without any feedback.
“The control is so intuitive that I’m basically just thinking about things as if I were moving my own arm,” patient Nathan Copeland, who has been working with researchers at the University of Pittsburgh for six years, told NPR.
The results, reported in Science, build on previous work from the same team that showed they could use implants in Copeland’s somatosensory cortex to trigger sensations localized to regions of his hand, despite him having lost feeling and control thanks to a spinal cord injury.
The 28-year-old had also previously controlled an external robotic arm using a neural interface wired up to his motor cortex, but in the latest experiment the researchers combined the two strands of research, with impressive results.
In a series of tasks designed to test dexterity, including moving objects of different shapes and sizes and pouring from one cup to another, Copeland was able to reduce the time he took to complete these tasks from a median of 20 seconds to just 10, and his performance was often equivalent to that of an able-bodied person.
The sensory information that Copeland receives from the arm is still fairly rudimentary. Sensors measure torque in the joints at the base of the robotic fingers, which is then translated into electrical signals and transmitted to the brain. He reported that the feedback didn’t feel natural, but more like pressure or a gentle tingling.
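As a rough illustration of that pipeline, the sketch below maps a measured finger-joint torque to a stimulation amplitude; the scaling constants, amplitude limits, and function name are hypothetical, chosen only to show the shape of the transformation rather than the actual encoding used in the study.

```python
def torque_to_stim_amplitude(torque_nm,
                             torque_max_nm=0.5,   # illustrative sensor ceiling (N·m)
                             amp_min_ua=10.0,     # illustrative stimulation floor (µA)
                             amp_max_ua=80.0):    # illustrative safety ceiling (µA)
    """Linearly map a joint torque reading to a clamped stimulation amplitude."""
    fraction = max(0.0, min(torque_nm / torque_max_nm, 1.0))
    return amp_min_ua + fraction * (amp_max_ua - amp_min_ua)

# Example: a light grasp producing 0.1 N·m of torque maps to a modest pulse amplitude.
print(torque_to_stim_amplitude(0.1))  # 24.0
```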
But that’s still a lot more information than can be gleaned from simply watching the hand’s motions, which is all he had to go on before. And the approach required almost no training, unlike other popular approaches based on sensory substitution that stimulate a patch of skin or provide visual or audio cues that the patient has to learn to associate with tactile sensations.
“We still have a long way to go in terms of making the sensations more realistic and bringing this technology to people’s homes, but the closer we can get to recreating the normal inputs to the brain, the better off we will be,” Robert Gaunt, a co-author of the paper, said in a press release.
“When even limited and imperfect sensation is restored, the person’s performance improved in a pretty significant way.”
An external robotic arm is still a long way from a properly integrated prosthetic, and it will likely require significant work to squeeze all the required technology into a more portable package. But Bolu Ajiboye, a neural engineer from Case Western Reserve University, told Wired that providing realistic sensory signals directly to the brain, and in particular ones that are relayed in real time, is a significant advance.
In a related perspective in Science, Aldo Faisal of Imperial College London said that the integration of a sense of touch may not only boost the performance of prosthetics, but also give patients a greater sense of ownership over their replacement limbs.
The breakthrough, he added, also opens up a host of interesting lines of scientific inquiry, including whether similar approaches could help advance robotics or be used to augment human perception with non-biological sensors.
Image Credit: RAEng_Publications from Pixabay
#439241 The MIT humanoid robot: A dynamic ...
Creating robots that can perform acrobatic movements such as flips or spinning jumps can be highly challenging. Typically, in fact, these robots require sophisticated hardware designs, motion planners and control algorithms.
#439237 Agility Robotics’ Cassie Is Now ...
Bipedal robots are a huge hassle. They’re expensive, complicated, fragile, and they spend most of their time almost but not quite falling over. That said, bipeds are worth it because if you want a robot to go everywhere humans go, the conventional wisdom is that the best way to do so is to make robots that can walk on two legs like most humans do. And the most frequent, most annoying two-legged thing that humans do to get places? Going up and down stairs.
Stairs have been a challenge for robots of all kinds (bipeds, quadrupeds, tracked robots, you name it) since, well, forever. And usually, when we see bipeds going up or down stairs nowadays, it involves a lot of sensing, a lot of computation, and then a fairly brittle attempt that all too often ends in tears for whoever has to put that poor biped back together again.
You’d think that the solution to bipedal stair traversal would just involve better sensing and more computation to model the stairs and carefully plan footsteps. But an approach featured in an upcoming Robotics: Science and Systems conference paper from Oregon State University and Agility Robotics does away with all of that and instead just throws a Cassie biped at random outdoor stairs with absolutely no sensing at all. And it works spectacularly well.
A couple of things to bear in mind: Cassie is “blind” in the sense that it has no information about the stairs that it’s going up or down. The robot does get proprioceptive feedback, meaning that it knows what kind of contact its limbs are making with the stairs. Also, the researchers do an admirable job of keeping that safety tether slack, and Cassie isn’t being helped by it in the least—it’s just there to prevent a catastrophic fall.
What really bakes my noodle about this video is how amazing Cassie is at being kind of terrible at stair traversal. The robot is a total klutz: it runs into railings, stubs its toes, slips off of steps, misses steps completely, and occasionally goes backwards. Amazingly, Cassie still manages not only to not fall, but also to keep going until it gets where it needs to be.
And this is why this research is so exciting—rather than try to develop some kind of perfect stair traversal system that relies on high quality sensing and a lot of computation to optimally handle stairs, this approach instead embraces real-world constraints while managing to achieve efficient performance that’s real-world robust, if perhaps not the most elegant.
The secret to Cassie’s stair mastery isn’t much of a secret at all, since there’s a paper about it on arXiv. The researchers used reinforcement learning to train a simulated Cassie on permutations of stairs based on typical city building codes, with sets of stairs up to eight individual steps. To transfer the learned stair-climbing strategies (referred to as policies) effectively from simulation to the real world, the simulation included a variety of disturbances designed to represent the kinds of things that are hard to simulate accurately. For example, Cassie had its simulated joints messed with, its simulated processing speed tweaked, and even the simulated ground friction was jittered around. So, even though the simulation couldn’t perfectly mimic real ground friction, randomly mixing things up ensures that the controller (the software telling the robot how to move) gains robustness to a much wider range of situations.
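A minimal sketch of that style of dynamics randomization, applied at each episode reset, might look like the following; the parameter ranges and simulator structure are invented for illustration and are not the values used in the paper.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Joint:
    damping: float = 1.0
    mass: float = 1.0

@dataclass
class SimConfig:
    ground_friction: float = 1.0
    control_delay_ms: float = 0.0
    joints: list = field(default_factory=lambda: [Joint() for _ in range(10)])

def randomize_dynamics(cfg: SimConfig) -> SimConfig:
    """Perturb hard-to-model simulator properties at each episode reset (illustrative ranges)."""
    cfg.ground_friction = random.uniform(0.6, 1.2)     # jitter ground friction
    cfg.control_delay_ms = random.uniform(0.0, 10.0)   # mimic processing-speed variation
    for joint in cfg.joints:
        joint.damping *= random.uniform(0.9, 1.1)      # per-joint perturbation
        joint.mass *= random.uniform(0.95, 1.05)
    return cfg
```

Because the policy never trains against the same dynamics twice, it cannot memorize one particular simulator and has to learn behaviors that hold up under variation, which is what carries over to the real robot.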
One peculiarity of using reinforcement learning to train a robot is that even if you come up with something that works really well, it’s sometimes unclear exactly why. You may have noticed in the first video that the researchers are only able to hypothesize about the reasons for the controller’s success, and we asked one of the authors, Kevin Green, to try and explain what’s going on:
“Deep reinforcement learning has similar issues that we are seeing in a lot of machine learning applications. It is hard to understand the reasoning for why a learned controller performs certain actions. Is it exploiting a quirk of your simulation or your reward function? Is it perhaps stuck in a local minima? Sometimes the reward function is not specific enough and the policy can exhibit strange, vestigial behaviors simply because they are not rewarded or penalized. On the other hand, a reward function can be too constraining and can lead to a policy which doesn’t fully explore the space of possible actions, limiting performance. We do our best to ensure our simulation is accurate and that our rewards are objective and descriptive. From there, we really act more like biomechanists, observing a functioning system for hints as to the strategies that it is using to be highly successful.”
One of the strategies that they observed, first author Jonah Siekmann told us, is that Cassie does better on stairs when it’s moving faster, which is a bit of a counterintuitive thing for robots generally:
“Because the robot is blind, it can choose very bad foot placements. If it tries to place its foot on the very corner of a stair and shift its weight to that foot, the resulting force pushes the robot back down the stairs. At walking speed, this isn’t much of an issue because the robot’s momentum can overcome brief moments where it is being pushed backwards. At low speeds, the momentum is not sufficient to overcome a bad foot placement, and it will keep getting knocked backwards down the stairs until it falls. At high speeds, the robot tends to skip steps which pushes the robot closer to (and sometimes over) its limits.”
These bad foot placements are what lead to some of Cassie’s more impressive feats, Siekmann says. “Some of the gnarlier descents, where Cassie skips a step or three and recovers, were especially surprising. The robot also tripped on ascent and recovered in one step a few times. The physics are complicated, so to see those accurate reactions embedded in the learned controller was exciting. We haven’t really seen that kind of robustness before.” In case you’re worried that all of that robustness is in video editing, here’s an uninterrupted video of ten stair ascents and ten stair descents, featuring plenty of gnarliness.
We asked the researchers whether Cassie is better at stairs than a blindfolded human would be. “It’s difficult to say,” Siekmann told us. “We’ve joked lots of times that Cassie is superhuman at stair climbing because in the process of filming these videos we have tripped going up the stairs ourselves while we’re focusing on the robot or on holding a camera.”
A robot being better than a human at a dynamic task like this is obviously a very high bar, but my guess is that most of us humans are actually less prepared for blind stair navigation than Cassie is, because Cassie was explicitly trained on stairs that were uneven: “a small amount of noise (± 1cm) is added to the rise and run of each step such that the stairs are never entirely uniform, to prevent the policy from deducing the precise dimensions of the stairs via proprioception and subsequently overfitting to perfectly uniform stairs.” Speaking as someone who just tried jogging up my stairs with my eyes closed in the name of science, I absolutely relied on the assumption that my stairs were uniform. And when humans can’t rely on assumptions like that, it screws us up, even if we have eyeballs equipped.
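Here is a small sketch of how such non-uniform training stairs might be generated; the rise and run ranges are illustrative stand-ins for building-code values, while the ± 1 cm jitter mirrors the noise described in the paper.

```python
import random

def generate_stairs(num_steps=8, noise_m=0.01):
    """Generate one flight of stairs with per-step jitter so no flight is perfectly uniform."""
    rise = random.uniform(0.10, 0.20)   # nominal rise in meters (illustrative range)
    run = random.uniform(0.24, 0.30)    # nominal run in meters (illustrative range)
    steps = []
    for _ in range(num_steps):
        steps.append((
            rise + random.uniform(-noise_m, noise_m),
            run + random.uniform(-noise_m, noise_m),
        ))
    return steps
```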
Like most robot-y things, Cassie is operating under some significant constraints here. If Cassie seems even stompier than it usually is, that’s because it’s using this specific stair controller which is optimized for stairs and stair-like things but not much else.
“When you train neural networks to act as controllers, over time the learning algorithm refines the network so that it maximizes the reward specific to the environment that it sees,” explains Green. “This means that by training on flights of stairs, we get a very different looking controller compared to training on flat ground.” Green says that the stair controller works fine on flat ground, it’s just less efficient (and noisier). They’re working on ways of integrating multiple gait controllers that the robot can call on depending on what it’s trying to do; conceivably this might involve some very simple perception system just to tell the robot “hey look, there are some stairs, better engage stair mode.”
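One way to picture that kind of mode switching is a thin selector that hands control to the stair policy only when a lightweight perception module flags stairs ahead; everything below (the names, the boolean flag, the placeholder policies) is a hypothetical sketch, not the team's implementation.

```python
def select_controller(stairs_detected: bool, controllers: dict):
    """Pick a gait controller based on a simple perception flag."""
    return controllers["stairs"] if stairs_detected else controllers["flat"]

# Hypothetical usage: each controller maps an observation to joint targets.
controllers = {
    "flat": lambda obs: obs,     # placeholder flat-ground policy
    "stairs": lambda obs: obs,   # placeholder stair policy
}
active_policy = select_controller(stairs_detected=True, controllers=controllers)
```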
The paper ends with the statement that “this work has demonstrated surprising capabilities for blind locomotion and leaves open the question of where the limits lie.” I’m certainly surprised at Cassie’s stair capabilities, and it’ll be exciting to see what other environments this technique can be applied to. If there are limits, I’m sure that Cassie is going to try and find them.
Blind Bipedal Stair Traversal via Sim-to-Real Reinforcement Learning, by Jonah Siekmann, Kevin Green, John Warila, Alan Fern, and Jonathan Hurst from Oregon State University and Agility Robotics, will be presented at RSS 2021 in July.
#439235 Video Friday: Intelligent Drone Swarms
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
ICRA 2021 – May 30 – June 5, 2021 – [Online Event]
RoboCup 2021 – June 22-28, 2021 – [Online Event]
RSS 2021 – July 12-16, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27 – October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA
Let us know if you have suggestions for next week, and enjoy today's videos.
Drones in swarms (especially large swarms) generally rely on a centralized controller to keep them organized and from crashing into each other. But as swarms get larger and do more stuff, that's something that you can't always rely on, so folks at EPFL are working on a localized inter-drone communication system that can accomplish the same thing.
Predictive control of aerial swarms in cluttered environments, by Enrica Soria, Fabrizio Schiano and Dario Floreano from EPFL, is published this week in Nature.
[ EPFL ]
It takes a talented team of brilliant people to build Roxo, the first FedEx autonomous delivery robot. Watch this video to meet a few of the faces behind the bot, at FedEx Office and at DEKA Research.
Hey has anyone else noticed that the space between the E and the X in the FedEx logo looks kinda like an arrow?
[ FedEx ]
Thanks Fan!
Lingkang Zhang’s latest quadruped, ChiTu, runs ROS on a Raspberry Pi 4B. Despite being mostly 3D printed and using low-cost servos, it looks to be quite capable.
[ Lingkang Zhang ]
Thanks Lingkang!
Wolfgang-OP is an open-source humanoid platform designed for RoboCup, which means it's very good at falling over and not exploding.
[ Hamburg Bit-Bots ]
Thanks Fan!
NASA’s Perseverance rover has been on the surface of Mars since February of 2021, joining NASA’s Curiosity rover, which has been studying the Red Planet since 2012. Perseverance is now beginning to ramp up its science mission on Mars while preparing to collect samples that will be returned to Earth on a future mission. Curiosity is ready to explore some new Martian terrain. This video provides a mission update from Perseverance Surface Mission Manager Jessica Samuels and Curiosity Deputy Project Scientist Abigail Fraeman.
[ NASA ]
It seems kinda crazy to me that this is the best solution for this problem, but I’m glad it works.
[ JHU LCSR ]
At USC’s Center for Advanced Manufacturing, we have developed a spray-painting robot, which we used to paint a USC-themed Tommy Trojan mural.
[ USC ]
ABB Robotics is driving automation in the construction industry with new robotic automation solutions to address key challenges, including the need for more affordable and environmentally friendly housing and to reduce the environmental impact of construction, amidst a labor and skills shortage.
[ ABB ]
World’s first! Get to know our new avocado packing robot, the Speedpacker, which we have developed in conjunction with the machinery maker Selo. With this innovative robot, we pack avocados ergonomically and efficiently to be an even better partner for our customers and growers.
[ Nature's Pride ]
KUKA robots with high payload capacities were used for medical technology applications for the first time at the turn of the millennium. To this day, robots with payload capacities of up to 500 kilograms are a mainstay of medical robotics.
[ Kuka ]
We present a differential inverse kinematics control framework for task-space trajectory tracking, force regulation, obstacle and singularity avoidance, and pushing an object toward a goal location, with limited sensing and knowledge of the environment.
[ Dynamic Systems Lab ]
Should robots in the real world trust models? I wouldn't!
[ Science Robotics ]
Mark Muhn has worked with the US FES CYBATHLON team Cleveland since 2012. For FES cycling he uses surgically implanted, intramuscular electrodes. In CYBATHLON 2016 and 2020, Mark finished in first and third place, respectively. At the recent International IEEE EMBS Conference on Neural Engineering (NER21), he described the importance of user-centered design.
[ Cybathlon ]
This just-posted TEDx talk entitled “Towards the robots of science fiction” from Caltech's Aaron Ames was recorded back in 2019, which I mention only to alleviate any anxiety you might feel seeing so many people maskless indoors.
I don’t know exactly what Aaron was doing at 3:00, but I feel like we’ve all been there with one robot or another.
[ AMBER Lab ]
Are you ready for your close-up? Our newest space-exploring cameras are bringing the universe into an even sharper focus. Imaging experts on our Mars rovers teams will discuss how we get images from millions of miles away to your screens.
[ JPL ]
Some of the world's top universities have entered the DARPA Subterranean Challenge, developing technologies to map, navigate, and search underground environments. Led by CMU's Robotics Institute faculty members Sebastian Scherer and Matt Travers, as well as OSU's Geoff Hollinger, Team Explorer has earned first and second place positions in the first two rounds of competition. They look forward to this third and final year of the challenge, with the competition featuring all the subdomains of tunnel systems, urban underground, and cave networks. Sebastian, Matt, and Geoff discuss and demo some of the exciting technologies under development.
[ Explorer ]
An IFRR Global Robotics Colloquium on “The Future of Robotic Manipulation.”
Research in robotic manipulation has made tremendous progress in recent years. This progress has been brought about by researchers pursuing different, and possibly synergistic approaches. Prominent among them, of course, is deep reinforcement learning. It stands in opposition to more traditional, model-based approaches, which depend on models of geometry, dynamics, and contact. The advent of soft grippers and soft hands has led to substantial success, enabling many new applications of robotic manipulation. Which of these approaches represents the most promising route towards progress? Or should we combine them to push our field forward? How can we close the substantial gap between robotic and human manipulation capabilities? Can we identify and transfer principles of human manipulation to robots? These are some of the questions we will attempt to answer in this exciting panel discussion.
[ IFRR ]