Category Archives: Human Robots

Everything about Humanoid Robots and Androids

#440373 Video Friday: An Agile Year

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2022: 23–27 May 2022, Philadelphia
ERF 2022: 28–30 June 2022, Rotterdam, Netherlands
CLAWAR 2022: 12–14 September 2022, Açores, Portugal

Let us know if you have suggestions for next week, and enjoy today's videos.
Agility had a busy 2021. This is a long video, but there's new stuff in it (or new to me, anyway), including impressive manipulation skills, robust perceptive locomotion, jumping, and some fun costumes.

[ Agility Robotics ]
Houston Mechatronics is now Nauticus Robotics, and they have a fancy new video to prove it.

[ Nauticus ]
Club_KUKA is an unprecedented KUKA show cell that combines entertainment and robotics with technical precision and artistic value. The show cell is home to a cool group called the Kjays: a KR3 AGILUS on the drums loops and sets the beat, a KR CYBERTECH nano is the nimble DJ with rhythm in its blood, a KR AGILUS performs as a light artist with soft, expansive movements, and an LBR iiwa mounted on the ceiling keeps an eye on the unusual robot party.
And if that was too much for you to handle (?), here's “chill mode:”

[ Kuka ]
The most amazing venue for the 2022 Winter Olympics is the canteen.

[ SCMP ]
A mini documentary thing on ANYbotics from Kaspersky, the highlight of which is probably a young girl meeting ANYmal on the street and asking the important questions, like whether it comes in any other colors.

[ ANYbotics ]
If you’re looking for a robot that can carry out maintenance tasks, our teleoperation systems can give you just that. Think of it as remote hands that are able to perform tasks, without you having to be there, on location. You’re still in full control, as the robot hands will replicate your hand movements. You can control the robot from anywhere you like, even from home, which is a much safer and environmentally friendly approach.
[ Shadow Robot ]
If I had fingers like this, I'd be pretty awesome at manipulating cubes, too.

[ Yale ]
The open-source, artificially intelligent prosthetic leg designed by researchers at the University of Michigan will be brought to the research market by Humotech, a Pittsburgh-based assistive technology company. The goal of the collaboration is to speed the development of control software for robotic prosthetic legs, which have the potential to provide the power and natural gait of a human leg to prosthetic users.
[ Michigan Robotics ]
This video is worth watching entirely for the shoulder-dislocating high-five.

[ Paper ]
Of everything in this SoftBank Robotics 2021 rewind, my favorite highlight is the giant rubber duck avoidance.

[ SoftBank ]
On this episode of the Robot Brains Podcast, Pieter talks with David Rolnick about how machine learning can be applied to climate change.

[ Robot Brains ]
A talk from Stanford's Mark Cutkosky on “Selectively Soft Robotics: Integrating Smart Materials in Soft Robotics.”

[ BDML ]
This is a very long video from Yaskawa, which goes over many (if not most or all) of the ways that its 500,000 industrial arms are currently being used. It's well labeled, so I recommend just skipping around to the interesting parts, like cow milking.

[ Yaskawa ]

Posted in Human Robots

#440369 Legged Robots Learn to Hike Harsh ...

Robots, like humans, generally use two different sensory modalities when interacting with the world. There’s exteroceptive perception (or exteroception), which comes from external sensing systems like lidar, cameras, and eyeballs. And then there’s proprioceptive perception (or proprioception), which is internal sensing, involving things like touch and force sensing. Generally, we humans use both of these sensing modalities at once to move around, with exteroception helping us plan ahead and proprioception kicking in when things get tricky. You use proprioception in the dark, for example, where movement is still totally possible—you just do it slowly and carefully, relying on balance and feeling your way around.
For legged robots, exteroception is what enables them to do all the cool stuff—with really good external sensing and the time (and compute) to do some awesome motion planning, robots can move dynamically and fast. Legged robots are much less comfortable in the dark, however, or really under any circumstances where the exteroception they need either doesn’t come through (because a sensor is not functional for whatever reason) or just totally sucks because of robot-unfriendly things like reflective surfaces or thick undergrowth or whatever. This is a problem because the real world is frustratingly full of robot-unfriendly things.
The research that the Robotic Systems Lab at ETH Zürich has published in Science Robotics showcases a control system that allows a legged robot to evaluate how reliable the exteroceptive information that it’s getting is. When the data are good, the robot plans ahead and moves quickly. But when the data set seems to be incomplete, noisy, or misleading, the controller gracefully degrades to proprioceptive locomotion instead. This means that the robot keeps moving—maybe more slowly and carefully, but it keeps moving—and eventually, it’ll get to the point where it can rely on exteroceptive sensing again. It’s a technique that humans and animals use, and now robots can use it too, combining speed and efficiency with safety and reliability to handle almost any kind of challenging terrain.
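The general idea is simple enough to sketch in code. The following is a toy illustration only, not the actual ETH Zürich controller (which learns this behavior end to end with a neural network), and every constant in it is made up: the elevation map is trusted in proportion to how well recent footfalls agreed with it, and both the terrain estimate and the commanded speed degrade gracefully as that trust drops.

```python
import numpy as np

def fuse_terrain_estimate(map_height, proprio_height, recent_contact_errors,
                          error_scale=0.05):
    """Blend exteroceptive (elevation-map) and proprioceptive terrain estimates.

    recent_contact_errors: mismatches, in meters, between where the map said the
    ground was and where the feet actually made contact. Large mismatches mean
    the map is unreliable (snow, fog, tall grass, reflective surfaces).
    """
    mean_error = float(np.mean(recent_contact_errors))
    confidence = np.exp(-mean_error / error_scale)  # 1.0 = trust the map fully
    height = confidence * map_height + (1.0 - confidence) * proprio_height
    return height, confidence

def commanded_speed(confidence, v_fast=1.2, v_careful=0.4):
    """Walk fast when exteroception is trusted, slow and careful when it is not."""
    return v_careful + confidence * (v_fast - v_careful)

# Example: the map reports a 0.3 m step, but the last few footfalls landed well
# below where the map predicted, so the robot leans on proprioception and slows down.
h, c = fuse_terrain_estimate(0.30, 0.05, recent_contact_errors=[0.12, 0.09, 0.15])
print(round(h, 3), round(c, 2), round(commanded_speed(c), 2))
```

In the published controller this weighting is learned rather than hand-tuned, which is what makes the transition between the two modes seamless.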
We got a compelling preview of this technique during the DARPA SubT Final Event last fall, when it was being used by Team Cerberus’s ANYmal legged robots to help them achieve victory. I’m honestly not sure whether the SubT final course was more or less challenging than some mountain climbing in Switzerland, but the performance in the video below is quite impressive, especially since ANYmal managed to complete the uphill portion of the hike 4 minutes faster than the suggested time for an average human.

Learning robust perceptive locomotion for quadrupedal robots in the wild


Those clips of ANYmal walking through dense vegetation and deep snow do a great job of illustrating how well the system functions. While the exteroceptive data are showing obstacles all over the place and wildly inaccurate ground heights, the robot knows where its feet are, and relies on that proprioceptive data to keep walking forward safely and without falling. Here are some other examples showing common problems with sensor data that ANYmal is able to power through:

Other legged robots do use proprioception for reliable locomotion, but what’s unique here is this seamless combination of speed and robustness, with the controller moving between exteroception and proprioception based on how confident it is about what it's seeing. And ANYmal’s performance on this hike, as well as during the SubT Final, is ample evidence of how well this approach works.
For more details, we spoke with Takahiro Miki, a Ph.D. student in the Robotic Systems Lab at ETH Zürich and first author on the paper.
The paper’s intro says, “Until now, legged robots could not match the performance of animals in traversing challenging real-world terrain.” Suggesting that legged robots can now “match the performance of animals” seems very optimistic. What makes you comfortable with that statement?
Takahiro Miki: Achieving a level of mobility similar to animals is probably the goal for many of us researchers in this area. However, robots are still far behind nature and this paper is only a tiny step in this direction.
Your controller enables robust traversal of “harsh natural terrain.” What does “harsh” mean, and can you describe the kind of terrain that would be in the next level of difficulty beyond “harsh”?
Miki: We aim to send robots to places that are too dangerous or difficult for humans to reach. In this work, by “harsh” we mean places that are hard for us, not only for robots. For example, steep hiking trails or snow-covered trails that are tricky to traverse. With our approach, the robot traversed steep and wet rocky surfaces, dense vegetation, and rough terrain in underground tunnels and natural caves with loose gravel, all at human walking speed.
We think the next level would be somewhere which requires precise motion with careful planning such as stepping-stones, or some obstacles that require more dynamic motion, such as jumping over a gap.
How much do you think having a human choose the path during the hike helped the robot be successful?
Miki: The intuition of the human operator choosing a feasible path for the robot certainly helped the robot’s success. Even though the robot is robust, it cannot walk over obstacles which are physically impossible, e.g., obstacles bigger than the robot or cliffs. In other scenarios such as during the DARPA SubT Challenge however, a high-level exploration and path planning algorithm guides the robot. This planner is aware of the capabilities of the locomotion controller and uses geometric cues to guide the robot safely. Achieving this for an autonomous hike in a mountainous environment, where a more semantic environment understanding is necessary, is our future work.
What impressed you the most in terms of what the robot was able to handle?
Miki: The snow stairs were the very first experiment we conducted outdoors with the current controller, and I was surprised that the robot could handle the slippery snowy stairs. Also during the hike, the terrain was quite steep and challenging. When I first checked the terrain, I thought it might be too difficult for the robot, but it could just handle all of them. The open stairs were also challenging due to the difficulty of mapping. Because the lidar scan passes through the steps, the robot couldn’t see the stairs properly. But the robot was robust enough to traverse them.
At what point does the robot fall back to proprioceptive locomotion? How does it know if the data its sensors are getting are false or misleading? And how much does proprioceptive locomotion impact performance or capabilities?
Miki: We think the robot detects whether the exteroception matches the proprioception through its foot contacts or foot positions. If the map is correct, the feet make contact where the map suggests, and the controller recognizes that the exteroception is correct and makes use of it. Once it experiences that the foot contact doesn’t match the ground on the map, or that the feet go below the map, it recognizes that the exteroception is unreliable and relies more on proprioception. We showed this in this supplementary video experiment:

Supplementary Robustness Evaluation


However, since we trained the neural network in an end-to-end manner, where the student policy just tries to follow the teacher’s action by trying to capture the necessary information in its belief state, we can only guess how it knows. In the initial approach, we were just directly inputting exteroception into the control policy. In this setup, the robot could walk over obstacles and stairs in the lab environment, but once we went outside, it failed due to mapping failures. Therefore, combining exteroception with proprioception was critical to achieving robustness.
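What Miki is describing is a privileged teacher-student setup: a teacher policy trained in simulation with perfect terrain information, and a student that sees only proprioception plus noisy exteroception and has to compress whatever it needs into a recurrent belief state while imitating the teacher's actions. A minimal sketch of that structure in PyTorch (layer sizes and names here are illustrative, not taken from the paper) looks something like this:

```python
import torch
import torch.nn as nn

class StudentPolicy(nn.Module):
    """Student that infers a terrain 'belief' from proprioception + noisy exteroception."""

    def __init__(self, proprio_dim=48, extero_dim=208, belief_dim=128, action_dim=12):
        super().__init__()
        # Recurrent belief encoder: integrates observations over time.
        self.belief_encoder = nn.GRU(proprio_dim + extero_dim, belief_dim, batch_first=True)
        self.action_head = nn.Sequential(
            nn.Linear(belief_dim, 128), nn.ELU(), nn.Linear(128, action_dim)
        )

    def forward(self, proprio, noisy_extero, hidden=None):
        obs = torch.cat([proprio, noisy_extero], dim=-1)   # (batch, time, features)
        belief, hidden = self.belief_encoder(obs, hidden)
        return self.action_head(belief), hidden

def distillation_step(student, optimizer, proprio, noisy_extero, teacher_actions):
    """One imitation step: the student mimics the teacher from degraded observations."""
    predicted_actions, _ = student(proprio, noisy_extero)
    loss = nn.functional.mse_loss(predicted_actions, teacher_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The published system is considerably more elaborate than this, but imitating a privileged teacher from degraded observations is the core that Miki is referring to.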
How much are you constrained by the physical performance of the robot itself? If the robot were stronger or faster, would you be able to take advantage of that?
Miki: When we use reinforcement learning, the policy usually tries to use as much torque and speed as it is allowed to use. Therefore if the robot was stronger or faster, we think we could increase robustness further and overcome more challenging obstacles with faster speed.
What remains challenging, and what are you working on next?
Miki: Currently, we steered the robot manually for most of the experiments (except the DARPA SubT Challenge). Adding more levels of autonomy is the next goal. As mentioned above, we want the robot to complete a difficult hike without human operators. Furthermore, there is a lot of room for improvement in the robot’s locomotion capability. For “harsher” terrains, we want the robot to perceive the world in 3D and manifest richer behaviors, such as jumping over stepping-stones or crawling under overhanging obstacles, which is not possible with the current 2.5D elevation map.

Posted in Human Robots

#440364 Sensor-Packed ‘Electronic Skin’ ...

Being able to beam yourself into a robotic body has all kinds of applications, from the practical to the fanciful. Existing interfaces that could make this possible tend to be bulky, but a wireless electronic skin made by Chinese researchers promises far more natural control.

While intelligent robots may one day be able to match humans’ dexterity and adaptability, they still struggle to carry out many of the tasks we’d like them to be able to do. In the meantime, many believe that creating ways for humans to teleoperate robotic bodies could be a useful halfway house.

The approach could be particularly useful for scenarios that are hazardous for humans yet still beyond the capabilities of autonomous robots. For instance, bomb disposal or radioactive waste cleanup, or more topically, medical professionals treating highly infectious patients.

While remote-controlled robots already exist, being able to control them through natural body movements could make the experience far more intuitive. It could also be crucial for developing practical robotic exoskeletons and better prosthetics, and even make it possible to create immersive entertainment experiences where users take control of a robotic body.

While solutions exist for translating human movement into signals for robots, they typically involve cumbersome equipment that the user has to wear or complicated computer vision systems.

Now, a team of researchers from China has created a flexible electronic skin packed with sensors, wireless transmitters, and tiny vibrating magnets that can provide haptic feedback to the user. By attaching these patches to various parts of the body like the hand, forearm, or knee, the system can record the user’s movements and transmit them to robotic devices.

The research, described in a paper published in Science Advances, builds on rapid advances in flexible electronics in recent years, but its major contribution is packing many components into a compact, powerful, and user-friendly package.

The system’s sensors rely on piezoresistive materials, whose electrical resistance changes when subjected to mechanical stress. This allows them to act as bending sensors, so when the patches are attached to a user’s joint, the change in resistance corresponds to the angle at which it is bent.

These sensors are connected to a central microcontroller via serpentine copper wires that snake up and down. This zigzag pattern allows the wires to expand easily when stretched or bent, preventing them from breaking under stress. The voltage signals from the sensors are then processed and transmitted via Bluetooth, either directly to a nearby robotic device or to a computer, which can pass them on via a local network or the internet.
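As a rough sketch of that signal chain (a generic illustration under simple assumptions, not the authors' firmware, and every constant is invented): read a piezoresistive bend sensor through a voltage divider, convert the resistance change to an approximate joint angle, and pack the result into a small payload ready to go out as a Bluetooth notification.

```python
import struct

V_SUPPLY = 3.3           # divider supply voltage (assumed)
R_FIXED = 10_000.0       # fixed divider resistor, ohms (assumed)
R_FLAT = 25_000.0        # sensor resistance with the joint straight (assumed)
OHMS_PER_DEGREE = 150.0  # linearized sensitivity, purely illustrative

def bend_angle_from_voltage(v_out):
    """Estimate a joint angle from the divider output of a piezoresistive bend sensor."""
    r_sensor = R_FIXED * (V_SUPPLY - v_out) / v_out    # solve the voltage divider
    return (r_sensor - R_FLAT) / OHMS_PER_DEGREE       # linear fit around the flat state

def pack_sample(joint_id, angle_deg):
    """Pack one joint sample into a compact payload for a Bluetooth notification."""
    return struct.pack("<Bf", joint_id, angle_deg)     # 1-byte ID + 32-bit float angle

# Example: a 0.825 V reading corresponds to about a 33-degree bend with these
# made-up constants; the receiver unpacks it with struct.unpack("<Bf", payload).
angle = bend_angle_from_voltage(0.825)
print(round(angle, 1), pack_sample(3, angle))
```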

Crucially, the researchers have also built in a feedback system. The same piezoresistive sensors can be attached to parts of the robotic device, for instance on the fingertips where they can act as pressure sensors.

Signals from these sensors are transmitted to the electronic skin, where they are used to control tiny magnets that vibrate at different frequencies depending on how much pressure was applied. The researchers showed that humans controlling a robotic hand could use the feedback to distinguish between cubes of rubber with varying levels of hardness.
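The feedback direction can be sketched just as simply. Assuming, purely for illustration, a linear map from fingertip pressure to the drive frequency of the haptic magnet (the paper's actual mapping and ranges aren't specified here):

```python
def pressure_to_vibration_hz(pressure_kpa, p_min=0.0, p_max=100.0,
                             f_min=50.0, f_max=300.0):
    """Map a fingertip pressure reading to a drive frequency for the haptic magnet.

    The pressure range and frequency band are illustrative, not from the paper.
    """
    p = min(max(pressure_kpa, p_min), p_max)            # clamp to the working range
    return f_min + (f_max - f_min) * (p - p_min) / (p_max - p_min)

# A stiffer rubber cube produces higher contact pressure and therefore a faster buzz,
# which is how users in the study could tell the cubes apart.
for p in (10, 40, 90):
    print(p, "kPa ->", round(pressure_to_vibration_hz(p)), "Hz")
```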

Importantly, the response time for the feedback signals was as low as 4 microseconds while operating directly over Bluetooth and just 350 microseconds operating over a local Wi-Fi network, which is below the 550 microseconds it takes for humans to react to tactile stimuli. Transmitting the signals over the internet led to considerably longer response times, though—between 30 and 50 milliseconds.

Nonetheless, the researchers showed that by combining different configurations of patches with visual feedback from VR goggles, human users could control a remote-controlled car with their fingers, use a robotic arm to carry out a COVID swab test, and even get a basic humanoid robot to walk, squat, clean a room, and help nurse a patient.

The patches are powered by an onboard lithium-ion battery that provides enough juice for all of its haptic feedback devices to operate continuously at full power for more than an hour. In standby mode it can last for nearly two weeks, and the device’s copper wires can even act as an antenna to wirelessly recharge the battery.

Inevitably, the system will still require considerable finessing before it can be used in real-world settings. But its impressive capabilities and neat design suggest that unobtrusive flexible sensors that could let us remotely control robots might not be too far away.

Image Credit: geralt / Pixabay

Posted in Human Robots

#440359 Video Friday: Guitar Bot

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2022: 23–27 May 2022, Philadelphia
ERF 2022: 28–30 June 2022, Rotterdam, Netherlands
CLAWAR 2022: 12–14 September 2022, Açores, Portugal

Let us know if you have suggestions for next week, and enjoy today's videos.
Robotics. It's a wicked game.

[ GA Tech ]
This experiment demonstrated the latest progress of the flying humanoid robot Jet-HR2. The new control strategy allows the robot to hover with position feedback from the motion-capture system. The video demonstrates the robot's ability to hover stably in midair for more than 20 seconds.
[ YouTube ]
Thanks, Zhifeng!
This super cool soft robotic finger from TU Berlin is able to read Braille with astonishing accuracy by using sound as a sensor.

[ TU Berlin ]
Cassie Blue navigates around furniture used as obstacles in the Ford Robotics Building at the University of Michigan. All the clips in this video are shown at 1x speed (not sped up) to show Cassie's motion.
[ Michigan Robotics ]
Thanks, Bruce!
Tapomayukh Bhattacharjee received a National Science Foundation (NSF) National Robotics Initiative (NRI) collaborative grant for a project that aims to give people with mobility limitations greater control and independence over their environments, especially in how they are fed, or better, how they can feed themselves with robotic assistance.
[ Cornell ]
A novel quadcopter capable of changing shape midflight is presented; it can operate in four configurations and sustain hover in three of them.
[ HiPeR Lab ]
Two EPFL research groups teamed up to develop a machine-learning program that can be connected to a human brain and used to command a robot. The program adjusts the robot’s movements based on electrical signals from the brain. The hope is that with this invention, tetraplegic patients will be able to carry out more day-to-day activities on their own.
[ EPFL ]
The MRV is SpaceLogistics’ next-generation on-orbit servicing vehicle, incorporating a robotic arm payload developed and integrated by the U.S. Naval Research Laboratory and provided by the U.S. Defense Advanced Research Projects Agency. In this test of Flight Robotic Arm System 1, the robotic arm is executing an exercise called the Gauntlet, which moves the arm through a series of poses that exercise the full motion of all seven degrees of freedom.
[ Northrop Grumman ]
You almost certainly can't afford it, but the Shadow Robot Co. would like to remind you that the Shadow Hand is for sale.

[ Shadow ]
Join ESA astronaut Matthias Maurer inside Kibo, the Japanese laboratory module of the International Space Station in 360°, setting up Astrobee free-flying robots for the ReSWARM (RElative Satellite sWArming and Robotic Maneuvering) experiment. This robotics demonstration tests autonomous microgravity motion planning and control for on-orbit assembly and coordinated motion.
[ NASA ]
Boeing's MQ-25 autonomous aerial tanker continues its U.S. Navy carrier testing.

[ Boeing ]
Sphero Sports is built for sports foundations, schools, and CSR-driven organizations to teach STEM subjects. Sphero Sports gets students excited about STEM education and proactively supports educators and soccer foundation staff to become comfortable in learning and teaching these critical skills.
[ Sphero ]
Adibot-A is Ubtech Robotics' fully loaded autonomous disinfection solution, which can be programmed and mapped to independently navigate one or multiple floor plans.
[ UBTECH ]
Survice Engineering Co. was proud to support the successful completion of the Unmanned Logistics System–Air (ULS-A) Joint Capability Technology Demonstration (JCTD) program as the lead system integrator. We worked with the U.S. government, leaders in autonomous unmanned systems, and our warfighters to develop, test, and evaluate the latest multirotor VTOL platforms and technologies for assured logistics resupply at the forward edge of the battlefield.
[ SURVICE ] via [ Malloy Aeronautics ]
Thanks, Chris!
Yaqing Wang from JHU's Terradynamics Lab gives a talk on trying to make a robot that is anywhere near as talented as a cockroach.

[ Terradynamics Lab ]
In episode one of season two of the Robot Brains podcast, host Pieter Abbeel is joined by guest (and close collaborator) Sergey Levine, professor at UC Berkeley, EECS. Sergey discusses the early years of his career, how Andrew Ng influenced his interest in machine learning, his current projects, and his lab's recent accomplishments.
[ The Robot Brains ]
Thanks, Alice!

Posted in Human Robots

#440357 This Autonomous Delivery Robot Has ...

Autonomous delivery was already on multiple companies’ research and development agenda before the pandemic, but when people stopped wanting to leave their homes it took on a whole new level of urgency (and potential for profit). Besides the fact that the pandemic doesn’t seem to be subsiding—note the continuous parade of new Greek-letter variants—our habits have been altered in a lasting way, with more people shopping online and getting groceries and other items delivered to their homes.

This week Nuro, a robotics company based in Mountain View, California, unveiled what it hopes will be a big player in last-mile delivery. The company’s third-generation autonomous delivery vehicle has some impressive features, and some clever ones—like external airbags that deploy if the vehicle hits a pedestrian (which hopefully won’t happen too often, if ever).

Despite being about 20 percent narrower than the average sedan, the delivery bot has 27 cubic feet of cargo space inside; for comparison’s sake, the tiny Smart ForTwo has 12.4 cubic feet of cargo space, while the Tesla Model S has 26. It can carry up to 500 pounds and travel at up to 45 miles per hour.

Image Credit: Nuro
Nuro has committed to minimizing its environmental footprint—the delivery bot runs on batteries, and according to the press release, the company will use 100 percent renewable electricity from wind farms in Texas to power the fleet (though it’s unclear how they’ll do this, as Texas is pretty far from northern California, and that’s where the vehicles will initially be operating; Nuro likely buys credits that go towards expanding wind energy in Texas).

Nuro’s first delivery bot was unveiled in 2018, followed by a second iteration in 2019. The company recently partnered with 7-Eleven to do autonomous deliveries in its hometown (Mountain View) using this second iteration, called the R2—though in the initial phase of the service, deliveries will be made by autonomous Priuses.

The newest version of the bot is equipped with sensors that can tell the difference between a pile of leaves and an animal, as well as how many pedestrians are standing at a crosswalk in dense fog. Nuro says the vehicle “was designed to feel like a friendly member of the community.” This sounds a tad dystopian—it is, after all, an autonomous robot on wheels—but the intention is in the right place. Customers will retrieve their orders and interact with the bot using a large exterior touchscreen.

Whether an optimal future is one where any product we desire can be delivered to our door within hours or minutes is a debate all its own, but it seems that’s the direction we’re heading in. Nuro will have plenty of competition in the last-mile delivery market, potentially including an Amazon system that releases multiple small wheeled robots from a large truck (Amazon patented the concept last year, but there’s been no further word about whether they’re planning to trial it). Nuro is building a manufacturing facility and test track in Nevada, and is currently in the pre-production phase.

Image Credit: Nuro

Posted in Human Robots