Tag Archives: motion

#435683 How High Fives Help Us Get in Touch With ...

The human sense of touch is so naturally ingrained in our everyday lives that we often don’t notice its presence. Even so, touch is a crucial sensing ability that helps people to understand the world and connect with others. As the market for robots grows, and as robots become more ingrained into our environments, people will expect robots to participate in a wide variety of social touch interactions. At Oregon State University’s Collaborative Robotics and Intelligent Systems (CoRIS) Institute, I research how to equip everyday robots with better social-physical interaction skills—from playful high-fives to challenging physical therapy routines.

Some commercial robots already possess certain physical interaction skills. For example, the videoconferencing feature of mobile telepresence robots can keep far-away family members connected with one another. These robots can also roam distant spaces and bump into people, chairs, and other remote objects. And my Roomba occasionally tickles my toes before turning to vacuum a different area of the room. As a human being, I naturally interpret this (and other Roomba behaviors) as social, even if they were not intended as such. At the same time, for both of these systems, social perceptions of the robots’ physical interaction behaviors are not well understood, and these social touch-like interactions cannot be controlled in nuanced ways.

Before joining CoRIS early this year, I was a postdoc at the University of Southern California’s Interaction Lab, and prior to that, I completed my doctoral work at the GRASP Laboratory’s Haptics Group at the University of Pennsylvania. My dissertation focused on improving the general understanding of how robot control and planning strategies influence perceptions of social touch interactions. As part of that research, I conducted a study of human-robot hand-to-hand contact, focusing on an interaction somewhere between a high five and a hand-clapping game. I decided to study this particular interaction because people often high five, and they will likely expect robots in everyday spaces to high five as well!


The implications of motion and planning for the social touch experience in these interactions are also crucial—think about a disappointingly wimpy (or triumphantly amazing) high five that you’ve experienced in the past. This great or terrible high-fiving experience could be fleeting, but it could also influence who you interact with, who you’re friends with, and even how you perceive the character or personalities of those around you. This type of perception, judgment, and response could extend to personal robots, too!

An investigation like this requires a mixture of more traditional robotics research (e.g., understanding how to move and control a robot arm, developing models of the desired robot motion) along with techniques from design and psychology (e.g., performing interviews with research participants, using best practices from experimental methods in perception). Enabling robots with social touch abilities also comes with many challenges, and even skilled humans can have trouble anticipating what another person is about to do. Think about trying to make satisfying hand contact during a high five—you might know the classic adage “watch the elbow,” but if you’re like me, even this may not always work.

I conducted a research study involving eight different types of human-robot hand contact, with different combinations of the following: interactions with a facially reactive or non-reactive robot, a physically reactive or non-reactive planning strategy, and a lower or higher robot arm stiffness. My robotic system could become facially reactive by changing its facial expression in response to hand contact, or physically reactive by updating its plan of where to move next after sensing hand contact. The stiffness of the robot could be adjusted by changing a variable that controlled how quickly the robot’s motors tried to pull its arm to the desired position. I knew from previous research that fine differences in touch interactions can have a big impact on perceived robot character. For example, if a robot grips an object too tightly or for too long while handing an object to a person, it might be perceived as greedy, possessive, or perhaps even Sméagol-like. A robot that lets go too soon might appear careless or sloppy.
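As a rough illustration of that single stiffness setting, here is a minimal sketch assuming a simple joint-space PD (impedance-style) control law. It is illustrative only, not the actual Baxter controller used in the study, and the gain values are made up.

```python
import numpy as np

def joint_torque(q, q_desired, dq, stiffness, damping_ratio=0.7):
    """Torque pulling each joint toward its desired position.

    `stiffness` is the single number that sets how hard the motors pull
    toward the target: raising it makes the arm track its planned trajectory
    more precisely, while lowering it makes hand contact feel softer.
    """
    kp = stiffness
    kd = 2.0 * damping_ratio * np.sqrt(kp)  # damping scaled with stiffness
    return kp * (q_desired - q) - kd * dq

# Same tracking error, two stiffness settings (7-joint arm, radians)
q, q_des, dq = np.zeros(7), np.full(7, 0.1), np.zeros(7)
print(joint_torque(q, q_des, dq, stiffness=20.0))   # compliant, softer high five
print(joint_torque(q, q_des, dq, stiffness=200.0))  # stiff, precise tracking
```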

In the example cases of robot grip, it’s clear that understanding people’s perceptions of robot characteristics and personality can help roboticists choose the right robot design based on the proposed operating environment of the robot. I likewise wanted to learn how the facial expressions, physical reactions, and stiffness of a hand-clapping robot would influence human perceptions of robot pleasantness, energeticness, dominance, and safety. Understanding this relationship can help roboticists to equip robots with personalities appropriate for the task at hand. For example, a robot assisting people in a grocery store may need to be designed with a high level of pleasantness and only moderate energy, while a maximally effective robot for comedy roast battles may need high degrees of energy and dominance above all else.

After many a late night at the GRASP Lab clapping hands with a big red robot, I was ready to conduct the study. Twenty participants visited the lab to clap hands with our Baxter Research Robot and help me begin to understand how characteristics of this humanoid robot’s social touch influenced its pleasantness, energeticness, dominance, and apparent safety. Baxter interacted with participants using a custom 3D-printed hand that was inlaid with silicone inserts.

The study showed that a facially reactive robot seemed more pleasant and energetic. A physically reactive robot seemed less pleasant, energetic, and dominant for this particular study design and interaction. I thought contact with a stiffer robot would seem harder (and therefore more dominant and less safe), but counter to my expectations, a stiffer-armed robot seemed safer and less dominant to participants. This may be because the stiffer robot was more precise in following its pre-programmed trajectory, therefore seeming more predictable and less free-spirited.

Safety ratings of the robot were generally high, and several participants commented positively on the robot’s facial expressions. Some participants attributed inventive (and non-existent) intelligences to the robot—I used neither computer vision nor the Baxter robot’s cameras in this study, but more than one participant complimented me on how well the robot tracked their hand position. While interacting with the robot, participants displayed happy facial expressions more than any other analyzed type of expression.

Photo: Naomi Fitter

Participants were asked to clap hands with Baxter and describe how they perceived the robot in terms of its pleasantness, energeticness, dominance, and apparent safety.

Circling back to the idea of how people might interpret even rudimentary and practical robot behaviors as social, these results show that this type of social perception doesn’t just apply to my lovable (but sometimes dopey) Roomba, but also to collaborative industrial robots, and generally, any robot capable of physical human-robot interaction. When designing Baxter’s motion, adjusting a single number in the equation that controls joint stiffness can flip the robot from seeming safe and docile to brash and commanding. These implications are sometimes predictable, but often unexpected.

The results of this particular study give us a partial guide to manipulating the emotional experience of robot users by adjusting aspects of robot control and planning, but future work is needed to fully understand the design space of social touch. Will materials play a major role? How about personalized machine learning? Do results generalize over all robot arms, or even a specialized subset like collaborative industrial robot arms? I’m planning to continue answering these questions, and when I finally solve human-robot social touch, I’ll high five all my robots to celebrate.

Naomi Fitter is an assistant professor in the Collaborative Robotics and Intelligent Systems (CoRIS) Institute at Oregon State University, where her Social Haptics, Assistive Robotics, and Embodiment (SHARE) research group aims to equip robots with the ability to engage and empower people in interactions from playful high-fives to challenging physical therapy routines. She completed her doctoral work in the GRASP Laboratory’s Haptics Group and was a postdoctoral scholar in the University of Southern California’s Interaction Lab from 2017 to 2018. Naomi’s not-so-secret pastime is performing stand-up and improv comedy.


#435662 Video Friday: This 3D-Printed ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
Let us know if you have suggestions for next week, and enjoy today’s videos.

We’re used to seeing bristle bots about the size of a toothbrush head (which is not a coincidence), but Georgia Tech has downsized them, with some interesting benefits.

Researchers have created a new type of tiny 3D-printed robot that moves by harnessing vibration from piezoelectric actuators, ultrasound sources or even tiny speakers. Swarms of these “micro-bristle-bots” might work together to sense environmental changes, move materials – or perhaps one day repair injuries inside the human body.

The prototype robots respond to different vibration frequencies depending on their configurations, allowing researchers to control individual bots by adjusting the vibration. Approximately two millimeters long – about the size of the world’s smallest ant – the bots can cover four times their own length in a second despite the physical limitations of their small size.

“We are working to make the technology robust, and we have a lot of potential applications in mind,” said Azadeh Ansari, an assistant professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. “We are working at the intersection of mechanics, electronics, biology and physics. It’s a very rich area and there’s a lot of room for multidisciplinary concepts.”

[ Georgia Tech ]

Most consumer drones are “multi-copters,” meaning that they have a series of rotors or propellers that allow them to hover like helicopters. But having rotors severely limits their energy efficiency, which means that they can’t easily carry heavy payloads or fly for long periods of time. To get the best of both worlds, drone designers have tried to develop “hybrid” fixed-wing drones that can fly as efficiently as airplanes, while still taking off and landing vertically like multi-copters.

These drones are extremely hard to control because of the complexity of dealing with their flight dynamics, but a team from MIT CSAIL aims to make the customization process easier, with a new system that allows users to design drones of different sizes and shapes that can nimbly switch between hovering and gliding – all by using a single controller.

In future work, the team plans to further increase the drone’s maneuverability by improving its design. The model doesn’t yet fully take into account complex aerodynamic effects between the propeller’s airflow and the wings. And lastly, their method trained the copter with “yaw velocity” set at zero, which means that it cannot currently perform sharp turns.

[ Paper ] via [ MIT ]

We’re not quite at the point where we can 3D print entire robots, but UCSD is getting us closer.

The UC San Diego researchers’ insight was twofold. They turned to a commercially available printer for the job (the Stratasys Objet350 Connex3—a workhorse in many robotics labs). In addition, they realized that one of the materials used by the 3D printer is made of carbon particles that can conduct power to sensors when connected to a power source. So the roboticists used the black resin to manufacture complex sensors embedded within robotic parts made of clear polymer. They designed and manufactured several prototypes, including a gripper.

When stretched, the sensors failed at approximately the same strain as human skin. But the polymers the 3D printer uses are not designed to conduct electricity, so their performance is not optimal. The 3D printed robots also require a lot of post-processing before they can be functional, including careful washing to clean up impurities and drying.

However, researchers remain optimistic that in the future, materials will improve and make 3D printed robots equipped with embedded sensors much easier to manufacture.

[ UCSD ]

Congrats to Team Homer from the University of Koblenz-Landau, who won the RoboCup@Home world championship in Sydney!

[ Team Homer ]

When you’ve got a robot with both wheels and legs, motion planning is complicated. IIT has developed a new planner for CENTAURO that takes advantage of the different ways that the robot is able to get past obstacles.

[ Centauro ]

Thanks Dimitrios!

If you constrain a problem tightly enough, you can solve it even with a relatively simple robot. Here’s an example of an experimental breakfast robot named “Loraine” that can cook eggs, bacon, and potatoes using what looks to be zero sensing at all, just moving to different positions and actuating its gripper.

There’s likely to be enough human work required in the prep here to make the value that the robot adds questionable at best, but it’s a good example of how you can make a relatively complex task robot-compatible as long as you set it up in just the right way.

[ Connected Robotics ] via [ RobotStart ]

It’s been a while since we’ve seen a ball bot, and I’m not sure that I’ve ever seen one with a manipulator on it.

[ ETH Zurich RSL ]

Soft Robotics’ new mini fingers are able to pick up taco shells without shattering them, which as far as I can tell is 100 percent impossible for humans to do.

[ Soft Robotics ]

Yes, Starship’s wheeled robots can climb curbs, and indeed they have a pretty neat way of doing it.

[ Starship ]

Last year we posted a long interview with Christoph Bartneck about his research into robots and racism, and here’s a nice video summary of the work.

[ Christoph Bartneck ]

Canada’s contribution to the Lunar Gateway will be a smart robotic system that includes a next-generation robotic arm known as Canadarm3, as well as equipment and specialized tools. Using cutting-edge software and advances in artificial intelligence, this highly autonomous system will be able to maintain, repair, and inspect the Gateway, capture visiting vehicles, relocate Gateway modules, help astronauts during spacewalks, and enable science both in lunar orbit and on the surface of the Moon.

[ CSA ]

An interesting demo of how Misty can integrate sound localization with other services.

[ Misty Robotics ]

The third and final period of the H2020 AEROARMS project has brought the final developments in industrial inspection and maintenance tasks, such as crawler retrieval and deployment (DLR) and industrial validation in settings like a refinery and a cement factory.

[ Aeroarms ]

The Guardian S remote visual inspection and surveillance robot navigates a disaster training site to demonstrate its advanced maneuverability, long-range wireless communications and extended run times.

[ Sarcos ]

This appears to be a cake frosting robot and I wish I had like 3 more hours of this to share:

Also here is a robot that picks fried chicken using a curiously successful technique:

[ Kazumichi Moriyama ]

This isn’t strictly robots, but professor Hiroshi Ishii, associate director of the MIT Media Lab, gave a fascinating SIGCHI Lifetime Achievement Talk that’s absolutely worth your time.

[ Tangible Media Group ]


#435658 Video Friday: A Two-Armed Robot That ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
Let us know if you have suggestions for next week, and enjoy today’s videos.

I’m sure you’ve seen this video already because you read this blog every day, but if you somehow missed it because you were skiing across Antarctica (the only valid excuse we’re accepting today), here’s our video introducing HMI’s Aquanaut transforming robot submarine.

And after you recover from all that frostbite, make sure to read our in-depth feature article here.

[ Aquanaut ]

Last week we complained about not having seen a ballbot with a manipulator, so Roberto from CMU shared a new video of their ballbot, featuring a pair of 7-DoF arms.

We should learn more at Humanoids 2019.

[ CMU ]

Thanks Roberto!

The FAA is making it easier for recreational drone pilots to get near-realtime approval to fly in lightly controlled airspace.

[ LAANC ]

Self-reconfigurable modular robots are usually composed of multiple modules with uniform docking interfaces, and they can transform themselves into different configurations. The reconfiguration planning problem is to find the sequence of reconfiguration actions required for one arrangement of modules to transform into another. We present a novel reconfiguration planning algorithm for modular robots. The algorithm compares the initial configuration with the goal configuration efficiently. The reconfiguration actions can be executed in a distributed manner, so that each module can efficiently finish its reconfiguration task, which results in a global reconfiguration of the system. Finally, the algorithm is demonstrated on real modular robots and some example reconfiguration tasks are provided.
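To make the comparison step concrete, here’s a toy sketch (not the CKbot planner itself): treat each configuration as a set of docking connections and diff the two sets to see which links must be broken and which must be made. The module and port names are invented for illustration.

```python
def reconfiguration_actions(initial, goal):
    """Return (disconnect, connect) sets of docking links between two configurations."""
    initial, goal = set(initial), set(goal)
    disconnect = initial - goal   # links present now but absent from the goal
    connect = goal - initial      # links the goal needs but that don't exist yet
    return disconnect, connect

# Each link is a frozenset of two (module, port) endpoints, so order doesn't matter.
snake = [frozenset({("M1", "back"), ("M2", "front")}),
         frozenset({("M2", "back"), ("M3", "front")})]
loop = snake + [frozenset({("M3", "back"), ("M1", "front")})]

disconnect, connect = reconfiguration_actions(snake, loop)
print(disconnect)  # empty set: nothing to undock
print(connect)     # one new docking action closes the loop
```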

[ CKbot ]

A nice design of a gripper that uses a passive thumb of sorts to pick up flat objects from flat surfaces.

[ Paper ] via [ Laval University ]

I like this video of a palletizing robot from Kawasaki because in the background you can see a human doing the exact same job and obviously not enjoying it.

[ Kawasaki ]

This robot cleans and “brings joy and laughter.” What else do we need?

I do appreciate that all the robots are named Leo, and that they’re also all female.

[ LionsBot ]

This is less of a dishwashing robot and more of a dishsorting robot, but we’ll forgive it because it doesn’t drop a single dish.

[ TechMagic ]

Thanks Ryosuke!

A slight warning here that the robot in the following video (which costs something like $180,000) appears “naked” in some scenes, none of which are strictly objectionable, we hope.

Beautifully slim and delicate life-size motion figures are ideal avatars for expressing emotions to customers in various arts, content, and businesses. We can provide a system that integrates not only motion figures but all moving devices.

[ Speecys ]

The best way to operate a Husky with a pair of manipulators on it is to become the robot.

[ UT Austin ]

The FlyJacket drone control system from EPFL has been upgraded so that it can yank you around a little bit.

In several fields of human-machine interaction, haptic guidance has proven to be an effective training tool for enhancing user performance. This work presents the results of psychophysical and motor learning studies that were carried out with human participants to assess the effect of cable-driven haptic guidance for a task involving aerial robotic teleoperation. The guidance system was integrated into an exosuit, called the FlyJacket, that was developed to control drones with torso movements. Results for the Just Noticeable Difference (JND) and from the Stevens Power Law suggest that the perception of force on the users’ torso scales linearly with the amplitude of the force exerted through the cables, and that the perceived force is close to the magnitude of the stimulus. Motor learning studies reveal that this form of haptic guidance improves user performance in training, but this improvement is not retained when participants are evaluated without guidance.
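For reference, here is what that result looks like in terms of Stevens’ power law; the exponent and gain shown are illustrative readings of “scales linearly” and “close to the magnitude of the stimulus,” not the paper’s fitted values.

```latex
% Stevens' power law: perceived intensity \psi versus stimulus magnitude \varphi.
% An exponent a near 1 gives the linear scaling reported above, and a gain k
% near 1 means the perceived force roughly matches the applied force.
\[
    \psi(\varphi) = k\,\varphi^{a}, \qquad a \approx 1,\; k \approx 1
    \;\;\Longrightarrow\;\; \psi(\varphi) \approx \varphi .
\]
```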

[ EPFL ]

The SAND Challenge is an opportunity for small businesses to compete in an autonomous unmanned aerial vehicle (UAV) competition to help NASA address safety-critical risks associated with flying UAVs in the national airspace. Set in a post-natural disaster scenario, SAND will push the envelope of aviation.

[ NASA ]

Legged robots have the potential to traverse diverse and rugged terrain. To find a safe and efficient navigation path and to carefully select individual footholds, it is useful to predict properties of the terrain ahead of the robot. In this work, we propose a method to collect data from robot-terrain interaction and associate it to images, to then train a neural network to predict terrain properties from images.

[ RSL ]

Misty wants to be your new receptionist.

[ Misty Robotics ]

For years, we’ve been pointing out that while new Roombas have lots of great features, older Roombas still do a totally decent job of cleaning your floors. This video is a performance comparison between the newest Roomba (the S9+) and the original 2002 Roomba (!), and the results will surprise you. Or maybe they won’t.

[ Vacuum Wars ]

Lex Fridman from MIT interviews Chris Urmson, who was involved in some of the earliest autonomous vehicle projects, Google’s original self-driving car among them, and is currently CEO of Aurora Innovation.

Chris Urmson was the CTO of the Google Self-Driving Car team, a key engineer and leader behind the Carnegie Mellon autonomous vehicle entries in the DARPA Grand Challenges, and the winner of the DARPA Urban Challenge. Today he is the CEO of Aurora Innovation, an autonomous vehicle software company he started with Sterling Anderson, the former director of Tesla Autopilot, and Drew Bagnell, Uber’s former autonomy and perception lead.

[ AI Podcast ]

In this week’s episode of Robots in Depth, Per speaks with Lael Odhner from RightHand Robotics.

Lael Odhner is a co-founder of RightHand Robotics, which is developing a gripper based on the combination of control and soft, compliant parts to get better grasping of objects. Their work focuses on grasping and manipulating everyday human objects in everyday environments. This mimics how human hands combine control and flexibility to grasp objects with great dexterity.

The combination of control and compliance makes the RightHand Robotics gripper very lightweight and affordable. The compliance makes it easier to grasp objects of unknown shape and differs from the way industrial robots usually grip. The compliance also helps in more unstructured environments, where contact with the object and its surroundings cannot be exactly predicted.

[ RightHand Robotics ] via [ Robots in Depth ]


#435640 Video Friday: This Wearable Robotic Tail ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
CLAWAR 2019 – August 26-28, 2019 – Kuala Lumpur, Malaysia
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

Lakshmi Nair from Georgia Tech describes some fascinating research towards robots that can create their own tools, as presented at ICRA this year:

Using a novel capability to reason about shape, function, and attachment of unrelated parts, researchers have for the first time successfully trained an intelligent agent to create basic tools by combining objects.

The breakthrough comes from Georgia Tech’s Robot Autonomy and Interactive Learning (RAIL) research lab and is a significant step toward enabling intelligent agents to devise more advanced tools that could prove useful in hazardous – and potentially life-threatening – environments.

[ Lakshmi Nair ]

Victor Barasuol, from the Dynamic Legged Systems Lab at IIT, wrote in to share some new research on their HyQ quadruped that enables sensorless shin collision detection. This helps the robot navigate unstructured environments, and also mitigates all those painful shin strikes, because ouch.

This will be presented later this month at the International Conference on Climbing and Walking Robots (CLAWAR) in Kuala Lumpur, Malaysia.

[ IIT ]

Thanks Victor!

You used to have a tail, you know—as an embryo, about a month into your development. All mammals used to have tails, and now we just have useless tailbones, which don’t help us with balancing even a little bit. BRING BACK THE TAIL!

The tail, created by Junichi Nabeshima, Kouta Minamizawa, and MHD Yamen Saraiji from Keio University’s Graduate School of Media Design, was presented at SIGGRAPH 2019 Emerging Technologies.

[ Paper ] via [ Gizmodo ]

The noises in this video are fantastic.

[ ESA ]

Apparently the industrial revolution wasn’t a thorough enough beatdown of human knitting, because the robots are at it again.

[ MIT CSAIL ]

Skydio’s drones just keep getting more and more impressive. Now if only they’d make one that I can afford…

[ Skydio ]

The only thing more fun than watching robots is watching people react to robots.

[ SEER ]

There aren’t any robots in this video, but it’s robotics-related research, and very soothing to watch.

[ Stanford ]

#autonomousicecreamtricycle

In case it wasn’t clear, which it wasn’t, this is a Roboy project. And if you didn’t understand that first video, you definitely won’t understand this second one:

Whatever that t-shirt is at the end (Roboy in sunglasses puking rainbows…?), I need one.

[ Roboy ]

By adding electronics and computation technology to a simple cane that has been around since ancient times, a team of researchers at Columbia Engineering has transformed it into a 21st-century robotic device that can provide light-touch assistance in walking to the aged and others with impaired mobility.

The light-touch robotic cane, called CANINE, acts as a cane-like mobile assistant. The device improves the individual’s proprioception, or self-awareness in space, during walking, which in turn improves stability and balance.

[ ROAR Lab ]

During the second field experiment for DARPA’s OFFensive Swarm-Enabled Tactics (OFFSET) program, which took place at Fort Benning, Georgia, teams of autonomous air and ground robots tested tactics on a mission to isolate an urban objective. Similar to the way a firefighting crew establishes a boundary around a burning building, they first identified locations of interest and then created a perimeter around the focal point.

[ DARPA ]

I think there’s a bit of new footage here of Ghost Robotics’ Vision 60 quadruped walking around without sensors on unstructured terrain.

[ Ghost Robotics ]

If you’re as tired of passenger drone hype as I am, there’s absolutely no need to watch this video of NEC’s latest hover test.

[ AP ]

As researchers teach robots to perform more and more complex tasks, the need for realistic simulation environments is growing. Existing techniques for closing the reality gap by approximating real-world physics often require extensive real world data and/or thousands of simulation samples. This paper presents TuneNet, a new machine learning-based method to directly tune the parameters of one model to match another using an iterative residual tuning technique. TuneNet estimates the parameter difference between two models using a single observation from the target and minimal simulation, allowing rapid, accurate and sample-efficient parameter estimation.

The system can be trained via supervised learning over an auto-generated simulated dataset. We show that TuneNet can perform system identification, even when the true parameter values lie well outside the distribution seen during training, and demonstrate that simulators tuned with TuneNet outperform existing techniques for predicting rigid body motion. Finally, we show that our method can estimate real-world parameter values, allowing a robot to perform sim-to-real task transfer on a dynamic manipulation task unseen during training. We are also making a baseline implementation of our code available online.
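For intuition, here’s a hedged sketch of the iterative residual tuning loop the abstract describes. The simulator and residual-prediction model below are stand-in placeholders (TuneNet’s predictor is a trained neural network), so names like `simulate` and `predict_residual` are assumptions for illustration, not the authors’ API.

```python
import numpy as np

def simulate(params):
    # Placeholder "simulator": the observation is a simple function of the parameters.
    return np.array([2.0 * params[0], params[1] ** 2])

def predict_residual(obs_sim, obs_target):
    # Placeholder for the learned model: fake a parameter update from the
    # observation difference with a fixed linear map.
    diff = obs_target - obs_sim
    return np.array([0.5 * diff[0], 0.25 * diff[1]])

def tune(params_init, obs_target, n_iters=10):
    """Iteratively nudge parameter estimates so the simulation matches the target."""
    params = np.array(params_init, dtype=float)
    for _ in range(n_iters):
        obs_sim = simulate(params)                        # one rollout of the current model
        params += predict_residual(obs_sim, obs_target)   # apply the estimated residual
    return params

true_params = np.array([1.5, 0.8])
estimate = tune([1.0, 1.0], simulate(true_params))
print(estimate)  # should approach [1.5, 0.8]
```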

[ Paper ]

Here’s an update on what GITAI has been up to with their telepresence astronaut-replacement robot.

[ GITAI ]

Curiosity captured this 360-degree panorama of a location on Mars called “Teal Ridge” on June 18, 2019. This location is part of a larger region the rover has been exploring called the “clay-bearing unit” on the side of Mount Sharp, which is inside Gale Crater. The scene is presented with a color adjustment that approximates white balancing to resemble how the rocks and sand would appear under daytime lighting conditions on Earth.

[ MSL ]

Some updates (in English) on ROS from ROSCon France. The first is a keynote from Brian Gerkey:

And this second video is from Omri Ben-Bassat, about how to keep your Anki Vector alive using ROS:

All of the ROSCon FR talks are available on Vimeo.

[ ROSCon FR ]


#435621 ANYbotics Introduces Sleek New ANYmal C ...

Quadrupedal robots are making significant advances lately, and just in the past few months we’ve seen Boston Dynamics’ Spot hauling a truck, IIT’s HyQReal pulling a plane, MIT’s MiniCheetah doing backflips, Unitree Robotics’ Laikago towing a van, and Ghost Robotics’ Vision 60 exploring a mine. Robot makers are betting that their four-legged machines will prove useful in a variety of applications in construction, security, delivery, and even at home.

ANYbotics has been working on such applications for years, testing out their ANYmal robot in places where humans typically don’t want to go (like offshore platforms) as well as places where humans really don’t want to go (like sewers), and they have a better idea than most companies what can make quadruped robots successful.

This week, ANYbotics is announcing a completely new quadruped platform, ANYmal C, a major upgrade from the really quite research-y ANYmal B. The new quadruped has been optimized for ruggedness and reliability in industrial environments, with a streamlined body painted a color that lets you know it means business.

ANYmal C’s physical specs are pretty impressive for a production quadruped. It can move at 1 meter per second, manage 20-degree slopes and 45-degree stairs, cross 25-centimeter gaps, and squeeze through passages just 60 centimeters wide. It’s packed with cameras and 3D sensors, including a lidar for 3D mapping and simultaneous localization and mapping (SLAM). All these sensors (along with the vast volume of gait research that’s been done with ANYmal) make this one of the most reliably autonomous quadrupeds out there, with real-time motion planning and obstacle avoidance.

Image: ANYbotics

ANYmal can autonomously attach itself to a cone-shaped docking station to recharge.

ANYmal C is also one of the ruggedest legged robots in existence. The 50-kilogram robot is IP67 rated, meaning that it’s completely impervious to dust and can withstand being submerged in a meter of water for an hour. If it’s submerged for longer than that, you’re absolutely doing something wrong. The robot will run for over 2 hours on battery power, and if that’s not enough endurance, don’t worry, because ANYmal can autonomously impale itself on a weird cone-shaped docking station to recharge.

Photo: ANYbotics

ANYmal C’s sensor payload includes cameras and a lidar for 3D mapping and SLAM.

As far as what ANYmal C is designed to actually do, it’s mostly remote inspection tasks where you need to move around through a relatively complex environment, but where for whatever reason you’d be better off not sending a human. ANYmal C has a sensor payload that gives it lots of visual options, like thermal imaging, and with the ability to handle a 10-kilogram payload, the robot can be adapted to many different environments.

Over the next few months, we’re hoping to see more examples of ANYmal C being deployed to do useful stuff in real-world environments, but for now, we do have a bit more detail from ANYbotics CTO Christian Gehring.

IEEE Spectrum: Can you tell us about the development process for ANYmal C?

Christian Gehring: We tested the previous generation of ANYmal (B) in a broad range of environments over the last few years and gained a lot of insights. Based on our learnings, it became clear that we would have to re-design the robot to meet the requirements of industrial customers in terms of safety, quality, reliability, and lifetime. There were different prototype stages both for the new drives and for single robot assemblies. Apart from electrical tests, we thoroughly tested the thermal control and ingress protection of various subsystems like the depth cameras and actuators.

What can ANYmal C do that the previous version of ANYmal can’t?

ANYmal C was redesigned with a focus on performance increase regarding actuation (new drives), computational power (new hexacore Intel i7 PCs), locomotion and navigation skills, and autonomy (new depth cameras). The new robot additionally features a docking system for autonomous recharging and an inspection payload as an option. The design of ANYmal C is far more integrated than its predecessor, which increases both performance and reliability.

How much of ANYmal C’s development and design was driven by your experience with commercial or industry customers?

Tests (such as the offshore installation with TenneT) and discussions with industry customers were important to get the necessary design input in terms of performance, safety, quality, reliability, and lifetime. Most customers ask for very similar inspection tasks that can be performed with our standard inspection payload and the required software packages. Some are looking for a robot that can also solve some simple manipulation tasks like pushing a button. Overall, most use cases customers have in mind are realistic and achievable, but some are really tough for the robot, like climbing 50° stairs in hot environments of 50°C.

Can you describe how much autonomy you expect ANYmal C to have in industrial or commercial operations?

ANYmal C is primarily developed to perform autonomous routine inspections in industrial environments. This autonomy especially adds value for operations that are difficult to access, as human operation is extremely costly. The robot can naturally also be operated via a remote control and we are working on long-distance remote operation as well.

Do you expect that researchers will be interested in ANYmal C? What research applications could it be useful for?

ANYmal C has been designed to also address the needs of the research community. The robot comes with two powerful hexacore Intel i7 computers and can additionally be equipped with an NVIDIA Jetson Xavier graphics card for learning-based applications. Payload interfaces enable users to easily install and test new sensors. By joining our established ANYmal Research community, researchers get access to simulation tools and software APIs, which boosts their research in various areas like control, machine learning, and navigation.

[ ANYmal C ]
