
#439564 Video Friday: NASA Sending Robots to ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers.

It’s ICRA this week, but since the full proceedings are not yet available, we’re going to wait until we can access everything to cover the conference properly. Or, as properly as we can not being in Xi’an right now.

We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

RoboCup 2021 – June 22-28, 2021 – [Online Event]
RSS 2021 – July 12-16, 2021 – [Online Event]
Humanoids 2020 – July 19-21, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27-October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA
Let us know if you have suggestions for next week, and enjoy today's videos.

NASA has selected the DAVINCI+ (Deep Atmosphere Venus Investigation of Noble-gases, Chemistry and Imaging +) mission as part of its Discovery program, and it will be the first spacecraft to enter the Venus atmosphere since NASA’s Pioneer Venus in 1978 and USSR’s Vega in 1985.

The mission, Deep Atmosphere Venus Investigation of Noble gases, Chemistry, and Imaging Plus, will consist of a spacecraft and a probe. The spacecraft will track motions of the clouds and map surface composition by measuring heat emission from Venus’ surface that escapes to space through the massive atmosphere. The probe will descend through the atmosphere, sampling its chemistry as well as the temperature, pressure, and winds. The probe will also take the first high-resolution images of Alpha Regio, an ancient highland twice the size of Texas with rugged mountains, looking for evidence that past crustal water influenced surface materials.

Launch is targeted for FY2030.

[ NASA ]

Skydio has officially launched their 3D Scan software, turning our favorite fully autonomous drone into a reality capture system.

Skydio held a launch event at the U.S. Space & Rocket Center and the keynote is online; it's actually a fairly interesting 20 minutes with some cool rockets thrown in for good measure.

[ Skydio ]

Space robotics is a key technology for space exploration and an enabling factor for future missions, both scientific and commercial. Underwater tests are a valuable tool for validating robotic technologies for space. In DFKI’s test basin, even large robots can be tested in simulated micro-gravity with mostly unrestricted range of motion.

[ DFKI ]

The Harvard Microrobotics Lab has developed a soft robotic hand with dexterous soft fingers capable of some impressive in-hand manipulation, starting (obviously) with a head of broccoli.

Training soft robots in simulation has been a bit of a challenge, but the researchers developed their own simulation framework that matches the real world pretty closely:

The simulation framework is available to download and use, and you can do some nutty things with it, like simulating tentacle basketball:

I’d pay to watch that IRL.

[ Paper ] via [ Harvard ]

Using the navigation cameras on its mast, NASA’s Curiosity Mars rover captured this movie of clouds just after sunset on March 28, 2021, the 3,072nd sol, or Martian day, of the mission. These noctilucent, or twilight, clouds are made of water ice; the ice crystals reflect the setting sun, allowing the detail in each cloud to be seen more easily.

[ JPL ]

Genesis Robotics is working on something, and that's all we know.

[ Genesis Robotics ]

To further improve the autonomous capabilities of future space robots and to advance European efforts in this field, the European Union funded the ADE project, which was completed recently in Wulsbüttel near Bremen. There, the rover “SherpaTT” of the German Research Center for Artificial Intelligence (DFKI) managed to autonomously cover a distance of 500 meters in less than three hours thanks to the successful collaboration of 14 European partners.

[ DFKI ]

For $6.50, a NEXTAGE robot will make an optimized coffee for you. In Japan, of course.

[ Impress ]

Things I’m glad a robot is doing so that I don’t have to: dross skimming.

[ Fanuc ]

Today, anyone can hail a ride to experience the Waymo Driver with our fully autonomous ride-hailing service, Waymo One. Riders Ben and Ida share their experience on one of their recent multi-stop rides. Watch as they take us along for a ride.

[ Waymo ]

The IEEE Robotics and Automation Society Town Hall 2021 featured discussion around Diversity & Inclusion, RAS CARES committee & Code of Conduct, Gender Diversity, and the Developing Country Faculty Engagement Program.

[ IEEE RAS ]

Posted in Human Robots

#439537 Tencent’s New Wheeled Robot Flicks Its ...

Ollie (I think its name is Ollie) is a “novel wheel-legged robot” from Tencent Robotics. The word “novel” is used quite appropriately here, since Ollie sports some unusual planar parallel legs atop driven wheels. It’s also got a multifunctional actuated tail that not only enables some impressive acrobatics, but also allows the robot to transition from biped-ish to triped-ish to stand up extra tall and support a coffee-carrying manipulator.

It’s a little disappointing that the tail only appears to be engaged for specific motions—it doesn’t seem like it’s generally part of the robot’s balancing or motion planning, which feels like a missed opportunity. But this robot is relatively new, and its development is progressing rapidly, which we know because an earlier version of the hardware and software was presented at ICRA 2021 a couple weeks back. Although, to be honest with you, there isn’t a lot of info on the new one besides the above video, so we’ll be learning what we can from the ICRA paper.

The paper is mostly about developing a nonlinear balancing controller for the robot, and they’ve done a bang-up job with it, with the robot remaining steady even while executing sequences of dynamic motions. The jumping and one-legged motions are particularly cool to watch. And, well, that’s pretty much it for the ICRA paper, which (unfortunately) barely addresses the tail at all, except to say that currently the control system assumes that the tail is fixed. We’re guessing that this is just a symptom of the ICRA paper submission deadline being back in October, and that a lot of progress has been made since then.
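The paper's actual controller is nonlinear and well beyond a quick sketch, but the core balancing problem it solves can be illustrated with a toy wheeled-inverted-pendulum model. Everything below (the PD gains, the geometry, the dynamics simplifications) is invented for illustration and is not Tencent's design:

```python
import math

# Toy wheeled inverted pendulum stabilized by a PD law on body pitch:
# accelerate the wheels "under" the lean to cancel it. Gains and geometry
# are made-up illustration values, not the paper's.

G = 9.81      # gravity, m/s^2
L = 0.4       # hypothetical axle-to-center-of-mass distance, m
DT = 0.001    # integration step, s

def wheel_accel(theta, theta_dot, kp=60.0, kd=12.0):
    """PD law: wheel acceleration proportional to pitch error and pitch rate."""
    return kp * theta + kd * theta_dot

def simulate(theta0, steps=5000):
    """Integrate pitch dynamics: theta_ddot = (g*sin(theta) - a*cos(theta)) / L."""
    theta, theta_dot = theta0, 0.0
    for _ in range(steps):
        a = wheel_accel(theta, theta_dot)
        theta_ddot = (G * math.sin(theta) - a * math.cos(theta)) / L
        theta_dot += theta_ddot * DT
        theta += theta_dot * DT
    return theta

# Starting 0.2 rad off vertical, the pitch error decays toward zero
# because kp exceeds the destabilizing gravity term.
residual = abs(simulate(0.2))
```

The linearized closed loop here is stable whenever kp > g and kd > 0; the paper's contribution is handling the full nonlinear dynamics, including the dynamic motions a sketch like this would fall over on.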

Seeing the arm and sensor package at the end of the video is a nod to some sort of practical application, and I suppose that the robot’s ability to stand up to reach over that counter is some justification for using it for a delivery task. But it seems like it’s got so much more to offer, you know? Many far more boring robot platforms could be delivering coffee, so let’s find something for this robot to do that involves more backflips.

Balance Control of a Novel Wheel-legged Robot: Design and Experiments, by Shuai Wang, Leilei Cui, Jingfan Zhang, Jie Lai, Dongsheng Zhang, Ke Chen, Yu Zheng, Zhengyou Zhang, and Zhong-Ping Jiang from Tencent Robotics X, was presented at ICRA 2021.


#439527 It’s (Still) Really Hard for Robots to ...

Every time we think that we’re getting a little bit closer to a household robot, new research comes out showing just how far we have to go. Certainly, we’ve seen lots of progress in specific areas like grasping and semantic understanding and whatnot, but putting it all together into a hardware platform that can actually get stuff done autonomously still seems quite a way off.

In a paper presented at ICRA 2021 this month, researchers from the University of Bremen conducted a “Robot Household Marathon Experiment,” where a PR2 robot was tasked with first setting a table for a simple breakfast and then cleaning up afterwards in order to “investigate and evaluate the scalability and the robustness aspects of mobile manipulation.” While this sort of thing kinda seems like something robots should have figured out, it may not surprise you to learn that it’s actually still a significant challenge.

PR2’s job here is to prepare breakfast by bringing a bowl, a spoon, a cup, a milk box, and a box of cereal to a dining table. After breakfast, the PR2 then has to place washable objects into the dishwasher, put the cereal box back into its storage location, and toss the milk box into the trash. The objects vary in shape and appearance, and the robot is only given symbolic descriptions of object locations (in the fridge, on the counter). It’s a very realistic but also very challenging scenario, which probably explains why it takes the poor PR2 90 minutes to complete it.

First off, kudos to that PR2 for still doing solid robotics research, right? And this research is definitely solid—the fact that all of this stuff (perception, motion planning, grasping, high-level strategizing) works as well as it does is incredibly impressive. Remember, this is 90 minutes of full autonomy doing tasks that are relatively complex in an environment that’s only semi-structured and somewhat, but not overly, robot-optimized. In fact, over five trials, the robot succeeded in the table setting task five times. It wasn’t flawless, and the PR2 did have particular trouble with grasping tricky objects like the spoon, but the framework that the researchers developed was able to successfully recover from every single failure by tweaking parameters and retrying the failed action. Arguably, failing a lot but also being able to recover a lot is even more useful than not failing at all, if you think long term.
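That tweak-parameters-and-retry pattern can be sketched in a few lines. To be clear, the grasp model, the parameter, and the search schedule below are all invented for illustration; the paper's actual recovery strategies are richer than this:

```python
# Hypothetical recover-and-retry sketch: when an action fails, nudge its
# parameters and try again instead of giving up.

def attempt_grasp(width):
    """Stand-in for a real grasp action: only a narrow gripper width works."""
    return 0.045 <= width <= 0.055

def grasp_with_recovery(initial_width, step=0.01, max_retries=10):
    """Retry a failed grasp, searching outward from the first guess.

    Tries initial, initial+step, initial-step, initial+2*step, ... and
    reports how many attempts were needed.
    """
    for attempt in range(max_retries):
        k = (attempt + 1) // 2                 # offset multiple: 0, 1, 1, 2, 2, ...
        sign = 1 if attempt % 2 == 1 else -1   # alternate search direction
        width = initial_width + sign * k * step
        if attempt_grasp(width):
            return attempt + 1, width
    raise RuntimeError("unrecoverable failure: retries exhausted")

# A bad first guess (7 cm) is recovered on the fifth attempt (5 cm).
attempts, final_width = grasp_with_recovery(0.07)
```

The interesting design question is exactly the one the trials surface: a recovery loop like this turns many would-be failures into successes, but it can't help once the milk box is already on the floor.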

The clean up task was more difficult for the PR2, and it suffered unrecoverable failures during two of the five trials. The paper describes what happened:

Cleaning the table was more challenging than table setting, due to the use of the dishwasher and the difficulty of sideways grasping objects located far away from the edge of the table. In two out of the five runs we encountered an unrecoverable failure. In one of the runs, due to the instability of the grasping trajectory and the robot not tracking it perfectly, the fingers of the robot ended up pushing the milk away during grasping, which resulted in a very unstable grasp. As a result, the box fell to the ground in the carrying phase. Although during the table setting the robot was able to pick up a toppled over cup and successfully bring it to the table, picking up the milk box from the ground was impossible for the PR2. The other unrecoverable failure was the dishwasher grid getting stuck in PR2’s finger. Another major failure happened when placing the cereal box into its vertical drawer, which was difficult because the robot had to reach very high and approach its joint limits. When the gripper opened, the box fell on a side in the shelf, which resulted in it being crushed when the drawer was closed.

Failure cases including unstably grasping the milk, getting stuck in the dishwasher, and crushing the cereal.
Photos: EASE

While we’re focusing a little bit on the failures here, that’s really just to illustrate the exceptionally challenging edge cases that the robot encountered. Again, I want to emphasize that while the PR2 was not successful all the time, its performance over 90 minutes of fully autonomous operation is still very impressive. And I really appreciate that the researchers committed to an experiment like this, putting their robot into a practical(ish) environment doing practical(ish) tasks under full autonomy over a long(ish) period of time. We often see lots of incremental research headed in this general direction, but it’ll take a lot more work like we’re seeing here for robots to get real-world useful enough to reliably handle those critical breakfast tasks.

The Robot Household Marathon Experiment, by Gayane Kazhoyan, Simon Stelter, Franklin Kenghagho Kenfack, Sebastian Koralewski and Michael Beetz from the CRC EASE at the Institute for Artificial Intelligence in Germany, was presented at ICRA 2021.


#439499 Why Robots Can’t Be Counted On to Find ...

On Thursday, a portion of the 12-story Champlain Towers South condominium building in Surfside, Florida (just outside of Miami) suffered a catastrophic partial collapse. As of Saturday morning, according to the Miami Herald, 159 people are still missing, and rescuers are removing debris with careful urgency while using dogs and microphones to search for survivors still trapped within a massive pile of tangled rubble.

It seems like robots should be ready to help with something like this. But they aren’t.

A Miami-Dade Fire Rescue official and a K-9 continue the search and rescue operations in the partially collapsed 12-story Champlain Towers South condo building on June 24, 2021 in Surfside, Florida.
JOE RAEDLE/GETTY IMAGES

The picture above shows what the site of the collapse in Florida looks like. It’s highly unstructured, and would pose a challenge for most legged robots to traverse, although you could see a tracked robot being able to manage it. But there are already humans and dogs working there, and as long as the environment is safe to move over, it’s not necessary or practical to duplicate that functionality with a robot, especially when time is critical.

What is desperately needed right now is a way of not just locating people underneath all of that rubble, but also getting an understanding of the structure of the rubble around a person, and what exactly is between that person and the surface. For that, we don’t need robots that can get over rubble: we need robots that can get into rubble. And we don’t have them.

To understand why, we talked with Robin Murphy at Texas A&M, who directs the Humanitarian Robotics and AI Laboratory, formerly the Center for Robot-Assisted Search and Rescue (CRASAR), which is now a non-profit. Murphy has been involved in applying robotic technology to disasters worldwide, including 9/11, Fukushima, and Hurricane Harvey. The work she’s doing isn’t abstract research—CRASAR deploys teams of trained professionals with proven robotic technology to assist (when asked) with disasters around the world, and then uses those experiences as the foundation of a data-driven approach to improve disaster robotics technology and training.

According to Murphy, using robots to explore rubble of collapsed buildings is, for the moment, not possible in any kind of way that could be realistically used on a disaster site. Rubble, generally, is a wildly unstructured and unpredictable environment. Most robots are simply too big to fit through rubble, and the environment isn’t friendly to very small robots either, since there’s frequently water from ruptured plumbing making everything muddy and slippery, among many other physical hazards. Wireless communication or localization is often impossible, so tethers are required, which solves the comms and power problems but can easily get caught or tangled on obstacles.

Even if you can build a robot small enough and durable enough to be able to physically fit through the kinds of voids that you’d find in the rubble of a collapsed building (like these snake robots were able to do in Mexico in 2017), useful mobility is about more than just following existing passages. Many disaster scenarios in robotics research assume that objectives are accessible if you just follow the right path, but real disasters aren’t like that, and large voids may require some amount of forced entry, if entry is even possible at all. An ability to forcefully burrow, which doesn’t really exist yet in this context but is an active topic of research, is critical for a robot to be able to move around in rubble where there may not be any tunnels or voids leading it where it wants to go.

And even if you can build a robot that can successfully burrow its way through rubble, there’s the question of what value it’s able to provide once it gets where it needs to be. Robotic sensing systems are in general not designed for extreme close quarters, and visual sensors like cameras can rapidly get damaged or get so much dirt on them that they become useless. Murphy explains that ideally, a rubble-exploring robot would be able to do more than just locate victims, but would also be able to use its sensors to assist in their rescue. “Trained rescuers need to see the internal structure of the rubble, not just the state of the victim. Imagine a surgeon who needs to find a bullet in a shooting victim, but does not have any idea of the layout of the victim’s organs; if the surgeon just cuts straight down, they may make matters worse. Same thing with collapses, it’s like the game of pick-up sticks. But if a structural specialist can see inside the pile of pick-up sticks, they can extract the victim faster and safer with less risk of a secondary collapse.”

Besides these technical challenges, the other huge part to all of this is that any system that you’d hope to use in the context of rescuing people must be fully mature. It’s obviously unethical to take a research-grade robot into a situation like the Florida building collapse and spend time and resources trying to prove that it works. “Robots that get used for disasters are typically used every day for similar tasks,” explains Murphy. For example, it wouldn’t be surprising to see drones being used to survey the parts of the building in Florida that are still standing to make sure that it’s safe for people to work nearby, because drones are a mature and widely adopted technology that has already proven itself. Until a disaster robot has achieved a similar level of maturity, we’re not likely to see it take part in an active rescue.

Keeping in mind that there are no existing robots that fulfill all of the above criteria for actual use, we asked Murphy to describe her ideal disaster robot for us. “It would look like a very long, miniature ferret,” she says. “A long, flexible, snake-like body, with small legs and paws that can grab and push and shove.” The robo-ferret would be able to burrow, to wiggle and squish and squeeze its way through tight twists and turns, and would be equipped with functional eyelids to protect and clean its sensors. But since there are no robo-ferrets, what existing robot would Murphy like to see in Florida right now? “I’m not there in Miami,” Murphy tells us, “but my first thought when I saw this was I really hope that one day we’re able to commercialize Japan’s Active Scope Camera.”

The Active Scope Camera was developed at Tohoku University by Satoshi Tadokoro about 15 years ago. It operates kind of like a long, skinny, radially symmetrical bristlebot with the ability to push itself forward:

The hose is covered by inclined cilia. Motors with eccentric mass are installed in the cable and excite vibration and cause an up-and-down motion of the cable. The tips of the cilia stick on the floor when the cable moves down and propel the body. Meanwhile, the tips slip against the floor, and the body does not move back when it moves up. A repetition of this process showed that the cable can slowly move in a narrow space of rubble piles.
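The quoted mechanism is essentially stick-slip locomotion with asymmetric friction, and a back-of-the-envelope model captures why it produces net forward motion. All the numbers below (cilia angle, stroke, grip/slip fractions) are made up for illustration and are not Tohoku's published parameters:

```python
import math

# Toy stick-slip model of the Active Scope Camera's cilia propulsion.
CILIA_ANGLE_DEG = 20.0   # hypothetical inclination of the cilia from the surface
STROKE = 0.0005          # hypothetical vertical vibration amplitude, m

def advance_per_cycle(grip=0.6, slip=0.4):
    """Net forward travel in one vibration cycle.

    The inclined cilia turn the vertical stroke into a horizontal component.
    On the down-stroke the sticking tips convert a fraction `grip` of it into
    forward motion; on the up-stroke a fraction `slip` is lost sliding back.
    """
    horizontal = STROKE / math.tan(math.radians(CILIA_ANGLE_DEG))
    return horizontal * (grip - slip)

def speed(frequency_hz=30.0):
    """Average crawl speed when the vibration motors run at `frequency_hz`."""
    return advance_per_cycle() * frequency_hz

# The cable only advances because grip > slip: with symmetric friction
# (grip == slip) the net displacement per cycle is exactly zero.
```

Millimeters per cycle, tens of cycles per second: slow, but slow is fine when the alternative is not getting a camera into the void at all.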

“It's quirky, but the idea of being able to get into those small spaces and go about 30 feet in and look around is a big deal,” Murphy says. But the last publication we can find about this system is nearly a decade old—if it works so well, we asked Murphy, why isn’t it more widely available to be used after a building collapses? “When a disaster happens, there’s a little bit of interest, and some funding. But then that funding goes away until the next disaster. And after a certain point, there’s just no financial incentive to create an actual product that’s reliable in hardware and software and sensors, because fortunately events like this building collapse are rare.”

Dr. Satoshi Tadokoro inserting the Active Scope Camera robot at the 2007 Berkman Plaza II (Jacksonville, FL) parking garage collapse.
Photo: Center for Robot-Assisted Search and Rescue

The fortunate rarity of disasters like these complicates the development cycle of disaster robots as well, says Murphy. That’s part of the reason why CRASAR exists in the first place—it’s a way for robotics researchers to understand what first responders need from robots, and to test those robots in realistic disaster scenarios to determine best practices. “I think this is a case where policy and government can actually help,” Murphy tells us. “They can help by saying, we do actually need this, and we’re going to support the development of useful disaster robots.”

Robots should be able to help out in the situation happening right now in Florida, and we should be spending more time and effort on research in that direction that could potentially be saving lives. We’re close, but as with so many aspects of practical robotics, it feels like we’ve been close for years. There are systems out there with a lot of potential; they just need the help necessary to cross the gap from research project to a practical, useful system that can be deployed when needed.


#439483 Zebra Technologies To Acquire Fetch ...

A company called Zebra Technologies announced this morning that it intends to acquire Fetch Robotics for an impressive $305 million.

Fetch is best known for its autonomous mobile robots (AMRs) for warehouses and boasts “the largest portfolio of AMRs in the industry,” and we’re particular fans of its one-armed mobile manipulator for research. Zebra, meanwhile, does stuff with barcodes (get it?), and has been actively investing in robotics companies with a goal of increasing its footprint in the intelligent industrial automation space.

According to the press release, the acquisition “will provide an innovative offering that drives greater efficiencies and higher ROI through better orchestration of technology and people.” We have no idea what that means, but fortunately, we’ve been able to speak with both Fetch and Zebra for details about the deal.

Fetch Robotics’ $305 million purchase price includes $290 million in cash to snap up the 95% of Fetch that Zebra doesn’t already own—Zebra had already invested in Fetch through Zebra Ventures, which also has Locus Robotics and Plus One Robotics in its portfolio. There are still some “customary closing conditions” and regulatory approvals that need to happen, so everything isn’t expected to get wrapped up for another month or so. And when it does, it will in some ways mark the end of a robotics story that we’ve been following for the better part of a decade.

Fetch Robotics was founded in early 2015 by the same team of robot experts who had founded Unbounded Robotics just two years before. The founders all worked at Willow Garage, and Unbounded was a mobile manipulation-focused spin-off of Willow that didn’t pan out for reasons that are still not super clear. But in any case, Fetch was a fresh start that allowed them to fully develop their concept for an intelligent, robust, and efficient autonomous mobile robotic system.

Most of what Fetch Robotics does is warehouse logistics—moving stuff from one place to another so that humans don’t have to. Their autonomous mobile robots work outside of warehouses as well, most recently by providing disinfection services for places like airports. There are plenty of other companies in the larger AMR space, but from what we understand, what Fetch has been doing for the last five years has been consistently state of the art.

This is why Fetch makes sense as an acquisition target, I think: they’ve got exceptional technology in an area (fulfillment, mostly) that has been undergoing a huge amount of growth and where robotics has an enormous opportunity. But what about Zebra Technologies? As far as I can make out, Zebra is one of those companies that you’ve probably never heard of but is actually enormous and everywhere. According to Fortune, as of 2020 they were the 581st biggest company in the world (just behind Levi Strauss) with a market value of $25 billion. While Zebra was founded in 1969, the Zebra-ness didn’t come into play until the early 1980s when they started making barcode printers and scanners. They got into RFID in the early 2000s, and then acquired Motorola’s enterprise unit in 2014, giving Zebra a huge mobile technology portfolio.

To find out where robots fit into all of this, and to learn more about what this means for Fetch, we spoke with Melonee Wise, CEO of Fetch, and Jim Lawton, Vice President and General Manager of Robotics Automation at Zebra.

IEEE Spectrum: Can you tell us about Zebra’s background and interest in robotics?

Jim Lawton: Zebra is a combination of companies that have come together over time. Historically, we were a printing company that made barcode labels, and then we acquired a mobile computing business from Motorola, and today we have a variety of devices that do sensing, analyzing, and acting—we’ve been getting increasingly involved in automation in general.

A lot of our major customers are retailers, warehousing, transportation and logistics, or healthcare, and what we’ve heard a lot lately is that there is an increased pressure towards trying to figure out how to run a supply chain efficiently. Workflows have gotten much more complicated and many of our customers don't feel like they're particularly well equipped to sort through those challenges. They understand that there's an opportunity to do something significant with robots, but what does that look like? What are the right strategies? And they're asking us for help.

There are lots of AMR companies out there doing things that superficially seem similar, but what do you feel is special about Fetch?

Jim Lawton: I was at Universal Robots for a while, and at Rethink Robotics for a number of years, and designing and building robots and bringing them to market is really, really hard. The only way to pull it off is with an amazing team, and Melonee has done an extraordinarily outstanding job, pulling together a world class robotics team.

We had invested in Fetch Robotics a couple of years ago, so we've been working pretty closely together already. We invest in companies in part so that we can educate ourselves, but it's also an opportunity to see whether we’re a good fit with each other. Zebra is a technology and engineering oriented company, and Fetch is as well. With the best team, and the best robots, we just think there’s an outstanding opportunity that we haven’t necessarily found with other AMR companies.

What about for Fetch? Why is Zebra a good fit?

Melonee Wise: Over the last couple of years we have been slowly expanding the devices that we want to connect to, and the software ecosystems that we want to connect to, and Zebra has provided a lot of that synergy. We're constantly asked, can we get a robot to do something if we scan a barcode, or can we press a button on a tablet, and have a robot appear, things like that. Being able to deliver these kinds of end to end, fully encapsulated solutions that go beyond the robots and really solve the problems that customers are looking to solve—Zebra helps us do that.

And there's also an opportunity for us as a robotics startup to partner with a larger company to help us scale much more rapidly. That’s the other thing that’s really exciting for us—Zebra has a very strong business in warehousing and logistics. They’re an industry leader, and I think they can really help us get to the next level as a company.

Does that represent a transition for AMRs from just moving things from one place to another to integrating with all kinds of other warehouse systems?

Melonee Wise: For a decade or more, people have been talking about Industry 4.0 and how it's going to change the world and revolutionize manufacturing, but as a community we’ve struggled to execute on that goal for lots of reasons. We've had what people might call islands of automation: siloed pieces of automation that are doing their thing by themselves. But if they have to talk to each other, that's a bridge too far.

But in many ways automation technology is now getting mature enough through the things that we’ve seen in software for a long time, like APIs, interconnected services, and cloud platforms. Zebra has been working on that independently for a long time as part of their business, and so bringing our two businesses together to build these bridges between islands of automation is why it made sense for us to come together at this point in time.

If you go back far enough, Fetch has its origins in Willow Garage and ROS, and I know that Fetch still makes substantial software contributions back to the ROS community. Is that something you’ll be able to continue?

Melonee Wise: Our participation in the open source community is still very important, and I think it’s going to continue to be important. A lot of robotics is really about getting great talent, and open source is one way that we connect to that talent and participate in the larger ecosystem and draw value from it. There are also lots of great tools out there in the open source community that Fetch uses and contributes to. And I think those types of projects that are not core to our IP but give us value will definitely be things that we continue to participate in.

What will happen to the Fetch mobile manipulator that I know a lot of labs are currently using for research?

Melonee Wise: We're committed to continuing to support our existing customers and I think that there’s still a place for the research product going forward.

What do you think are the biggest challenges for AMRs right now?

Melonee Wise: One thing that I think is happening in the industry is that the safety standards are now coming into play. In December of last year the first official autonomous mobile robot safety standards were released, and not everyone was ready for that, but Fetch has been at the front of this for a long time. It took about four years to develop the AMR safety standard and to reach an understanding of what safe actually means and how you implement those safety measures. It’s common for safety standards to lag behind technology, but customers have been asking more and more, “well how do I know that your robots are safe?” And so I think what we’re going to see is that these safety standards are going to have differing effects on different companies, based on how thoughtful they’ve been about safety through the design and implementation of their technology.

What have you learned, or what has surprised you about your industry now that we’re a year and a half into the pandemic?

Melonee Wise: One of the more interesting things to me was that it was amazing how quickly the resistance to the cloud went away when you have to deploy things remotely during a pandemic. Originally customers weren’t that excited about the cloud and wanted to do everything on site, but once the pandemic hit they switched their point of view on the technology pretty quickly, which was nice to see.

Jim Lawton: The amount of interest that we've seen in robots and automation in general has skyrocketed over the last year. In particular we’re hearing from companies that are not well equipped to deal with their automation needs, and the pandemic has just made it so much more clear to them that they have to do something. I think we're going to see a renaissance within some of these spaces because of their investment in robotic technologies.
