#439509 What’s Going on With Amazon’s ...
Amazon’s innovation blog recently published a post entitled “New technologies to improve Amazon employee safety,” which highlighted four different robotic systems that Amazon’s Robotics and Advanced Technology teams have been working on. Three of these robotic systems are mobile robots, which have been making huge contributions to the warehouse space over the past decade. Amazon in particular was one of the first (if not the first) e-commerce companies to really understand the fundamental power of robots in warehouses, with their $775 million acquisition of Kiva Systems’ pod-transporting robots back in 2012.
Since then, a bunch of other robotics companies have started commercially deploying robots in warehouses, and over the past five years or so, we’ve seen some of those robots develop enough autonomy and intelligence to be able to operate outside of restricted, highly structured environments and work directly with humans. Autonomous mobile robots for warehouses now make up a highly competitive sector, with companies like Fetch Robotics, Locus Robotics, and OTTO Motors all offering systems that can zip payloads around busy warehouse floors safely and efficiently.
But if we’re to take the capabilities of the robots that Amazon showcased over the weekend at face value, the company appears to be substantially behind the curve on warehouse robots.
Let’s take a look at the three mobile robots that Amazon describes in their blog post:
“Bert” is one of Amazon’s first Autonomous Mobile Robots, or AMRs. Historically, it’s been difficult to incorporate robotics into areas of our facilities where people and robots are working in the same physical space. AMRs like Bert, which is being tested to autonomously navigate through our facilities with Amazon-developed advanced safety, perception, and navigation technology, could change that. With Bert, robots no longer need to be confined to restricted areas. This means that in the future, an employee could summon Bert to carry items across a facility. In addition, Bert might at some point be able to move larger, heavier items or carts that are used to transport multiple packages through our facilities. By taking those movements on, Bert could help lessen strain on employees.
This all sounds fairly impressive, but only if you’ve been checked out of the AMR space for the last few years. Amazon is presenting Bert as part of the “new technologies” they’re developing, and while that may be the case, as far as we can make out, these technologies are new mostly to Amazon, not to anyone else. Any number of other companies are selling mobile robot tech that looks to be significantly beyond what we’re seeing here—tech that (unless we’re missing something) has already largely solved many of the same technical problems that Amazon is working on.
We spoke with mobile robot experts from three different robotics companies, none of whom were comfortable going on record (for obvious reasons), but they all agreed that what Amazon is demonstrating in these videos appears to be 2+ years behind the state of the art in commercial mobile robots.
We’re obviously seeing a work in progress with Bert, but I’d be less confused if we were looking at a deployed system, because at least then you could make the argument that Amazon has managed to get something operational at (some) scale, which is much more difficult than a demo or pilot project. But the slow speed, the careful turns, the human chaperones—other AMR companies are way past this stage.
Kermit is an AGC (Autonomously Guided Cart) that is focused on moving empty totes from one location to another within our facilities so we can get empty totes back to the starting line. Kermit follows strategically placed magnetic tape to guide its navigation and uses tags placed along the way to determine if it should speed up, slow down, or modify its course in some way. Kermit is further along in development, currently being tested in several sites across the U.S., and will be introduced in at least a dozen more sites across North America this year.
Most folks in the mobile robots industry would hesitate to call Kermit an autonomous robot at all, which is likely why Amazon doesn’t refer to it as such, instead calling it a “guided cart.” As far as I know, pretty much every other mobile robotics company has done away with stuff like magnetic tape in favor of map-based natural-feature localization (a technology that has been commercially available for years), because then your robots can go anywhere in a mapped warehouse, not just on predefined paths. Even if you have a space and workflow that never ever changes, busy warehouses have paths that get blocked for one reason or another all the time, and modern AMRs are flexible enough to plan around those blockages to complete their tasks. A guided cart locked to its tape, by contrast, can’t even shift over a couple of feet to get around an obstacle.
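To make the difference concrete, here’s a toy sketch (not any vendor’s actual stack—the grid, the start and goal, and the blockage are all invented for illustration) of why a map-based planner shrugs off a blocked aisle while a tape-follower stalls: the planner just searches the map for another route.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over a 4-connected grid; '#' cells are blocked."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != '#' and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route exists at all

# A tape-following cart is stuck the moment its fixed aisle is blocked;
# a map-based AMR just replans. Here row 1 is the "taped" aisle, with
# a pallet ('#') dropped in the middle of it.
warehouse = [
    "....",
    ".#..",   # the blocked aisle cell
    "....",
]
route = shortest_path(warehouse, (1, 0), (1, 3))
print(route)  # detours through row 0 or row 2 around the pallet
```

Real AMR planners use costmaps and continuous localization rather than a four-cell grid, but the core advantage is the same: the path is computed from the map, not painted on the floor.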
I have no idea why this monstrous system called Scooter is the best solution for moving carts around a warehouse. It just seems needlessly huge and complicated, especially since we know Amazon already understands that a great way of moving carts around is by using much smaller robots that can zip underneath a cart, lift it up, and carry it around with them. Obviously, the Kiva drive units only operate in highly structured environments, but other AMR companies are making this concept work on the warehouse floor just fine.
Why is Amazon at “possibilities” when other companies are at commercial deployments?
I honestly just don’t understand what’s happening here. Amazon has (I assume) a huge R&D budget at its disposal. It was investing in robotic technology for e-commerce warehouses super early, and at an unmatched scale. Even beyond Kiva, Amazon obviously understood the importance of AMRs several years ago, with its $100+ million acquisition of Canvas Technology in 2019. But looking back at Canvas’ old videos, it seems like Canvas was doing in 2017 more or less what we’re seeing Amazon’s Bert robot doing now, nearly half a decade later.
We reached out to Amazon Robotics for comment and sent them a series of questions about the robots in these videos. They sent us this response:
The health and safety of our employees is our number one priority—and has been since day one. We’re excited about the possibilities robotics and other technology can play in helping to improve employee safety.
Hmm.
I mean, sure, I’m excited about the same thing, but I’m still stuck on why Amazon is at possibilities, while other companies are at commercial deployments. It’s certainly possible that the sheer Amazon-ness of Amazon is a significant factor here, in the sense that a commercial deployment for Amazon is orders of magnitude larger and more complex than any of the AMR companies that we’re comparing them to are dealing with. And if Amazon can figure out how to make (say) an AMR without using lidar, it would make a much more significant difference for an in-house large-scale deployment relative to companies offering AMRs as a service.
For another take on what might be going on with this announcement from Amazon, we spoke with Matt Beane, who got his PhD at MIT and studies robotics at UCSB’s Technology Management Program. At the ACM/IEEE International Conference on Human-Robot Interaction (HRI) last year, Beane published a paper on the value of robots as social signals—that is, organizations get valuable outcomes from just announcing they have robots, because this encourages key audiences to see the organization in favorable ways. “My research strongly suggests that Amazon is reaping signaling value from this announcement,” Beane told us. There’s nothing inherently wrong with signaling, because robots can create instrumental value, and that value needs to be communicated to the people who will, ideally, benefit from it. But you have to be careful: “My paper also suggests this can be a risky move,” explains Beane. “Blowback can be pretty nasty if the systems aren’t in full-tilt, high-value use. In other words, it works only if the signal pretty closely matches the internal reality.”
There’s no way for us to know what the internal reality at Amazon is. All we have to go on is this blog post, which isn’t much, and we should reiterate that there may be a significant gap between what the post is showing us about Amazon’s mobile robots and what’s actually going on at Amazon Robotics. My hope is what we’re seeing here is primarily a sign that Amazon Robotics is starting to scale things up, and that we’re about to see them get a lot more serious about developing robots that will help make their warehouses less tedious, safer, and more productive.
#439505 Video Friday: Household Skills
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
RoboCup 2021 – June 22-28, 2021 – [Online Event]
RSS 2021 – July 12-16, 2021 – [Online Event]
Humanoids 2020 – July 19-21, 2021 – [Online Event]
RO-MAN 2021 – August 8-12, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27 – October 1, 2021 – [Online Event]
Let us know if you have suggestions for next week, and enjoy today's videos.
Toyota Research Institute (TRI) unveiled new robotics capabilities aimed at solving complex tasks in home environments. Specifically, TRI roboticists were able to train robots to understand and operate in complicated situations that confuse most other robots, including recognizing and responding to transparent and reflective surfaces in a variety of circumstances.
[ TRI ]
The FAA now requires all recreational drone pilots to complete an online test, and this video from Pilot Institute explains what the deal is.
Pilot Institute also offers the official test on their website at the link below.
[ Pilot Institute ]
Thanks, Greg!
Hyundai's acquisition of Boston Dynamics is now complete, so they put out this weird video to celebrate.
I am mildly concerned that some of the robots in this video are CGI. It always bugs me when CGI robots are shown doing what the actual robot can do, because why would you do that?
[ Hyundai ]
Making a gripper that can pick flat things up off of a flat surface is tricky, but here's an innovative design that makes it work.
[ Paper ] via [ HMI Lab ]
Thanks, Fan!
Well, this is one of the most ambitious concepts I've seen in a while: Using massive drones to help launch rockets.
Rammaxx’s RAD concept is a powerful octocopter designed for vertical flight via a streamlined hull and guidance fins. It is projected to be able to accelerate with a rocket to around 300 mph (500 km/h), up to an altitude of roughly 15,000 ft (5,000 m). We envision one RAD carrying one or two small rockets for small payloads, e.g. micro satellites, and a swarm of RADs working together to carry a rocket designed for larger payloads.
[ Rammaxx ] via [ PetaPixel ]
Deep Robotics’ Jueying quadruped has your coffee, conveniently waiting for you on the ground.
[ Deep Robotics ]
Chao Cao, from CMU's SubT team, talks about autonomous exploration in complex, three-dimensional (3D) environments. A paper on this will be presented at RSS 2021 next month.
[ Paper ] via [ CMU ]
Thanks, Fan!
3D printing in carbon steel with a robot arm.
[ USC Viterbi ]
The VoloDrone is here to change the way we move things. The heavy-lift drone is equipped to carry a payload of up to 200 kilograms, and with its 40 km range, it can fly within a large radius from the take-off point.
[ Volocopter ]
A video on decentralized trajectory planning for multicopter swarms with some lovely visualizations.
[ Paper ] via [ FAST Lab ]
Thanks, Fan!
It's all coming together (Cozmo 2.0, that is)! Share in our excitement when you watch one of our technicians show off how easy it is to reassemble Cozmo 2.0 with its new battery compartment.
[ DDL ]
We introduce a multi-functional robotic gripper equipped with a set of actions required for disassembly of electromechanical devices. The system enables manipulation in 7 degrees of freedom (DoF) and offers the ability to reposition objects in hand and to perform tasks that usually require bimanual systems.
[ Paper ]
Automated test procedure for carrying out a stress test of an airplane seat folding table performed with a KUKA IIWA robot. The test was performed for 50,000 cycles and contributed to the improvement of the original design in several aspects.
[ PRISMA Lab ]
This introduces Bruce, the CSIRO Dynamic Hexapod Robot capable of autonomous, dynamic locomotion over difficult terrain. This robot is built around Apptronik linear series elastic actuators, and went from design to deployment in under a year by using approximately 80 percent 3D-printed structural (joint and link) parts. The robot is designed to move at up to 1.0 m/s on flat ground with appropriate control, and was deployed to the DARPA SubT Challenge Tunnel circuit event in August 2019.
[ Paper ] via [ CSIRO Data61 ]
In this paper, we present a method for grasp planning and object manipulation that enables the world’s first autonomous assembly of a large-scale stone wall with an unmanned hydraulic excavator system.
[ Paper ] via [ RSL ]
Discover MACBA, the museum of contemporary and modern art of Barcelona with a kind help from Pepper!
[ SoftBank ]
On April 19, 2021, NASA made history with the deployment on Mars of Ingenuity, the first powered aircraft conceived by humans to fly on another planet. With four flights to date—from its initial brief foray at three meters elevation to its longer subsequent flights covering up to a football field’s distance at velocities of about two meters per second—Ingenuity has opened a new world to planetary flight and discovery. In this colloquium, Teddy Tzanetos, JPL’s assembly, test, and operations lead and ground support designer, will present the project’s inception, its operational goals and capabilities, and what its success may mean for space exploration.
[ IFRR ]
Advances in robotics and automation offer new solutions to humanity’s oldest problems of clean water, food and shelter. The 2021 ICRA Industrial Forum focused on the challenges in today’s construction industry, with potential new solutions coming out of research labs around the world.
[ RAS ]
#439499 Why Robots Can’t Be Counted On to Find ...
On Thursday, a portion of the 12-story Champlain Towers South condominium building in Surfside, Florida (just outside of Miami) suffered a catastrophic partial collapse. As of Saturday morning, according to the Miami Herald, 159 people are still missing, and rescuers are removing debris with careful urgency while using dogs and microphones to search for survivors still trapped within a massive pile of tangled rubble.
It seems like robots should be ready to help with something like this. But they aren’t.
A Miami-Dade Fire Rescue official and a K-9 continue the search and rescue operations in the partially collapsed 12-story Champlain Towers South condo building on June 24, 2021 in Surfside, Florida.
JOE RAEDLE/GETTY IMAGES
The picture above shows what the site of the collapse in Florida looks like. It’s highly unstructured, and would pose a challenge for most legged robots to traverse, although you could see a tracked robot being able to manage it. But there are already humans and dogs working there, and as long as the environment is safe to move over, it’s not necessary or practical to duplicate that functionality with a robot, especially when time is critical.
What is desperately needed right now is a way of not just locating people underneath all of that rubble, but also getting an understanding of the structure of the rubble around a person, and what exactly is between that person and the surface. For that, we don’t need robots that can get over rubble: we need robots that can get into rubble. And we don’t have them.
To understand why, we talked with Robin Murphy at Texas A&M, who directs the Humanitarian Robotics and AI Laboratory, formerly the Center for Robot-Assisted Search and Rescue (CRASAR), which is now a non-profit. Murphy has been involved in applying robotic technology to disasters worldwide, including 9/11, Fukushima, and Hurricane Harvey. The work she’s doing isn’t abstract research—CRASAR deploys teams of trained professionals with proven robotic technology to assist (when asked) with disasters around the world, and then uses those experiences as the foundation of a data-driven approach to improve disaster robotics technology and training.
According to Murphy, using robots to explore rubble of collapsed buildings is, for the moment, not possible in any kind of way that could be realistically used on a disaster site. Rubble, generally, is a wildly unstructured and unpredictable environment. Most robots are simply too big to fit through rubble, and the environment isn’t friendly to very small robots either, since there’s frequently water from ruptured plumbing making everything muddy and slippery, among many other physical hazards. Wireless communication or localization is often impossible, so tethers are required, which solves the comms and power problems but can easily get caught or tangled on obstacles.
Even if you can build a robot small enough and durable enough to be able to physically fit through the kinds of voids that you’d find in the rubble of a collapsed building (like these snake robots were able to do in Mexico in 2017), useful mobility is about more than just following existing passages. Many disaster scenarios in robotics research assume that objectives are accessible if you just follow the right path, but real disasters aren’t like that, and large voids may require some amount of forced entry, if entry is even possible at all. An ability to forcefully burrow, which doesn’t really exist yet in this context but is an active topic of research, is critical for a robot to be able to move around in rubble where there may not be any tunnels or voids leading it where it wants to go.
And even if you can build a robot that can successfully burrow its way through rubble, there’s the question of what value it’s able to provide once it gets where it needs to be. Robotic sensing systems are in general not designed for extreme close quarters, and visual sensors like cameras can rapidly get damaged or get so much dirt on them that they become useless. Murphy explains that ideally, a rubble-exploring robot would be able to do more than just locate victims, but would also be able to use its sensors to assist in their rescue. “Trained rescuers need to see the internal structure of the rubble, not just the state of the victim. Imagine a surgeon who needs to find a bullet in a shooting victim, but does not have any idea of the layout of the victim’s organs; if the surgeon just cuts straight down, they may make matters worse. Same thing with collapses, it’s like the game of pick-up sticks. But if a structural specialist can see inside the pile of pick-up sticks, they can extract the victim faster and safer with less risk of a secondary collapse.”
Besides these technical challenges, the other huge part to all of this is that any system that you’d hope to use in the context of rescuing people must be fully mature. It’s obviously unethical to take a research-grade robot into a situation like the Florida building collapse and spend time and resources trying to prove that it works. “Robots that get used for disasters are typically used every day for similar tasks,” explains Murphy. For example, it wouldn’t be surprising to see drones being used to survey the parts of the building in Florida that are still standing to make sure that it’s safe for people to work nearby, because drones are a mature and widely adopted technology that has already proven itself. Until a disaster robot has achieved a similar level of maturity, we’re not likely to see it take part in an active rescue.
Keeping in mind that there are no existing robots that fulfill all of the above criteria for actual use, we asked Murphy to describe her ideal disaster robot for us. “It would look like a very long, miniature ferret,” she says. “A long, flexible, snake-like body, with small legs and paws that can grab and push and shove.” The robo-ferret would be able to burrow, to wiggle and squish and squeeze its way through tight twists and turns, and would be equipped with functional eyelids to protect and clean its sensors. But since there are no robo-ferrets, what existing robot would Murphy like to see in Florida right now? “I’m not there in Miami,” Murphy tells us, “but my first thought when I saw this was I really hope that one day we’re able to commercialize Japan’s Active Scope Camera.”
The Active Scope Camera was developed at Tohoku University by Satoshi Tadokoro about 15 years ago. It operates kind of like a long, skinny, radially symmetrical bristlebot with the ability to push itself forward:
The hose is covered by inclined cilia. Motors with eccentric masses installed in the cable excite vibration, causing an up-and-down motion of the cable. The tips of the cilia stick to the floor when the cable moves down and propel the body forward; when the cable moves up, the tips slip against the floor, so the body does not move back. Repeating this process lets the cable slowly move through the narrow spaces of rubble piles.
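The mechanism is essentially a friction ratchet, and a few lines of Python show why it produces slow but useful net motion. Every number below (stroke length, grip efficiency, slip, vibration frequency) is invented for illustration, not taken from Tadokoro’s hardware:

```python
# Toy 1-D model of vibration-driven "cilia" propulsion: on the down-stroke
# the inclined tips grip the floor and the stroke's horizontal component
# advances the body; on the up-stroke the tips slip, so the body barely
# slides back. The asymmetry rectifies vibration into forward motion.
def net_advance(cycles, stroke_mm=0.5, grip_eff=0.9, slip_back=0.05):
    x = 0.0
    for _ in range(cycles):
        x += stroke_mm * grip_eff      # down-stroke: tips stick, body advances
        x -= stroke_mm * slip_back     # up-stroke: tips slip, tiny backslide
    return x  # net displacement in millimeters

freq_hz = 100    # assumed vibration frequency
seconds = 60
mm = net_advance(cycles=freq_hz * seconds)
print(f"~{mm / 1000:.2f} m per minute")  # a slow, steady crawl through rubble
```

The point of the model is the asymmetry: even a tiny per-cycle bias, repeated at vibration frequency, adds up to the kind of 30-foot penetration Murphy describes below.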
“It's quirky, but the idea of being able to get into those small spaces and go about 30 feet in and look around is a big deal,” Murphy says. But the last publication we can find about this system is nearly a decade old—if it works so well, we asked Murphy, why isn’t it more widely available to be used after a building collapses? “When a disaster happens, there’s a little bit of interest, and some funding. But then that funding goes away until the next disaster. And after a certain point, there’s just no financial incentive to create an actual product that’s reliable in hardware and software and sensors, because fortunately events like this building collapse are rare.”
Dr. Satoshi Tadokoro inserting the Active Scope Camera robot at the 2007 Berkman Plaza II (Jacksonville, FL) parking garage collapse.
Photo: Center for Robot-Assisted Search and Rescue
The fortunate rarity of disasters like these complicates the development cycle of disaster robots as well, says Murphy. That’s part of the reason why CRASAR exists in the first place—it’s a way for robotics researchers to understand what first responders need from robots, and to test those robots in realistic disaster scenarios to determine best practices. “I think this is a case where policy and government can actually help,” Murphy tells us. “They can help by saying, we do actually need this, and we’re going to support the development of useful disaster robots.”
Robots should be able to help out in the situation happening right now in Florida, and we should be spending more time and effort on research in that direction that could potentially be saving lives. We’re close, but as with so many aspects of practical robotics, it feels like we’ve been close for years. There are systems out there with a lot of potential; they just need all the help necessary to cross the gap from research project to a practical, useful system that can be deployed when needed.
#439495 Legged Robots Do Surprisingly Well in ...
Here on Earth, we’re getting good enough at legged robots that we’re starting to see a transition from wheels to legs for challenging environments, especially environments with some uncertainty as to exactly what kind of terrain your robot might encounter. Beyond Earth, we’re still heavily reliant on wheeled vehicles, but even that might be starting to change. While wheels do pretty well on the Moon and on Mars, there are lots of other places to explore, like smaller moons and asteroids. And there, it’s not just terrain that’s a challenge: it’s gravity.
In low gravity environments, any robot moving over rough terrain risks entering a flight phase. Perhaps an extended flight phase, depending on how low the gravity is, which can be dangerous to robots that aren’t prepared for it. Researchers at the Robotic Systems Lab at ETH Zurich have been doing some experiments with the SpaceBok quadruped, and they’ve published a paper in IEEE T-RO showing that it’s possible to teach SpaceBok to effectively bok around in low gravity environments while using its legs to reorient itself during flight, exhibiting “cat-like jumping and landing” behaviors through vigorous leg-wiggling.
Also, while I’m fairly certain that “bok” is not a verb that means “to move dynamically in low gravity using legs,” I feel like that’s what it should mean. Sort of like pronk, except in space. Let’s make it so!
Just look at that robot bok!
This reorientation technique was developed using deep reinforcement learning, and then transferred from simulation to a real SpaceBok robot, albeit in two degrees of freedom rather than three. The real challenge with this method is just how complicated things get when you start wiggling multiple limbs in the air trying to get to a specific configuration, since the dynamics here are (as the paper puts it) “highly non-linear,” and it proved somewhat difficult to even simulate everything well enough. What you see in the simulation, incidentally, is an environment similar to Ceres, the largest asteroid in the asteroid belt, which has a surface gravity of 0.03g.
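A drastically simplified version of the underlying physics helps explain what the learned policy is exploiting. In free flight, total angular momentum is conserved (here, zero), so cyclic leg motion with different inertia on the out-stroke and the return produces net body rotation. This sketch reduces the problem to one rotational degree of freedom with invented inertia values; none of these numbers come from the paper:

```python
# Hedged toy model of "cat-like" reorientation: with zero total angular
# momentum, I_body * w_body + I_legs * w_legs = 0 at every instant. Sweep
# the legs while extended (high inertia), return them tucked (low inertia),
# and the body's counter-rotations don't cancel -- a net rotation remains.
def body_rotation(strokes, I_body=1.0, I_ext=0.4, I_tuck=0.1, sweep_deg=60.0):
    theta = 0.0  # body orientation, degrees
    for _ in range(strokes):
        # forward sweep, legs extended: body counter-rotates strongly
        theta -= I_ext / (I_body + I_ext) * sweep_deg
        # return sweep, legs tucked: body counter-rotates back, but less
        theta += I_tuck / (I_body + I_tuck) * sweep_deg
    return theta

print(body_rotation(strokes=10))  # net body rotation after 10 wiggle cycles
```

The real problem is far harder—three axes, coupled non-linear dynamics, and contact at landing—which is why the ETH team reached for deep reinforcement learning rather than a hand-derived controller.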
Although SpaceBok has “space” right in the name, it’s not especially optimized for this particular kind of motion. As the video shows, having an actuated hip joint could make the difference between a reliable soft landing and, uh, not. Not landing softly is a big deal, because an uncontrolled bounce could send the robot flying huge distances, which is what happened to the Philae lander on comet 67P/Churyumov–Gerasimenko back in 2014.
For more details on SpaceBok’s space booking, we spoke with the paper’s first author, Nikita Rudin, via email.
IEEE Spectrum: Why are legs ideal for mobility in low gravity environments?
Rudin: In low gravity environments, rolling on wheels becomes more difficult because of reduced traction. However, legs can exploit the low gravity and use high jumps to move efficiently. With high jumps, you can also clear large obstacles along the way, which is harder to do in higher gravity.
Were there unique challenges to training your controller in 2D and 3D relative to training controllers for terrestrial legged robot motion?
The main challenge is the long flight phase, which is not present in terrestrial locomotion. In earth gravity, robots (and animals) use reaction forces from the ground to balance. During a jump, they don't usually need to re-orient themselves. In the case of low gravity, we have extended flight phases (multiple seconds) and only short contacts with the ground. The robot needs to be able to re-orient / balance in the air. Otherwise, a small disturbance at the moment of the jump will slowly flip the robot. In short, in low gravity, there is a new control problem that can be neglected on Earth.
Besides the addition of a hip joint, what other modifications would you like to make to the robot to enhance its capabilities? Would a tail be useful, for example? Or very heavy shoes?
A tail is a very interesting idea and heavy shoes would definitely help, however, they increase the total weight, which is costly in space. We actually add some minor weight to feet already (in the paper we analyze the effect of these weights). Another interesting addition would be a joint in the center of the robot allowing it to do cat-like backbone torsion.
How does the difficulty of this problem change as the gravity changes?
With changing gravity you change the importance of mid-air re-orientation compared to ground contacts. For locomotion, low-gravity is harder from the reasoning above. However, if the robot is dropped and needs to perform a flip before landing, higher gravity is harder because you have less time for the whole process.
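Rudin’s point about extended flight phases is easy to quantify with basic ballistics. For a hop with vertical takeoff speed v, the flight time is t = 2v/g; the takeoff speed below is an arbitrary 1 m/s, not a SpaceBok spec, and the Ceres figure uses the ~0.03 g surface gravity mentioned above:

```python
# Hang time of a ballistic hop under different gravities: t = 2 * v / g.
def hang_time(v_takeoff, g):
    return 2.0 * v_takeoff / g

for name, g in [("Earth", 9.81), ("Moon", 1.62), ("Ceres", 0.03 * 9.81)]:
    print(f"{name}: {hang_time(1.0, g):.1f} s in the air")
```

The same modest jump that is over in a fifth of a second on Earth leaves the robot airborne for multiple seconds on Ceres—plenty of time for a small takeoff disturbance to flip it, and plenty of time for leg-wiggling to fix that.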
What are you working on next?
We have a few ideas for the next projects including a legged robot specifically designed and certified for space and exploring cat-like re-orientation on earth with smaller/faster robots. We would also like to simulate a zero-g environment on earth by dropping the robot from a few dozens of meters into a safety net, and of course, a parabolic flight is still very much one of our objectives. However, we will probably need a smaller robot there as well.
Cat-Like Jumping and Landing of Legged Robots in Low Gravity Using Deep Reinforcement Learning, by Nikita Rudin, Hendrik Kolvenbach, Vassilios Tsounis, and Marco Hutter from ETH Zurich, is published in IEEE Transactions on Robotics.