Tag Archives: robots

#439589 Tiny ‘maniac’ robots could ...

Would you let a tiny MANiAC travel around your nervous system to treat you with drugs? You may be inclined to say no, but in the future, “magnetically aligned nanorods in alginate capsules” (MANiACs) may be part of an advanced arsenal of drug delivery technologies at doctors' disposal. A recent study in Frontiers in Robotics and AI is the first to investigate how such tiny robots might perform as drug delivery vehicles in neural tissue. The study finds that when controlled using a magnetic field, the tiny tumbling soft robots can move against fluid flow, climb slopes and move about neural tissues, such as the spinal cord, and deposit substances at precise locations. Continue reading

Posted in Human Robots

#439564 Video Friday: NASA Sending Robots to ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers.

It’s ICRA this week, but since the full proceedings are not yet available, we’re going to wait until we can access everything to cover the conference properly. Or, as properly as we can, not being in Xi’an right now.

We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

RoboCup 2021 – June 22-28, 2021 – [Online Event]
RSS 2021 – July 12-16, 2021 – [Online Event]
Humanoids 2020 – July 19-21, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27 – October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA
Let us know if you have suggestions for next week, and enjoy today's videos.

NASA has selected the DAVINCI+ (Deep Atmosphere Venus Investigation of Noble-gases, Chemistry and Imaging +) mission as part of its Discovery program, and it will be the first spacecraft to enter the Venus atmosphere since NASA’s Pioneer Venus in 1978 and USSR’s Vega in 1985.

The mission, Deep Atmosphere Venus Investigation of Noble gases, Chemistry, and Imaging Plus, will consist of a spacecraft and a probe. The spacecraft will track motions of the clouds and map surface composition by measuring heat emission from Venus’ surface that escapes to space through the massive atmosphere. The probe will descend through the atmosphere, sampling its chemistry as well as the temperature, pressure, and winds. The probe will also take the first high-resolution images of Alpha Regio, an ancient highland twice the size of Texas with rugged mountains, looking for evidence that past crustal water influenced surface materials.

Launch is targeted for FY2030.

[ NASA ]

Skydio has officially launched their 3D Scan software, turning our favorite fully autonomous drone into a reality capture system.

Skydio held a launch event at the U.S. Space & Rocket Center and the keynote is online; it's actually a fairly interesting 20 minutes with some cool rockets thrown in for good measure.

[ Skydio ]

Space robotics is a key technology for space exploration and an enabling factor for future missions, both scientific and commercial. Underwater tests are a valuable tool for validating robotic technologies for space. In DFKI’s test basin, even large robots can be tested in simulated micro-gravity with mostly unrestricted range of motion.

[ DFKI ]

The Harvard Microrobotics Lab has developed a soft robotic hand with dexterous soft fingers capable of some impressive in-hand manipulation, starting (obviously) with a head of broccoli.

Training soft robots in simulation has been a bit of a challenge, but the researchers developed their own simulation framework that matches the real world pretty closely:

The simulation framework is available to download and use, and you can do some nutty things with it, like simulating tentacle basketball:

I’d pay to watch that IRL.

[ Paper ] via [ Harvard ]

Using the navigation cameras on its mast, NASA’s Curiosity Mars rover captured this movie of clouds just after sunset on March 28, 2021, the 3,072nd sol, or Martian day, of the mission. These noctilucent, or twilight, clouds are made of water ice; the ice crystals reflect the setting sun, allowing the detail in each cloud to be seen more easily.

[ JPL ]

Genesis Robotics is working on something, and that's all we know.

[ Genesis Robotics ]

To further improve the autonomous capabilities of future space robots and to advance European efforts in this field, the European Union funded the ADE project, which was completed recently in Wulsbüttel near Bremen. There, the rover “SherpaTT” of the German Research Center for Artificial Intelligence (DFKI) managed to autonomously cover a distance of 500 meters in less than three hours thanks to the successful collaboration of 14 European partners.

[ DFKI ]

For $6.50, a NEXTAGE robot will make an optimized coffee for you. In Japan, of course.

[ Impress ]

Things I’m glad a robot is doing so that I don’t have to: dross skimming.

[ Fanuc ]

Today, anyone can hail a ride to experience the Waymo Driver with our fully autonomous ride-hailing service, Waymo One. Riders Ben and Ida share their experience on one of their recent multi-stop rides. Watch as they take us along for a ride.

[ Waymo ]

The IEEE Robotics and Automation Society Town Hall 2021 featured discussion around Diversity & Inclusion, RAS CARES committee & Code of Conduct, Gender Diversity, and the Developing Country Faculty Engagement Program.

[ IEEE RAS ] Continue reading

Posted in Human Robots

#439543 How Robots Helped Out After the Surfside ...

Editor's Note: Along with Robin Murphy, the authors of this article include David Merrick, Justin Adams, Jarrett Broder, Austin Bush, Laura Hart, and Rayne Hawkins. This team is with Florida State University's Disaster Incident Response Team, which was in Surfside for 24 days at the request of Florida US&R Task Force 1 (Miami Dade Fire Rescue Department).

On June 24, 2021, at 1:25AM, portions of the 12-story Champlain Towers South condominium in Surfside, Florida, collapsed, killing 98 people and injuring 11, making it the third-largest fatal collapse in US history. The life-saving and mitigation Response Phase, the phase where responders from local, state, and federal agencies searched for survivors, spanned June 24 to July 7, 2021. This article summarizes what is known about the use of robots at Champlain Towers South and offers insights into challenges for unmanned systems.

Small unmanned aerial systems (drones) were used immediately upon arrival by the Miami Dade Fire Rescue (MDFR) Department to survey the roughly 2.68-acre affected area. Drones, such as the DJI Mavic 2 Enterprise Dual with a spotlight payload and thermal imaging, flew in the dark to determine the scope of the collapse and search for survivors. Regional and state emergency management drone teams were requested later that day to supplement the effort of flying day and night for tactical life-saving operations and to add flights for strategic operations to support managing the overall response.

View of a Phantom 4 Pro in use for mapping the collapse on July 2, 2021. Two other drones were also in the airspace conducting other missions but are not visible. Photo: Robin R. Murphy
The teams brought at least 9 models of rotorcraft drones, including the DJI Mavic 2 Enterprise Dual, Mavic 2 Enterprise Advanced, DJI Mavic 2 Zoom, DJI Mavic Mini, DJI Phantom 4 Pro, DJI Matrice 210, Autel Dragonfish, and Autel EVO II Pro, plus a tethered Fotokite drone. The picture above shows a DJI Phantom 4 Pro in use, with one of the multiple cranes on the site visible. The number of flights for tactical operations was not recorded, but drones were flown for 304 missions for strategic operations alone, making the Surfside collapse the largest and longest recorded use of drones for a disaster, exceeding the records set by Hurricane Harvey (112 missions) and Hurricane Florence (260).

Unmanned ground bomb squad robots were reportedly used on at least two occasions in the standing portion of the structure during the response, once to investigate and document the garage and once on July 9 to hold a repeater for a drone flying in the standing portion of the garage. Note that details about the ground robots are not yet available and there may have been more missions, though not on the order of magnitude of the drone use. Bomb squad robots tend to be too large for use in areas other than the standing portions of the collapse.

This article concentrates on the use of the drones for tactical and strategic operations, as the authors were directly involved in those operations, and offers a preliminary analysis of the lessons learned. The full details of the response will not be available for many months due to the nature of an active investigation into the causes of the collapse and due to the privacy of the victims and their families.
Drone Use for Tactical Operations
Tactical operations were carried out primarily by MDFR, with other drone teams supporting when necessary to meet the workload. Drones were first used by the MDFR drone team, which arrived within minutes of the collapse as part of the escalating calls. The drone effort started with night operations for direct life-saving and mitigation activities. Small DJI Mavic 2 Enterprise Dual drones with thermal camera and spotlight payloads were used for general situation awareness, helping responders understand the extent of the collapse beyond what could be seen from the street side. The built-in thermal imager lacked the resolution to show much detail, since much of the material was at the same temperature and heat emissions appeared fuzzy. The spotlight with the standard visible-light camera was more effective, though its view was constricted. The drones were also used to look for survivors or trapped victims, help determine safety hazards to responders, and provide task force leaders with overwatch of the responders. During daylight, DJI Mavic 2 Zoom drones were added because of their higher-resolution zoom cameras. When fires started in the rubble, drones streaming video to bucket-truck operators were used to help optimize the placement of water. Drones were also used to locate civilians entering the restricted area or flying their own drones to take pictures.

As the response evolved, the use of drones was expanded to missions where the drones would fly in close proximity to structures and objects, fly indoors, and physically interact with the environment. For example, drones were used to read license plates to help identify residents, search for pets, and document belongings inside parts of the standing structure for families. In a novel use of drones for physical interaction, MDFR squads flew drones to attempt to find and pick up items in the standing portion of the structure with immense value to survivors. Before the demolition of the standing portion of the tower, MDFR used a drone to remove an American flag that had been placed on the structure during the initial search.

Drone Use for Strategic Operations

An orthomosaic of the collapse constructed from imagery collected by a drone on July 1, 2021.
Strategic operations were carried out by the Disaster Incident Research Team (DIRT) from the Florida State University Center for Disaster Risk Policy. The DIRT team is a state of Florida asset and was requested by Florida Task Force 1 when it was activated to assist later on June 24. FSU supported tactical operations but was solely responsible for collecting and processing imagery for use in managing the response. This data consisted primarily of orthomosaic maps (a single high-resolution image of the collapse created by stitching together individual high-resolution images, as in the image above) and digital elevation maps (created using structure from motion, below).

Digital elevation map constructed from imagery collected by a drone on June 27, 2021. Photo: Robin R. Murphy
These maps were collected every two to four hours during daylight, with FSU flying an average of 15.75 missions per day for the first two weeks of the response. The latest orthomosaic maps were downloaded at the start of a shift by the tactical responders for use as base maps on their mobile devices. In addition, a 3D reconstruction of the state of the collapse on July 4 was flown the afternoon before the standing portion was demolished, shown below.

GeoCam 3D reconstruction of the collapse on July 4, 2021. Photo: Robin R. Murphy
The mapping functions are notable because they require specialized software for data collection and post-processing, and the speed of post-processing depended on wireless connectivity. In order to stitch and fuse images without gaps or major misalignments, dedicated software packages are used to generate flight paths and to autonomously fly and trigger image capture with sufficient coverage of the collapse and overlap between images.
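
To make the overlap requirement concrete, here is a minimal sketch of the geometry such flight-planning software handles, assuming a simple nadir-pointing camera; the function names and parameter values are illustrative, not those of any package used at Surfside.

```python
import math

def ground_footprint(altitude_m, fov_deg):
    # Width of ground covered along one image axis by a nadir-pointing camera.
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)

def survey_spacing(altitude_m, h_fov_deg, v_fov_deg,
                   side_overlap=0.7, front_overlap=0.8):
    """Flight-line spacing and shutter-trigger distance for a mapping grid.

    side_overlap and front_overlap are the fractions of each image that must
    overlap its neighbors so the stitching software can match features.
    """
    line_spacing = ground_footprint(altitude_m, h_fov_deg) * (1.0 - side_overlap)
    trigger_dist = ground_footprint(altitude_m, v_fov_deg) * (1.0 - front_overlap)
    return line_spacing, trigger_dist

# Example: 60 m altitude with a hypothetical 73 x 53 degree field of view.
spacing, trigger = survey_spacing(60.0, 73.0, 53.0)
print(f"fly lines {spacing:.1f} m apart, one photo every {trigger:.1f} m")
```

Higher overlap gives the stitcher more matched features at the cost of more flight lines and longer missions, which matters when fresh maps are needed every two to four hours.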

Coordination of Drones on Site
The aerial assets were loosely coordinated through social media. All drone teams and Federal Aviation Administration (FAA) officials shared a WhatsApp group chat managed by MDFR. WhatsApp offered ease of use, compatibility with everyone's smartphones and mobile devices, and ease of adding pilots. Ease of adding pilots was important because many were not from MDFR and thus would not be in any personnel-oriented coordination system. The pilots did not have physical meetings or briefings as a whole, though the tactical and strategic operations teams did share a common space (nicknamed “Drone Zone”) while the National Institute of Standards and Technology teams worked from a separate staging location. If a pilot was approved by the MDFR drone captain, who served as the “air boss,” they were invited to the WhatsApp group chat and could then begin flying immediately without physically meeting the other pilots.

The teams flew concurrently and independently, without rigid, pre-specified altitude or area restrictions. A team would post which area of the collapse they were taking off to fly over and at what altitude, then post again when they landed. The easiest solution was for the pilots to be aware of each others' drones and adjust their missions, pause, or temporarily defer flights. If a pilot forgot to post, someone would send a teasing chat eliciting a rapid apology.
Incursions by civilian manned and unmanned aircraft into the restricted airspace did occur. If FAA observers or other pilots saw a drone flying that was not accounted for in the chat (e.g., five drones were visible over the area but only four were posted), or if a drone pilot saw a drone in an unexpected area, they would post a query asking if someone had forgotten to post or update a flight. If the drone remained unaccounted for, the FAA would assume that a civilian drone had violated the temporary flight restrictions and would search the surrounding area for the offending pilot.
Preliminary Lessons Learned
While the drone data and performance are still being analyzed, some lessons learned have emerged that may be of value to the robotics, AI, and engineering communities.
Tactical and strategic operations during the response phase favored small, inexpensive, easy-to-carry platforms with cameras supporting coarse structure from motion rather than larger, more expensive lidar systems. The added accuracy of lidar was not needed for those missions, though the greater accuracy and resolution of such systems were valuable for the forensic structural analysis. For tactical and strategic operations, the benefits of lidar were not worth the capital costs and logistical burden. Indeed, general-purpose consumer/prosumer drones that could fly day or night, indoors and outdoors, and for both mapping and first-person-view missions were highly preferred over specialized drones. The reliability of a drone was another major factor in choosing a specific model to field, again favoring consumer/prosumer drones, as they typically have hundreds of thousands of hours more flight time than specialized or novel drones. Tethered drones offer some advantages for overwatch, but many tactical missions require a great deal of mobility, and strategic mapping necessitates flying directly over the entire area being mapped.

While small, inexpensive general-purpose drones offered many advantages, they could be further improved for flying at night and indoors. A wider area of lighting would be helpful. So would 360-degree (spherical) obstacle-avoidance coverage for working indoors or at low altitudes, in close proximity to irregular work envelopes, and near people, especially at night. Systems such as the Flyability ELIOS 2 are designed to fly in narrow and highly cluttered indoor areas, but no models were available for the immediate response. Drone camera systems need to be able to look straight up to inspect the underside of structures or ceilings. Mechanisms for determining the accurate GPS location of a pixel in an image, not just the GPS location of the drone, are becoming increasingly desirable.
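
In the simplest case, that pixel-level georeferencing reduces to projecting the pixel through a camera model. Below is a rough sketch under strong simplifying assumptions (nadir-pointing camera, flat ground, no lens distortion or gimbal tilt); a fielded system would also need terrain data and a careful GPS/IMU error budget.

```python
import math

def pixel_to_ground(lat, lon, alt_agl_m, yaw_deg,
                    px, py, img_w, img_h, h_fov_deg, v_fov_deg):
    """Approximate (lat, lon) of pixel (px, py) in a nadir image.

    Assumes flat ground below the drone and ignores lens distortion and
    camera tilt; illustrative only.
    """
    # Offset of the pixel from the image center, in meters on the ground.
    dx = (px / img_w - 0.5) * 2 * alt_agl_m * math.tan(math.radians(h_fov_deg) / 2)
    dy = (0.5 - py / img_h) * 2 * alt_agl_m * math.tan(math.radians(v_fov_deg) / 2)
    # Rotate the camera-frame offset into north/east using the drone's yaw.
    yaw = math.radians(yaw_deg)
    north = dy * math.cos(yaw) - dx * math.sin(yaw)
    east = dy * math.sin(yaw) + dx * math.cos(yaw)
    # Convert the metric offset to degrees (small-angle approximation).
    dlat = north / 111_320.0
    dlon = east / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon
```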
Other technologies could benefit the enterprise but face challenges. Computer vision/machine learning (CV/ML) for searching for victims in rubble is often mentioned as a possible goal, but searching for victims who are not on the surface of the collapse is not a visually directed task. The portions of victims that are not covered by rubble are usually camouflaged with gray dust, so searches tend to favor canines using scent. Another challenge for CV/ML methods is the lack of access to training data. Privacy and ethical concerns pose barriers to the research community gaining access to imagery with victims in the rubble, and simulations may not have sufficient fidelity.
The collapse also motivates work in informatics, human-computer interaction, and human-robot interaction on the effective use of robots during a disaster, and it illustrates that a response does not follow a strictly centralized, hierarchical command structure and that the agencies and members of the response are not known in advance. Proposed systems must be flexible, robust, and easy to use. Furthermore, it is not clear that responders will accept a totally new software app versus making do with a general-purpose app such as WhatsApp that the majority routinely use for other purposes.
However, the biggest lesson learned is that robots are helpful and warrant more investment, particularly as many US states are proposing to terminate purchases of the very drone models that proved so effective, over cybersecurity concerns. There remains much work to be done by researchers, manufacturers, and emergency management to make these critical technologies more useful for extreme environments. Our current work focuses on creating open source datasets and documentation and on conducting a more thorough analysis to accelerate that process.

Value of Drones
The pervasive use of the drones indicates their implicit value in responding to, and documenting, the disaster. It is difficult to quantify the impact of drones, similar to the difficulty of quantifying the impact of a fire truck on firefighting or of mobile devices in general. Simply put, drones would not have been used beyond a few flights if they were not valuable.
The impact of the drones on tactical operations was immediate, as upon arrival MDFR flew drones to assess the extent of the collapse. Lighting on fire trucks primarily illuminated the street side of the standing portion of the building, while the drones, unrestricted by streets or debris, quickly expanded situation awareness of the disaster. The drones were then used to optimize the placement of water to suppress the fires in the debris. The impact of the use of drones for other tactical activities is harder to quantify, but the frequent flights and the pilots remaining on stand-by 24/7 indicate their value.
The impact of the drones on strategic operations was also considerable. The data collected by the drones and then processed into 2D maps and 3D models became a critical part of the US&R operations as well as one part of the nascent investigation into why the building failed. During initial operations, DIRT provided 2D maps to the US&R teams four times per day. These maps became the base layers for the mobile apps used on the pile to mark the locations of human remains, structural members of the building, personal effects, or other identifiable information. Updated orthophotos were critical to the accuracy of these reports. The apps running on mobile devices suffered from GPS accuracy issues, often with errors as high as ten meters. By having base imagery that was only hours old, mobile app users were able to 'drag the pin' to a more accurate report location on the pile by visualizing where they were standing compared to fresh UAS imagery. Without this capability, none of the GPS field data would have been of use to US&R or to investigators looking at why the structural collapse occurred. In addition to serving as a base layer on mobile applications, the updated map imagery was used in all tactical, operational, and strategic dashboards by the individual US&R teams as well as by the FEMA US&R Incident Support Team (IST) on site to assist in the management of the incident.
Aside from the 2D maps and orthophotos, 3D models were created from the drone data and used by structural experts to plan operations, including identifying areas with high probabilities of finding survivors or victims. Three-dimensional data created through post-processing also supported the demand for up-to-date volumetric estimates – how much material was being removed from the pile, and how much remained. These metrics provided clear indications of progress throughout the operations.
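
Those volumetric estimates come from differencing successive digital elevation maps. A toy sketch of the idea (hypothetical NumPy code; the grid values and cell size are made up) looks like this:

```python
import numpy as np

def volume_change(dem_before, dem_after, cell_size_m):
    """Estimate material removed/added between two co-registered DEMs.

    dem_before and dem_after are 2D arrays of elevations in meters on the
    same grid; NaN cells (no data) are ignored.
    """
    diff = dem_after - dem_before  # per-cell elevation change in meters
    cell_area = cell_size_m ** 2
    removed = -np.nansum(np.minimum(diff, 0.0)) * cell_area
    added = np.nansum(np.maximum(diff, 0.0)) * cell_area
    return removed, added

# Toy example: one cell of a 3x3 grid (0.5 m cells) drops by 2 m.
before = np.array([[2.0, 2.0, 2.0], [2.0, 3.0, 2.0], [2.0, 2.0, 2.0]])
after = np.array([[2.0, 2.0, 2.0], [2.0, 1.0, 2.0], [2.0, 2.0, 2.0]])
removed, added = volume_change(before, after, cell_size_m=0.5)
print(f"removed {removed:.2f} m^3, added {added:.2f} m^3")
```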
Acknowledgments
Portions of this work were supported by NSF grants IIS-1945105 and CMMI-2140451. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
The authors express their sincere condolences to the families of the victims. Continue reading

Posted in Human Robots

#439527 It’s (Still) Really Hard for Robots to ...

Every time we think that we’re getting a little bit closer to a household robot, new research comes out showing just how far we have to go. Certainly, we’ve seen lots of progress in specific areas like grasping and semantic understanding and whatnot, but putting it all together into a hardware platform that can actually get stuff done autonomously still seems quite a way off.

In a paper presented at ICRA 2021 this month, researchers from the University of Bremen conducted a “Robot Household Marathon Experiment,” where a PR2 robot was tasked with first setting a table for a simple breakfast and then cleaning up afterwards in order to “investigate and evaluate the scalability and the robustness aspects of mobile manipulation.” While this sort of thing kinda seems like something robots should have figured out, it may not surprise you to learn that it’s actually still a significant challenge.

PR2’s job here is to prepare breakfast by bringing a bowl, a spoon, a cup, a milk box, and a box of cereal to a dining table. After breakfast, the PR2 then has to place washable objects into the dishwasher, put the cereal box back into its storage location, and toss the milk box into the trash. The objects vary in shape and appearance, and the robot is only given symbolic descriptions of object locations (in the fridge, on the counter). It’s a very realistic but also very challenging scenario, which probably explains why it takes the poor PR2 90 minutes to complete it.

First off, kudos to that PR2 for still doing solid robotics research, right? And this research is definitely solid—the fact that all of this stuff works as well as it does, perception, motion planning, grasping, high level strategizing, is incredibly impressive. Remember, this is 90 minutes of full autonomy doing tasks that are relatively complex in an environment that’s only semi-structured and somewhat, but not overly, robot-optimized. In fact, over five trials, the robot succeeded in the table setting task five times. It wasn’t flawless, and the PR2 did have particular trouble with grasping tricky objects like the spoon, but the framework that the researchers developed was able to successfully recover from every single failure by tweaking parameters and retrying the failed action. Arguably, failing a lot but also being able to recover a lot is even more useful than not failing at all, if you think long term.
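
That recovery behavior is worth making concrete. The paper’s framework is considerably more sophisticated, but the core idea, retrying a failed action with perturbed parameters instead of aborting the whole task, can be sketched in a few lines; everything below is a hypothetical illustration, not the authors’ actual API.

```python
import random

class ActionFailure(Exception):
    """Raised by an action when perception, planning, or grasping fails."""

def execute_with_retries(action, params, max_retries=5, jitter=0.02):
    """Run an action; on failure, perturb its parameters and try again.

    Mirrors the recovery idea in spirit only: a failed grasp or placement
    is re-attempted with slightly different parameters (e.g., a shifted
    grasp pose) rather than failing the entire breakfast task.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return action(**params)
        except ActionFailure as failure:
            print(f"attempt {attempt} failed ({failure}); retrying")
            # Perturb the continuous parameters, leave everything else alone.
            params = {k: v + random.uniform(-jitter, jitter)
                      if isinstance(v, float) else v
                      for k, v in params.items()}
    raise ActionFailure(f"gave up after {max_retries} attempts")
```

The interesting design point is that failure gets treated as an expected outcome with a cheap response rather than an exceptional one, which is arguably what makes 90 minutes of unattended autonomy feasible.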

The clean up task was more difficult for the PR2, and it suffered unrecoverable failures during two of the five trials. The paper describes what happened:

Cleaning the table was more challenging than table setting, due to the use of the dishwasher and the difficulty of sideways grasping objects located far away from the edge of the table. In two out of the five runs we encountered an unrecoverable failure. In one of the runs, due to the instability of the grasping trajectory and the robot not tracking it perfectly, the fingers of the robot ended up pushing the milk away during grasping, which resulted in a very unstable grasp. As a result, the box fell to the ground in the carrying phase. Although during the table setting the robot was able to pick up a toppled over cup and successfully bring it to the table, picking up the milk box from the ground was impossible for the PR2. The other unrecoverable failure was the dishwasher grid getting stuck in PR2’s finger. Another major failure happened when placing the cereal box into its vertical drawer, which was difficult because the robot had to reach very high and approach its joint limits. When the gripper opened, the box fell on a side in the shelf, which resulted in it being crushed when the drawer was closed.

Failure cases included unstably grasping the milk, getting stuck in the dishwasher, and crushing the cereal box.
Photos: EASE

While we’re focusing a little bit on the failures here, that’s really just to illustrate the exceptionally challenging edge cases that the robot encountered. Again, I want to emphasize that while the PR2 was not successful all the time, its performance over 90 minutes of fully autonomous operation is still very impressive. And I really appreciate that the researchers committed to an experiment like this, putting their robot into a practical(ish) environment doing practical(ish) tasks under full autonomy over a long(ish) period of time. We often see lots of incremental research headed in this general direction, but it’ll take a lot more work like we’re seeing here for robots to get real-world useful enough to reliably handle those critical breakfast tasks.

The Robot Household Marathon Experiment, by Gayane Kazhoyan, Simon Stelter, Franklin Kenghagho Kenfack, Sebastian Koralewski and Michael Beetz from the CRC EASE at the Institute for Artificial Intelligence in Germany, was presented at ICRA 2021. Continue reading

Posted in Human Robots

#439509 What’s Going on With Amazon’s ...

Amazon’s innovation blog recently published a post entitled “New technologies to improve Amazon employee safety,” which highlighted four different robotic systems that Amazon’s Robotics and Advanced Technology teams have been working on. Three of these robotic systems are mobile robots, which have been making huge contributions to the warehouse space over the past decade. Amazon in particular was one of the first (if not the first) e-commerce companies to really understand the fundamental power of robots in warehouses, with their $775 million acquisition of Kiva Systems’ pod-transporting robots back in 2012.

Since then, a bunch of other robotics companies have started commercially deploying robots in warehouses, and over the past five years or so, we’ve seen some of those robots develop enough autonomy and intelligence to be able to operate outside of restricted, highly structured environments and work directly with humans. The market for autonomous mobile robots in warehouses is now highly competitive, with companies like Fetch Robotics, Locus Robotics, and OTTO Motors all offering systems that can zip payloads around busy warehouse floors safely and efficiently.

But if we’re to take the capabilities of the robots that Amazon showcased over the weekend at face value, the company appears to be substantially behind the curve on warehouse robots.

Let’s take a look at the three mobile robots that Amazon describes in their blog post:

“Bert” is one of Amazon’s first Autonomous Mobile Robots, or AMRs. Historically, it’s been difficult to incorporate robotics into areas of our facilities where people and robots are working in the same physical space. AMRs like Bert, which is being tested to autonomously navigate through our facilities with Amazon-developed advanced safety, perception, and navigation technology, could change that. With Bert, robots no longer need to be confined to restricted areas. This means that in the future, an employee could summon Bert to carry items across a facility. In addition, Bert might at some point be able to move larger, heavier items or carts that are used to transport multiple packages through our facilities. By taking those movements on, Bert could help lessen strain on employees.

This all sounds fairly impressive, but only if you’ve been checked out of the AMR space for the last few years. Amazon is presenting Bert as part of the “new technologies” they’re developing, and while that may be the case, as far as we can make out these are very much technologies that seem to be new mostly just to Amazon and not really to anyone else. There are any number of other companies who are selling mobile robot tech that looks to be significantly beyond what we’re seeing here—tech that (unless we’re missing something) has already largely solved many of the same technical problems that Amazon is working on.

We spoke with mobile robot experts from three different robotics companies, none of whom were comfortable going on record (for obvious reasons), but they all agreed that what Amazon is demonstrating in these videos appears to be 2+ years behind the state of the art in commercial mobile robots.

We’re obviously seeing a work in progress with Bert, but I’d be less confused if we were looking at a deployed system, because at least then you could make the argument that Amazon has managed to get something operational at (some) scale, which is much more difficult than a demo or pilot project. But the slow speed, the careful turns, the human chaperones—other AMR companies are way past this stage.

Kermit is an AGC (Autonomously Guided Cart) that is focused on moving empty totes from one location to another within our facilities so we can get empty totes back to the starting line. Kermit follows strategically placed magnetic tape to guide its navigation and uses tags placed along the way to determine if it should speed up, slow down, or modify its course in some way. Kermit is further along in development, currently being tested in several sites across the U.S., and will be introduced in at least a dozen more sites across North America this year.

Most folks in the mobile robots industry would hesitate to call Kermit an autonomous robot at all, which is likely why Amazon doesn’t refer to it as such, instead calling it a “guided cart.” As far as I know, pretty much every other mobile robotics company has done away with stuff like magnetic tape in favor of map-based natural-feature localization (a technology that has been commercially available for years), because then your robots can go anywhere in a mapped warehouse, not just on predefined paths. Even if you have a space and workflow that never ever changes, busy warehouses have paths that get blocked for one reason or another all the time, and modern AMRs are flexible enough to plan around those blockages to complete their tasks. These carts, locked to their tape, can’t even move over a couple of feet to get around an obstacle.
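
The difference is easy to see in miniature: a tape-guided cart has exactly one route, while a map-based AMR can replan when a cell along its route is blocked. Here’s a toy sketch (breadth-first search on a hand-made grid, nothing like a production planner) of the rerouting a tape follower simply can’t do:

```python
from collections import deque

def plan(grid, start, goal):
    """Shortest path on a grid via breadth-first search; '#' is blocked."""
    rows, cols = len(grid), len(grid[0])
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:  # walk the parent links back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and nxt not in came_from):
                came_from[nxt] = cur
                frontier.append(nxt)
    return None  # no route exists at all

# A pallet ('#') blocks the middle of the aisle; the planner routes around it.
warehouse = ["....",
             ".##.",
             "...."]
print(plan(warehouse, (0, 0), (2, 3)))
```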

I have no idea why this monstrous system called Scooter is the best solution for moving carts around a warehouse. It just seems needlessly huge and complicated, especially since we know Amazon already understands that a great way of moving carts around is by using much smaller robots that can zip underneath a cart, lift it up, and carry it around with them. Obviously, the Kiva drive units only operate in highly structured environments, but other AMR companies are making this concept work on the warehouse floor just fine.

Why is Amazon at “possibilities” when other companies are at commercial deployments?

I honestly just don’t understand what’s happening here. Amazon has (I assume) a huge R&D budget at its disposal. It was investing in robotic technology for e-commerce warehouses super early, and at an unmatched scale. Even beyond Kiva, Amazon obviously understood the importance of AMRs several years ago, with its $100+ million acquisition of Canvas Technology in 2019. But looking back at Canvas’ old videos, it seems like Canvas was doing in 2017 more or less what we’re seeing Amazon’s Bert robot doing now, nearly half a decade later.

We reached out to Amazon Robotics for comment and sent them a series of questions about the robots in these videos. They sent us this response:

The health and safety of our employees is our number one priority—and has been since day one. We’re excited about the possibilities robotics and other technology can play in helping to improve employee safety.

Hmm.

I mean, sure, I’m excited about the same thing, but I’m still stuck on why Amazon is at possibilities, while other companies are at commercial deployments. It’s certainly possible that the sheer Amazon-ness of Amazon is a significant factor here, in the sense that a commercial deployment for Amazon is orders of magnitude larger and more complex than any of the AMR companies that we’re comparing them to are dealing with. And if Amazon can figure out how to make (say) an AMR without using lidar, it would make a much more significant difference for an in-house large-scale deployment relative to companies offering AMRs as a service.

For another take on what might be going on with this announcement from Amazon, we spoke with Matt Beane, who got his PhD at MIT and studies robotics at UCSB’s Technology Management Program. At the ACM/IEEE International Conference on Human-Robot Interaction (HRI) last year, Beane published a paper on the value of robots as social signals—that is, organizations get valuable outcomes from just announcing they have robots, because this encourages key audiences to see the organization in favorable ways. “My research strongly suggests that Amazon is reaping signaling value from this announcement,” Beane told us. There’s nothing inherently wrong with signaling, because robots can create instrumental value, and that value needs to be communicated to the people who will, ideally, benefit from it. But you have to be careful: “My paper also suggests this can be a risky move,” explains Beane. “Blowback can be pretty nasty if the systems aren’t in full-tilt, high-value use. In other words, it works only if the signal pretty closely matches the internal reality.”

There’s no way for us to know what the internal reality at Amazon is. All we have to go on is this blog post, which isn’t much, and we should reiterate that there may be a significant gap between what the post is showing us about Amazon’s mobile robots and what’s actually going on at Amazon Robotics. My hope is that what we’re seeing here is primarily a sign that Amazon Robotics is starting to scale things up, and that we’re about to see them get a lot more serious about developing robots that will help make their warehouses less tedious, safer, and more productive. Continue reading

Posted in Human Robots