Tag Archives: autonomous

#439929 GITAI’s Autonomous Robot Arm Finds ...

Late last year, Japanese robotics startup GITAI sent their S1 robotic arm up to the International Space Station as part of a commercial airlock extension module to test out some useful space-based autonomy. Everything moves pretty slowly on the ISS, so it wasn't until last month that NASA astronauts installed the S1 arm and GITAI was able to put the system through its paces—or rather, sit in comfy chairs on Earth and watch the arm do most of its tasks by itself, because that's the dream, right?

The good news is that everything went well, and the arm did everything GITAI was hoping it would do. So what's next for commercial autonomous robotics in space? GITAI's CEO tells us what they're working on.

In this technology demonstration, the GITAI S1 autonomous space robot was installed inside the ISS Nanoracks Bishop Airlock and succeeded in executing two tasks: assembling structures and panels for In-Space Assembly (ISA), and operating switches & cables for Intra-Vehicular Activity (IVA).

One of the advantages of working in space is that it's a highly structured environment. Microgravity can be somewhat unpredictable, but you have a very good idea of the characteristics of objects (and even of lighting) because everything that's up there is excessively well defined. So, stuff like using a two-finger gripper for relatively high precision tasks is totally possible, because the variation that the system has to deal with is low. Of course, things can always go wrong, so GITAI also tested teleop procedures from Houston to make sure that having humans in the loop was also an effective way of completing tasks.

Since full autonomy is vastly more difficult than almost full autonomy, occasional teleop is probably going to be critical for space robots of all kinds. We spoke with GITAI CEO Sho Nakanose to learn more about their approach.

IEEE Spectrum: What do you think is the right amount of autonomy for robots working inside of the ISS?

Sho Nakanose: We believe that a combination of 95% autonomous control and 5% remote judgment and operation is the most efficient way to work. In this ISS demonstration, all of the work was performed with 99% autonomous control and 1% remote decision making. However, in actual operations on the ISS, irregular tasks will occur that autonomous control cannot handle, and those tasks should be dealt with by remote control from the ground, so we expect the final ratio of about 5% remote judgment and operation to be the most efficient.

GITAI will apply the general-purpose autonomous space robotics technology, know-how, and experience acquired through this tech demo to develop extra-vehicular robotics (EVR) that can execute docking, repair, and maintenance tasks for On-Orbit Servicing (OOS) or conduct various activities for lunar exploration and lunar base construction. -Sho Nakanose

I'm sure you did many tests with the system on the ground before sending it to the ISS. How was operating the robot on the ISS different from the testing you had done on Earth?

The biggest difference between experiments on the ground and on the ISS is the microgravity environment, but that was not especially difficult to cope with. However, the ISS is an environment we had never operated in before, so we encountered a variety of unexpected situations that were extremely difficult to deal with; for example, an unexpected communication breakdown occurred due to a failed thruster firing experiment on the Russian module. We were able to solve all of these problems because the development team had carefully prepared for such irregularities in advance.

It looked like the robot was performing many tasks using equipment designed for humans. Do you think it would be better to design things like screws and control panels to make them easier for robots to see and operate?

Yes, I think so. Unlike the ISS that was built in the past, it is expected that humans and robots will cooperate to work together in the lunar orbiting space station Gateway and the lunar base that will be built in the future. Therefore, it is necessary to devise and implement an interface that is easy to use for both humans and robots. In 2019, GITAI received an order from JAXA to develop guidelines for an interface that is easy for both humans and robots to use on the ISS and Gateway.

What are you working on next?

We are planning to conduct an on-orbit extra-vehicular demonstration in 2023 and a lunar demonstration in 2025. We are also working on space robot development projects for several customers for which we have already received orders. Continue reading

Posted in Human Robots

#439826 Autonomous Racing Drones Dodge Through ...

It seems inevitable that sooner or later, the performance of autonomous drones will surpass the performance of even the best human pilots. Usually things in robotics that seem inevitable happen later as opposed to sooner, but drone technology seems to be the exception to this. We've seen an astonishing amount of progress over the past few years, even to the extent of sophisticated autonomy making it into the hands of consumers at an affordable price.

The cutting edge of drone research right now is putting drones with relatively simple onboard sensing and computing in situations that require fast and highly aggressive maneuvers. In a paper published yesterday in Science Robotics, roboticists from Davide Scaramuzza's Robotics and Perception Group at the University of Zurich, along with partners at Intel, demonstrate a small, self-contained, fully autonomous drone that can aggressively fly through complex environments at speeds of up to 40 kph.

The trick here, to the extent that there's a trick, is that the drone performs a direct mapping of sensor input (from an Intel RealSense D435 stereo depth camera) to collision-free trajectories. Conventional obstacle avoidance involves first collecting sensor data; making a map based on that sensor data; and finally making a plan based on that map. This approach works perfectly fine as long as you're not concerned with getting all of that done quickly, but for a drone with limited onboard resources moving at high speed, it just takes too long. UZH's approach is instead to go straight from sensor input to trajectory output, which is much faster and allows the speed of the drone to increase substantially.
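
To make the idea concrete, here is a minimal sketch (in PyTorch) of what a sensor-to-trajectory policy could look like: a small network that consumes a depth image plus the drone's current state and emits a short horizon of 3D waypoints. The architecture, input sizes, and waypoint format are illustrative assumptions, not the network from the paper.

```python
# Illustrative sensor-to-trajectory policy. Architecture, input sizes, and the
# waypoint output format are assumptions for illustration, not the paper's network.
import torch
import torch.nn as nn

class DepthToTrajectory(nn.Module):
    """Maps a depth image plus the drone's state to a short horizon of 3D waypoints."""

    def __init__(self, state_dim=10, num_waypoints=5):
        super().__init__()
        self.encoder = nn.Sequential(              # small CNN over the depth image
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Sequential(                 # fuse image features with drone state
            nn.Linear(32 * 4 * 4 + state_dim, 128), nn.ReLU(),
            nn.Linear(128, num_waypoints * 3),     # (x, y, z) per waypoint
        )

    def forward(self, depth, state):
        # depth: (B, 1, H, W) depth image; state: (B, state_dim) velocity, attitude, goal, etc.
        features = self.encoder(depth)
        flat = self.head(torch.cat([features, state], dim=1))
        return flat.view(flat.shape[0], -1, 3)     # (B, num_waypoints, 3) waypoints
```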

The convolutional network that performs this sensor-to-trajectory mapping was trained entirely in simulation, which is cheaper and easier but (I would have to guess) less fun than letting actual drones hammer themselves against obstacles over and over until they figure things out. In simulation, an “expert” with access to a complete 3D point cloud, perfect state estimation, and computation unconstrained by real-time requirements generates collision-free reference trajectories; those privileges are of course not available to a real drone. The student system, which will have to operate under real-life constraints, then simply learns in simulation to match the expert as closely as possible. That's how you get expert-level performance in a way that can be taken out of simulation and transferred to a real drone without any adaptation or fine-tuning.
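
A hedged sketch of that imitation setup is below. It assumes a student policy like the one sketched earlier and a simulator that labels each depth image and state with the privileged expert's waypoints; the loss and training details are illustrative, not the authors' actual recipe.

```python
# Illustrative privileged-expert imitation loop. The dataset format, loss, and
# optimizer settings are assumptions, not the authors' actual training recipe.
import torch
import torch.nn as nn

def train_student(student, dataloader, epochs=10, lr=1e-4):
    """dataloader yields (depth, state, expert_waypoints) tuples produced in simulation,
    where expert_waypoints come from a planner with access to privileged information
    (full 3D point cloud, perfect state, unlimited compute time)."""
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for depth, state, expert_wp in dataloader:
            pred_wp = student(depth, state)      # student only sees onboard-style inputs
            loss = loss_fn(pred_wp, expert_wp)   # match the expert's collision-free trajectory
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```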

The other big part of this is making that sim-to-real transition, which can be problematic because simulation doesn't always do a great job of simulating everything that happens in the world that can screw with a robot. But this method turns out to be very robust against motion blur, sensor noise, and other perception artifacts. The drone has successfully navigated through real world environments including snowy terrains, derailed trains, ruins, thick vegetation, and collapsed buildings.
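
One common ingredient in that kind of robustness (a generic sketch of the technique, not necessarily the paper's exact pipeline) is to corrupt the clean simulated depth images during training so the policy never learns to rely on perfect sensing:

```python
# Illustrative sensor-noise augmentation for simulated depth images. The noise
# model is a generic stand-in, not the paper's exact sim-to-real pipeline.
import numpy as np

def corrupt_depth(depth, dropout_p=0.02, noise_std=0.05, max_range=10.0, rng=None):
    """Make a clean simulated depth image look more like real stereo output."""
    rng = rng or np.random.default_rng()
    noisy = depth + rng.normal(0.0, noise_std * np.maximum(depth, 1e-6))  # range-dependent noise
    holes = rng.random(depth.shape) < dropout_p   # random invalid pixels (stereo dropout)
    noisy[holes] = 0.0                            # 0 = "no return", as many depth sensors report
    return np.clip(noisy, 0.0, max_range)
```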

“While humans require years to train, the AI, leveraging high-performance simulators, can reach comparable navigation abilities much faster, basically overnight.” -Antonio Loquercio, UZH

This is not to say that the performance here is flawless—the system still has trouble with very low illumination conditions (because the cameras simply can't see), as well as similar vision challenges like dust, fog, glare, and transparent or reflective surfaces. The training also didn't include dynamic obstacles, although the researchers tell us that moving things shouldn't be a problem even now as long as their speed relative to the drone is negligible. Many of these problems could potentially be mitigated by using event cameras rather than traditional cameras, since faster sensors, especially ones tuned to detect motion, would be ideal for high speed drones.

The researchers tell us that their system does not (yet) surpass the performance of expert humans in these challenging environments:

Analyzing their performance indicates that humans have a very rich and detailed understanding of their surroundings and are capable of planning and executing plans that span far in the future (our approach plans only one second into the future). Both are capabilities that today's autonomous systems still lack. We see our work as a stepping stone towards faster autonomous flight that is enabled by directly predicting collision-free trajectories from high-dimensional (noisy) sensory input.

This is one of the things that is likely coming next, though—giving the drone the ability to learn and improve from real-world experience. Coupled with more capable sensors and always increasing computer power, pushing that flight envelope past 40 kph in complex environments seems like it's not just possible, but inevitable. Continue reading

Posted in Human Robots

#439568 Corvus Robotics’ Autonomous Drones ...

Warehouses offer all kinds of opportunities for robots. Semi-structured controlled environments, lots of repetitive tasks, and humans that would almost universally rather be somewhere else. Robots have been doing great at taking over jobs that involve moving stuff from one place to another, but there are all kinds of other things that have to happen to keep warehouses operating efficiently.

Corvus Robotics, a YC-backed startup that's just coming out of stealth, has decided that they want to go after warehouse inventory tracking. That is, making sure that a warehouse knows exactly what's inside of it and where. This is a more complicated task than it seems like it should be, and not just any robot is able to do it. Corvus' solution involves autonomous drones that can fly unattended for weeks on end, collecting inventory data without any human intervention at all.

Many warehouses have a dedicated team of humans whose job is to wander around the warehouse scanning stuff to maintain an up to date list of where everything is, a task which is both very important and very boring. As it turns out, autonomous drones can scan up to ten times faster than humans—Corvus Robotics' drones are able to inventory an entire warehouse on a rolling basis in just a couple days, while it would take a human team weeks to do the same task.

Inventory is a significant opportunity for robotics, and we've seen a bunch of different attempts at doing inventory in places like supermarkets, but warehouses are different. Warehouses can be huge, in every dimension, meaning that the kinds of robots that can make supermarket inventory work just won't cut it in a warehouse environment for the simple reason that they can't see inventory stacked on shelves all the way to the ceiling, which can be over 20m high. And this is why the drone form factor, while novel, actually offers a uniquely useful solution.
It's probably fair to think of a warehouse as a semi-structured environment, with emphasis on the “semi.” At the beginning of a deployment, Corvus will generate one map of the operating area that includes both geometric and semantic information. After that, the drones will autonomously update that map with each flight throughout their entire lifetimes. There are walls and ceilings that don't move, along with large shelving units that are mostly stationary, but those things aren't going to do your localization system any favors since they all look the same. And the stuff that does offer some uniqueness, like the items on those shelves, is changing all the time. “That's a huge problem for us,” says Mohammed Kabir, Corvus Robotics' CTO. “Being able to do place recognition at the granularity that we need while everything is changing is really hard.” If you were looking closely at the video, you may have spotted some fiducials (optical patterns placed in the environment that vision systems find easy to spot), but we're told that the video was shot in Corvus Robotics' development warehouse where those markers are used for ground truth testing.
In real deployments, no fiducials or other external infrastructure are necessary. The drone has its charging dock and the initial map, but otherwise it relies on onboard visual-inertial SLAM (simultaneous localization and mapping), dense volumetric mapping, and motion planning, using its 10-camera array and an autonomy stack running on ROS, with PX4 handling real-time flight control. Corvus isn't willing to let us in on all of their secrets, but they did tell us that they incorporate some of the structured components of the environment into their SLAM solution, along with features that are semi-static, meaning they're unlikely to change over the duration of a single flight, which helps the drone with loop closure.
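
As a rough illustration of how semi-static structure might be favored for place recognition, here's a minimal sketch; the semantic classes, stability weights, and threshold are assumptions for illustration, not Corvus Robotics' actual implementation.

```python
# Illustrative filter that favors semi-static structure for place recognition.
# The semantic classes, stability weights, and threshold are assumptions,
# not Corvus Robotics' actual implementation.
from dataclasses import dataclass

STABILITY = {
    "wall": 1.0,              # never moves within a flight
    "ceiling": 1.0,
    "shelving_upright": 0.9,  # mostly stationary
    "pallet": 0.3,            # semi-static at best
    "box": 0.1,               # inventory changes constantly
}

@dataclass
class Landmark:
    descriptor: tuple      # appearance descriptor from the feature extractor
    semantic_class: str    # label from a semantic segmentation pass

def loop_closure_candidates(landmarks, min_weight=0.5):
    """Keep only landmarks stable enough to anchor loop closure in a changing warehouse."""
    return [lm for lm in landmarks
            if STABILITY.get(lm.semantic_class, 0.0) >= min_weight]
```
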
One of the big parts of being able to do this is the ability to localize in very large, unstructured environments where things are constantly changing, without having to rely on external infrastructure. For example, a WiFi connection back to our base station is not guaranteed, so everything needs to run on-board the drone, which is a non-trivial task. It's essentially all of the compute of a self-driving car, compressed into the drone. -Mohammed Kabir

Corvus is able to scan between 200 and 400 pallet positions per hour per drone, inclusive of recharge time. At ground level, this is probably about equivalent in speed to a human (although more sustainable). But as inventory gets higher off the ground, the drone maintains a constant scan rate, while for a human it gets much harder, involving things like strapping yourself to a forklift. And of course the majority of the items in a high warehouse are not at ground level, because ground level only covers a tier or two of a space that may soar to 20 meters. Overall, Corvus says that they can do inventory up to 10x faster than a human.
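
For a rough sense of scale, here's a back-of-the-envelope comparison. Only the 200 to 400 positions-per-hour drone rate comes from Corvus; the warehouse size, human scan rate, and team sizes below are hypothetical.

```python
# Back-of-the-envelope inventory timing; only the drone rate is from Corvus.
# The warehouse size, human rate, and team sizes are hypothetical.
PALLET_POSITIONS = 50_000   # hypothetical large warehouse
DRONE_RATE = 300            # positions/hour per drone, incl. recharge (mid-range of 200-400)
HUMAN_RATE = 30             # positions/hour, assumed average across all rack heights
NUM_DRONES = 3
NUM_HUMANS = 5

drone_hours = PALLET_POSITIONS / (DRONE_RATE * NUM_DRONES)   # ~56 h of continuous flying
human_hours = PALLET_POSITIONS / (HUMAN_RATE * NUM_HUMANS)   # ~333 h of scanning labor

print(f"Drone fleet: {drone_hours:.0f} h (~{drone_hours / 24:.1f} days, running around the clock)")
print(f"Human team:  {human_hours:.0f} h (~{human_hours / 40:.0f} weeks at 40 h/week)")
```
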
With a few exceptions, it's unlikely that most warehouses are going to be able to go human-free in the foreseeable future, meaning that any time you talk about robot autonomy, you also have to talk about safety. “We can operate when no one's around, so our customers often schedule the drones during the third shift when the warehouse is dark,” says Mohammed Kabir. “There are also customers who want us to operate around people, which initially terrified us, because interacting with humans can be quite tricky. But over the last couple years, we've built safety systems to be able to deal with that.” In addition to the collision avoidance that comes with the 360 degree vision system that the drone uses to navigate, it has a variety of safety-first behaviors all the way up to searching for clear flat spots to land in the event of an emergency. But it sounds like the primary way that Corvus tries to maintain safety is by keeping drones and humans as separate as possible, which may involve process changes for the warehouse, explains Corvus Robotics CEO Jackie Wu. “If you see a drone in an aisle, just don't go in until it's done.”
We also asked Wu about what exactly he means when he calls the Corvus Robotics' drone “fully autonomous,” because depending on who you ask (and what kind of robot and task you're talking about), full autonomy can mean a lot of different things.
For us, full autonomy means continuous end-to-end operation with no human in the loop within a certain scenario or environment. Obviously, it's not level five autonomy, because nobody is doing level five, which would take some kind of generalized intelligence that can fly anywhere. But, for level four, for the warehouse interior, the drones fly on scheduled missions, intelligently find objects of interest while avoiding collisions, come back to land, recharge and share that data, all without anybody touching them. And we're able to do this repeatedly, without external localization infrastructure. -Jackie Wu

As tempting as it is, we're not going to get into the weeds here about what exactly constitutes “full autonomy” in the context of drones. Well, okay, maybe we'll get into the weeds a little bit, just to say that being able to repeatedly do a useful task end-to-end without a human in the loop seems close enough to whatever your definition of full autonomy is that it's probably a fair term to apply here. Are there other drones that are arguably more autonomous, in the sense that they require even less structure in the environment? Sure. Are those same drones arguably less autonomous because they don't autonomously recharge? Probably. Corvus Robotics' perspective that the ability to run a drone autonomously for weeks at a time is a more important component of autonomy is perfectly valid considering their use case, but I think we're at the point where “full autonomy” at this level is becoming domain-specific enough to make direct comparisons difficult and maybe not all that useful.
Corvus has just recently come out of stealth, and they're currently working on pilot projects with a handful of Global 2000 companies. Continue reading

Posted in Human Robots

#439532 Lethal Autonomous Weapons Exist; They ...

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
A chilling future that some had said might not arrive for many years to come is, in fact, already here. According to a recent UN report, a drone airstrike in Libya from the spring of 2020—made against Libyan National Army forces by Turkish-made STM Kargu-2 drones on behalf of Libya's Government of National Accord—was conducted by weapons systems with no known humans “in the loop.”
In so many words, the red line of autonomous targeting of humans has now been crossed.
To the best of our knowledge, this official United Nations reporting marks the first documented use case of a lethal autonomous weapon system akin to what has elsewhere been called a “Slaughterbot.” We believe this is a landmark moment. Civil society organizations, such as ours, have previously advocated for a preemptive treaty prohibiting the development and use of lethal autonomous weapons, much as blinding weapons were preemptively banned in 1998. The window for preemption has now passed, but the need for a treaty is more urgent than ever.
The STM Kargu-2 is a flying quadcopter that weighs a mere 7 kg, is being mass-produced, is capable of fully autonomous targeting, can form swarms, remains fully operational when GPS and radio links are jammed, and is equipped with facial recognition software to target humans. In other words, it's a Slaughterbot.
The UN report notes: “Logistics convoys and retreating [Haftar Affiliated Forces] were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see Annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition.” Annex 30 of the report depicts photographic evidence of the downed STM Kargu-2 system.

In a previous effort to identify consensus areas for prohibition, we brought together experts with a range of views on lethal autonomous weapons to brainstorm a way forward. We published the agreed findings in “A Path Towards Reasonable Autonomous Weapons Regulation,” which suggested a “time-limited moratorium on the development, deployment, transfer, and use of anti-personnel lethal autonomous weapon systems” as a first, and absolute minimum, step for regulation.
A recent position statement from the International Committee of the Red Cross on autonomous weapons systems concurs. It states that “use of autonomous weapon systems to target human beings should be ruled out. This would best be achieved through a prohibition on autonomous weapon systems that are designed or used to apply force against persons.” This sentiment is shared by many civil society organizations, such as the UK-based advocacy organization Article 36, which recommends that “An effective structure for international legal regulation would prohibit certain configurations—such as systems that target people.”
The “Slaughterbots” Question
In 2017, the Future of Life Institute, which we represent, released a nearly eight-minute-long video titled “Slaughterbots”—which was viewed by an estimated 75 million people online—dramatizing the dangers of lethal autonomous weapons. At the time of release, the video received both praise and criticism. Paul Scharre's Dec. 2017 IEEE Spectrum article “Why You Shouldn't Fear Slaughterbots” argued that “Slaughterbots” was “very much science fiction” and a “piece of propaganda.” At a Nov. 2017 meeting about lethal autonomous weapons in Geneva, Switzerland, the Russian ambassador to the UN also reportedly dismissed it, saying that such concerns were 25 or 30 years in the future. We addressed these critiques in our piece—also for Spectrum—titled “Why You Should Fear Slaughterbots–A Response.” Now, less than four years later, reality has made the case for us: The age of Slaughterbots appears to have begun.

We produced “Slaughterbots” to educate the public and policymakers alike about the potential imminent dangers of small, cheap, and ubiquitous lethal autonomous weapons systems. Beyond the moral issue of handing life-and-death decisions over to algorithms, the video pointed out that autonomous weapons will, inevitably, turn into weapons of mass destruction, precisely because they require no human supervision and can therefore be deployed in vast numbers. (A related point, concerning the tactical agility of such weapons platforms, was made in Spectrum last month in an article by Natasha Bajema.) Furthermore, like small arms, autonomous weaponized drones will proliferate easily on the international arms market. As the “Slaughterbots” video's epilogue explained, all the component technologies were already available, and we expected militaries to start deploying such weapons very soon. That prediction was essentially correct.
The past few years have seen a series of media reports about military testing of ever-larger drone swarms and battlefield use of weapons with increasingly autonomous functions. In 2019, then-Secretary of Defense Mark Esper, at a meeting of the National Security Commission on Artificial Intelligence, remarked, “As we speak, the Chinese government is already exporting some of its most advanced military aerial drones to the Middle East.
“In addition,” Esper added, “Chinese weapons manufacturers are selling drones advertised as capable of full autonomy, including the ability to conduct lethal targeted strikes.”
While China has entered the autonomous drone export business, other producers and exporters of highly autonomous weapons systems include Turkey and Israel. Small drone systems have progressed from being limited to semi-autonomous and anti-materiel targeting, to possessing fully autonomous operational modes equipped with sensors that can identify, track, and target humans.
Azerbaijan's decisive advantage over Armenian forces in the 2020 Nagorno-Karabakh conflict has been attributed to their arsenal of cheap, kamikaze “suicide drones.” During the conflict, there was reported use of the Israeli Orbiter 1K and Harop, which are both loitering munitions that self-destruct on impact. These weapons are deployed by a human in a specific geographic region, but they ultimately select their own targets without human intervention. Azerbaijan's success with these weapons has provided a compelling precedent for how inexpensive, highly autonomous systems can enable militaries without an advanced air force to compete on the battlefield. The result has been a worldwide surge in demand for these systems, as the price of air superiority has gone down dramatically. While the systems used in Azerbaijan are arguably a software update away from autonomous targeting of humans, their described intended use was primarily materiel targets such as radar systems and vehicles.
If, as it seems, the age of Slaughterbots is here, what can the world do about it? The first step must be an immediate moratorium on the development, deployment, and use of lethal autonomous weapons that target persons, combined with a commitment to negotiate a permanent treaty. We also need agreements that facilitate verification and enforcement, including design constraints on remotely piloted weapons that prevent software conversion to autonomous operation as well as industry rules to prevent large-scale, illicit weaponization of civilian drones.
We want nothing more than for our “Slaughterbots” video to become merely a historical reminder of a horrendous path not taken—a mistake the human race could have made, but didn't. Continue reading

Posted in Human Robots

#439380 Autonomous excavators ready for around ...

Researchers from Baidu Research Robotics and Auto-Driving Lab (RAL) and the University of Maryland, College Park, have introduced an autonomous excavator system (AES) that can perform material loading tasks for long durations without any human intervention, while offering performance comparable to that of an experienced human operator. Continue reading

Posted in Human Robots