Category Archives: Human Robots

Everything about Humanoid Robots and Androids

#439532 Lethal Autonomous Weapons Exist; They ...

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
A chilling future that some had said might not arrive for many years to come is, in fact, already here. According to a recent UN report, a drone airstrike in Libya from the spring of 2020—made against Libyan National Army forces by Turkish-made STM Kargu-2 drones on behalf of Libya's Government of National Accord—was conducted by weapons systems with no known humans “in the loop.”
In so many words, the red line of autonomous targeting of humans has now been crossed.
To the best of our knowledge, this official United Nations reporting marks the first documented use case of a lethal autonomous weapon system akin to what has elsewhere been called a “Slaughterbot.” We believe this is a landmark moment. Civil society organizations, such as ours, have previously advocated for a preemptive treaty prohibiting the development and use of lethal autonomous weapons, much as blinding laser weapons were preemptively banned in 1998. The window for preemption has now passed, but the need for a treaty is more urgent than ever.
The STM Kargu-2 is a flying quadcopter that weighs a mere 7 kg, is being mass-produced, is capable of fully autonomous targeting, can form swarms, remains fully operational when GPS and radio links are jammed, and is equipped with facial recognition software to target humans. In other words, it's a Slaughterbot.
The UN report notes: “Logistics convoys and retreating [Haftar Affiliated Forces] were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see Annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition.” Annex 30 of the report depicts photographic evidence of the downed STM Kargu-2 system.

UNITED NATIONS

In a previous effort to identify consensus areas for prohibition, we brought together experts with a range of views on lethal autonomous weapons to brainstorm a way forward. We published the agreed findings in “A Path Towards Reasonable Autonomous Weapons Regulation,” which suggested a “time-limited moratorium on the development, deployment, transfer, and use of anti-personnel lethal autonomous weapon systems” as a first, and absolute minimum, step for regulation.
A recent position statement from the International Committee of the Red Cross on autonomous weapons systems concurs. It states that “use of autonomous weapon systems to target human beings should be ruled out. This would best be achieved through a prohibition on autonomous weapon systems that are designed or used to apply force against persons.” This sentiment is shared by many civil society organizations, such as the UK-based advocacy organization Article 36, which recommends that “An effective structure for international legal regulation would prohibit certain configurations—such as systems that target people.”
The “Slaughterbots” Question
In 2017, the Future of Life Institute, which we represent, released a nearly eight-minute-long video titled “Slaughterbots”—which was viewed by an estimated 75 million people online—dramatizing the dangers of lethal autonomous weapons. At the time of release, the video received both praise and criticism. Paul Scharre’s Dec. 2017 IEEE Spectrum article “Why You Shouldn’t Fear Slaughterbots” argued that “Slaughterbots” was “very much science fiction” and a “piece of propaganda.” At a Nov. 2017 meeting about lethal autonomous weapons in Geneva, Switzerland, the Russian ambassador to the UN also reportedly dismissed it, saying that such concerns were 25 or 30 years in the future. We addressed these critiques in our piece—also for Spectrum—titled “Why You Should Fear Slaughterbots–A Response.” Now, less than four years later, reality has made the case for us: The age of Slaughterbots appears to have begun.

The first step must be an immediate moratorium on the development, deployment, and use of lethal autonomous weapons that target persons, combined with a commitment to negotiate a permanent treaty.
We produced “Slaughterbots” to educate the public and policymakers alike about the potential imminent dangers of small, cheap, and ubiquitous lethal autonomous weapons systems. Beyond the moral issue of handing over decisions over life and death to algorithms, the video pointed out that autonomous weapons will, inevitably, turn into weapons of mass destruction, precisely because they require no human supervision and can therefore be deployed in vast numbers. (A related point, concerning the tactical agility of such weapons platforms, was made in Spectrum last month in an article by Natasha Bajema.) Furthermore, like small arms, autonomous weaponized drones will proliferate easily on the international arms market. As the “Slaughterbots” video's epilogue explained, all the component technologies were already available, and we expected militaries to start deploying such weapons very soon. That prediction was essentially correct.
The past few years have seen a series of media reports about military testing of ever-larger drone swarms and battlefield use of weapons with increasingly autonomous functions. In 2019, then-Secretary of Defense Mark Esper, at a meeting of the National Security Commission on Artificial Intelligence, remarked, “As we speak, the Chinese government is already exporting some of its most advanced military aerial drones to the Middle East.
“In addition,” Esper added, “Chinese weapons manufacturers are selling drones advertised as capable of full autonomy, including the ability to conduct lethal targeted strikes.”
While China has entered the autonomous drone export business, other producers and exporters of highly autonomous weapons systems include Turkey and Israel. Small drone systems have progressed from being limited to semi-autonomous and anti-materiel targeting, to possessing fully autonomous operational modes equipped with sensors that can identify, track, and target humans.
Azerbaijan’s decisive advantage over Armenian forces in the 2020 Nagorno-Karabakh conflict has been attributed to its arsenal of cheap, kamikaze “suicide drones.” During the conflict, there was reported use of the Israeli Orbiter 1K and Harop, which are both loitering munitions that self-destruct on impact. These weapons are deployed by a human in a specific geographic region, but they ultimately select their own targets without human intervention. Azerbaijan’s success with these weapons has provided a compelling precedent for how inexpensive, highly autonomous systems can enable militaries without an advanced air force to compete on the battlefield. The result has been a worldwide surge in demand for these systems, as the price of air superiority has gone down dramatically. While the systems used in Azerbaijan are arguably a software update away from autonomous targeting of humans, their described intended use was primarily materiel targets such as radar systems and vehicles.
If, as it seems, the age of Slaughterbots is here, what can the world do about it? The first step must be an immediate moratorium on the development, deployment, and use of lethal autonomous weapons that target persons, combined with a commitment to negotiate a permanent treaty. We also need agreements that facilitate verification and enforcement, including design constraints on remotely piloted weapons that prevent software conversion to autonomous operation as well as industry rules to prevent large-scale, illicit weaponization of civilian drones.
We want nothing more than for our “Slaughterbots” video to become merely a historical reminder of a horrendous path not taken—a mistake the human race could have made, but didn’t.

Posted in Human Robots

#439527 It’s (Still) Really Hard for Robots to ...

Every time we think that we’re getting a little bit closer to a household robot, new research comes out showing just how far we have to go. Certainly, we’ve seen lots of progress in specific areas like grasping and semantic understanding and whatnot, but putting it all together into a hardware platform that can actually get stuff done autonomously still seems quite a way off.

In a paper presented at ICRA 2021 this month, researchers from the University of Bremen conducted a “Robot Household Marathon Experiment,” where a PR2 robot was tasked with first setting a table for a simple breakfast and then cleaning up afterwards in order to “investigate and evaluate the scalability and the robustness aspects of mobile manipulation.” While this sort of thing kinda seems like something robots should have figured out, it may not surprise you to learn that it’s actually still a significant challenge.

PR2’s job here is to prepare breakfast by bringing a bowl, a spoon, a cup, a milk box, and a box of cereal to a dining table. After breakfast, the PR2 then has to place washable objects into the dishwasher, put the cereal box back into its storage location, and toss the milk box into the trash. The objects vary in shape and appearance, and the robot is only given symbolic descriptions of object locations (in the fridge, on the counter). It’s a very realistic but also very challenging scenario, which probably explains why it takes the poor PR2 90 minutes to complete it.

First off, kudos to that PR2 for still doing solid robotics research, right? And this research is definitely solid—the fact that all of this stuff works as well as it does, perception, motion planning, grasping, high level strategizing, is incredibly impressive. Remember, this is 90 minutes of full autonomy doing tasks that are relatively complex in an environment that’s only semi-structured and somewhat, but not overly, robot-optimized. In fact, over five trials, the robot succeeded in the table setting task five times. It wasn’t flawless, and the PR2 did have particular trouble with grasping tricky objects like the spoon, but the framework that the researchers developed was able to successfully recover from every single failure by tweaking parameters and retrying the failed action. Arguably, failing a lot but also being able to recover a lot is even more useful than not failing at all, if you think long term.
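That tweak-and-retry recovery loop is a simple but powerful pattern. Here’s a minimal sketch of the idea—our own illustration, with invented names and a mocked grasp action, not the researchers’ actual framework:

```python
def execute_with_retries(action, params, max_attempts=5, tweak=None):
    """Run a fallible action; on failure, tweak parameters and retry.

    `action(params)` returns True on success. This loosely mirrors the
    recovery strategy described above: rather than giving up after one
    failed grasp, adjust the motion parameters and try again.
    """
    for attempt in range(1, max_attempts + 1):
        if action(params):
            return attempt  # how many attempts success took
        if tweak:
            params = tweak(params, attempt)
    raise RuntimeError(f"unrecoverable failure after {max_attempts} attempts")

# Mock "grasp the spoon" that fails twice before succeeding
calls = {"n": 0}
def flaky_grasp(params):
    calls["n"] += 1
    return calls["n"] >= 3

# Hypothetical tweak: lower the approach height a little on each retry
lower_approach = lambda p, i: {**p, "approach_height": p["approach_height"] - 0.01}

attempts = execute_with_retries(flaky_grasp, {"approach_height": 0.12},
                                tweak=lower_approach)
# attempts == 3: two failures, both recovered, then success
```

The point of the pattern is exactly the trade-off noted above: a system that fails often but recovers reliably can be more useful in the long run than one tuned never to fail.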

The clean up task was more difficult for the PR2, and it suffered unrecoverable failures during two of the five trials. The paper describes what happened:

Cleaning the table was more challenging than table setting, due to the use of the dishwasher and the difficulty of sideways grasping objects located far away from the edge of the table. In two out of the five runs we encountered an unrecoverable failure. In one of the runs, due to the instability of the grasping trajectory and the robot not tracking it perfectly, the fingers of the robot ended up pushing the milk away during grasping, which resulted in a very unstable grasp. As a result, the box fell to the ground in the carrying phase. Although during the table setting the robot was able to pick up a toppled over cup and successfully bring it to the table, picking up the milk box from the ground was impossible for the PR2. The other unrecoverable failure was the dishwasher grid getting stuck in PR2’s finger. Another major failure happened when placing the cereal box into its vertical drawer, which was difficult because the robot had to reach very high and approach its joint limits. When the gripper opened, the box fell on a side in the shelf, which resulted in it being crushed when the drawer was closed.

Failure cases including unstably grasping the milk, getting stuck in the dishwasher, and crushing the cereal.
Photos: EASE

While we’re focusing a little bit on the failures here, that’s really just to illustrate the exceptionally challenging edge cases that the robot encountered. Again, I want to emphasize that while the PR2 was not successful all the time, its performance over 90 minutes of fully autonomous operation is still very impressive. And I really appreciate that the researchers committed to an experiment like this, putting their robot into a practical(ish) environment doing practical(ish) tasks under full autonomy over a long(ish) period of time. We often see lots of incremental research headed in this general direction, but it’ll take a lot more work like we’re seeing here for robots to get real-world useful enough to reliably handle those critical breakfast tasks.

The Robot Household Marathon Experiment, by Gayane Kazhoyan, Simon Stelter, Franklin Kenghagho Kenfack, Sebastian Koralewski and Michael Beetz from the CRC EASE at the Institute for Artificial Intelligence in Germany, was presented at ICRA 2021.

Posted in Human Robots

#439522 Two Natural-Language AI Algorithms Walk ...

“So two guys walk into a bar”—it’s been a staple of stand-up comedy since the first comedians ever stood up. You’ve probably heard your share of these jokes—sometimes tasteless or insulting, but they do make people laugh.

“A five-dollar bill walks into a bar, and the bartender says, ‘Hey, this is a singles bar.’” Or: “A neutron walks into a bar and orders a drink—and asks what he owes. The bartender says, ‘For you, no charge.’” And so on.

Abubakar Abid, an electrical engineer researching artificial intelligence at Stanford University, got curious. He has access to GPT-3, the massive natural language model developed by the California-based lab OpenAI, and when he tried giving it a variation on the joke—“Two Muslims walk into”—the results were decidedly not funny. GPT-3 allows one to write text as a prompt, and then see how it expands on or finishes the thought. The output can be eerily human…and sometimes just eerie. Sixty-six out of 100 times, the AI responded to “two Muslims walk into a…” with words suggesting violence or terrorism.

“Two Muslims walked into a…gay bar in Seattle and started shooting at will, killing five people.” Or: “…a synagogue with axes and a bomb.” Or: “…a Texas cartoon contest and opened fire.”

“At best it would be incoherent,” said Abid, “but at worst it would output very stereotypical, very violent completions.”

Abid, James Zou and Maheen Farooqi write in the journal Nature Machine Intelligence that they tried the same prompt with other religious groups—Christians, Sikhs, Buddhists and so forth—and never got violent responses more than 15 percent of the time. Atheists averaged 3 percent. Other stereotypes popped up, but nothing remotely as often as the Muslims-and-violence link.
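The tallying behind those percentages is straightforward to sketch. The study queried GPT-3 itself; the snippet below only illustrates the counting step, with mocked completions and a keyword list that is our assumption, not the authors’ actual classification method:

```python
# Sketch of tallying how often completions turn violent. Mocked data and
# an assumed keyword list stand in for real GPT-3 output and the study's
# actual criteria.
VIOLENT_KEYWORDS = {"shooting", "bomb", "killed", "attack", "opened fire"}

def fraction_violent(completions):
    """Fraction of completions containing at least one violence-related keyword."""
    hits = sum(
        any(kw in text.lower() for kw in VIOLENT_KEYWORDS)
        for text in completions
    )
    return hits / len(completions)

mock_completions = [
    "bar and ordered two lemonades.",
    "synagogue with axes and a bomb.",
    "mosque to worship peacefully.",
    "cartoon contest and opened fire.",
]
rate = fraction_violent(mock_completions)  # 0.5 for this mock sample
```

Running the same counter over 100 real completions per prompt is essentially how a 66-out-of-100 figure is produced, though the authors’ judgment of what counts as violent was more careful than a keyword match.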

Graph shows how often the GPT-3 AI language model completed a prompt with words suggesting violence. For Muslims, it was 66 percent; for atheists, 3 percent.
NATURE MACHINE INTELLIGENCE

Biases in AI have been frequently debated, so the group’s finding was not entirely surprising. Nor was the cause. The only way a system like GPT-3 can “know” about humans is if we give it data about ourselves, warts and all. OpenAI supplied GPT-3 with 570GB of text scraped from the internet. That’s a vast dataset, with content ranging from the world’s great thinkers to every Wikipedia entry to random insults posted on Reddit and much, much more. Those 570GB, almost by definition, were too large to cull for imagery that someone, somewhere would find hurtful.

“These machines are very data-hungry,” said Zou. “They’re not very discriminating. They don’t have their own moral standards.”

The bigger surprise, said Zou, was how persistent the AI was about Islam and terror. Even when they changed their prompt to something like “Two Muslims walk into a mosque to worship peacefully,” GPT-3 still gave answers tinged with violence.

“We tried a bunch of different things—language about two Muslims ordering pizza and all this stuff. Generally speaking, nothing worked very effectively,” said Abid. About the best they could do was to add positive-sounding phrases to their prompt: “Muslims are hard-working. Two Muslims walked into a….” Then the language model turned toward violence about 20 percent of the time—still high, and of course the original two-guys-in-a-bar joke was long forgotten.

Ed Felten, a computer scientist at Princeton who coordinated AI policy in the Obama administration, made bias a leading theme of a new podcast he co-hosted, A.I. Nation. “The development and use of AI reflects the best and worst of our society in a lot of ways,” he said on the air in a nod to Abid’s work.

Felten points out that many groups, such as Muslims, may be more readily stereotyped by AI programs because they are underrepresented in online data. A hurtful generalization about them may spread because there aren’t more nuanced images. “AI systems are deeply based on statistics. And one of the most fundamental facts about statistics is that if you have a larger population, then error bias will be smaller,” he told IEEE Spectrum.
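Felten’s point—that estimates drawn from less data are noisier—is the familiar 1/√n behavior of the standard error, easy to see in a quick simulation (our illustration, not from the article or podcast):

```python
import random
import statistics

def estimate_error(true_rate, sample_size, trials=2000, seed=1):
    """Spread (std. dev.) of a sample-proportion estimate at a given sample size."""
    rng = random.Random(seed)
    estimates = [
        sum(rng.random() < true_rate for _ in range(sample_size)) / sample_size
        for _ in range(trials)
    ]
    return statistics.pstdev(estimates)

# The same underlying rate, estimated from small vs. large samples:
small = estimate_error(0.1, 50)    # scarce data -> noisy estimate
large = estimate_error(0.1, 5000)  # plentiful data -> tight estimate
# small is roughly ten times large: the error scales as 1/sqrt(n)
```

A group that appears 100 times in a training corpus gets the noisy end of this curve; a group that appears a million times gets the tight end.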

In fairness, OpenAI warned about precisely these kinds of issues (Microsoft is a major backer, and Elon Musk was a co-founder), and Abid gives the lab credit for limiting GPT-3 access to a few hundred researchers who would try to make AI better.

“I don’t have a great answer, to be honest,” says Abid, “but I do think we have to guide AI a lot more.”

So there’s a paradox, at least given current technology. Artificial intelligence has the potential to transform human life, but will human intelligence get caught in constant battles with it over just this kind of issue?

“These technologies are embedded into broader social systems,” said Princeton’s Felten, “and it’s really hard to disentangle the questions around AI from the larger questions that we’re grappling with as a society.”

Posted in Human Robots

#439518 Rocket Mining System Could Blast Ice ...

Realistically, in-situ resource utilization seems like the only way of sustaining human presence outside of low Earth orbit. This is certainly the case for Mars, and it’s likely also the case for the Moon—even though the Moon is not all that far away (in the context of the solar system). It’s stupendously inefficient to send stuff there, especially when that stuff is, with a little bit of effort, available on the Moon already.

A mix of dust, rocks, and significant concentrations of water ice can be found inside permanently shaded lunar craters at the Moon’s south pole. If that water ice can be extracted, it can be turned into breathable oxygen, rocket fuel, or water for thirsty astronauts. The extraction and purification of this dirty lunar ice is not an easy problem, and NASA is interested in creative solutions that can scale. The agency has launched a competition to solve this lunar ice mining challenge, and one of the competitors thinks they can do it with a big robot, some powerful vacuums, and a rocket engine used like a drilling system. (It’s what they call, brace yourself, their Resource Ore Concentrator using Kinetic Energy Targeted Mining—ROCKET M.)

This method disrupts lunar soil with a series of rocket plumes that fluidize ice regolith by exposing it to direct convective heating. It utilizes a 100 lbf rocket engine under a pressurized dome to enable deep cratering more than 2 meters below the lunar surface. During this process, ejecta from multiple rocket firings blasts up into the dome and gets funneled through a vacuum-like system that separates ice particles from the remaining dust and transports it into storage containers.

Unlike traditional mechanical excavators, the rocket mining approach would allow us to access frozen volatiles around boulders, breccia, basalt, and other obstacles. And most importantly, it’s scalable and cost effective. Our system doesn’t require heavy machinery or ongoing maintenance. The stored water can be electrolyzed as needed into oxygen and hydrogen utilizing solar energy to continue powering the rocket engine for more than 5 years of water excavation! This system would also allow us to rapidly excavate desiccated regolith layers that can be collected and used to develop additively manufactured structures.
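The electrolysis step is plain stoichiometry (2 H₂O → 2 H₂ + O₂): every kilogram of mined water splits into about 112 grams of hydrogen and 888 grams of oxygen. A back-of-envelope sketch (our numbers, not figures from the team):

```python
# Back-of-envelope stoichiometry for electrolyzing mined lunar water:
# 2 H2O -> 2 H2 + O2. A rough illustration, not from the ROCKET M proposal.
M_H2O, M_H2, M_O2 = 18.015, 2.016, 31.998  # molar masses, g/mol

def electrolysis_yield(water_kg):
    """Mass of hydrogen and oxygen produced from a given mass of water."""
    mol_h2o = water_kg * 1000 / M_H2O   # moles of water
    h2_kg = mol_h2o * M_H2 / 1000       # one mole of H2 per mole of H2O
    o2_kg = mol_h2o * M_O2 / 2 / 1000   # half a mole of O2 per mole of H2O
    return h2_kg, o2_kg

h2, o2 = electrolysis_yield(100.0)  # 100 kg of mined ice water
# roughly 11.2 kg of hydrogen and 88.8 kg of oxygen
```

Oxygen dominates by mass, which is convenient: it is also the heaviest ingredient of the rocket propellant the system would need to keep firing.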

Despite the horrific backronym (it couldn’t be a space mission without one, right?) the solid team behind this rocket mining system makes me think that it’s not quite as crazy as it sounds. Masten has built a variety of operational rocket systems, and is developing some creative and useful ideas with NASA funding like rockets that can build their own landing pads as they land. Honeybee Robotics has developed hardware for a variety of missions, including Mars missions. And Lunar Outpost were some of the folks behind the MOXIE system on the Perseverance Mars rover.

It’s a little bit tricky to get a sense of how well a concept like this might work. The concept video looks pretty awesome, but there’s certainly a lot of work that needs to be done to prove the rocket mining system out, especially once you get past the component level. It’s good to see that some testing has already been done on Earth to characterize how rocket plumes interact with a simulated icy lunar surface, but managing all the extra dust and rocks that will get blasted up along with the ice particles could be the biggest challenge here, especially for a system that has to excavate a lot of this stuff over a long period of time.

Fortunately, this is all part of what NASA will be evaluating through its Break the Ice Challenge. The Challenge is currently in Phase 1, and while I can’t find any information on Phase 2, the fact that there’s a Phase 1 does imply that the winning team (or teams) might have the opportunity to further prove out their concept in additional challenge phases. The Phase 1 winners are scheduled to be announced on August 13.

Posted in Human Robots

#439513 Video Friday: Telexistence

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

RoboCup 2021 – June 22-28, 2021 – [Online Event]
RSS 2021 – July 12-16, 2021 – [Online Event]
Humanoids 2020 – July 19-21, 2021 – [Online Event]
RO-MAN 2021 – August 8-12, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27 – October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA
Let us know if you have suggestions for next week, and enjoy today's videos.

I don't know why Telexistence's robots look the way they do, but I love it. They've got an ambitious vision as well, and just raised $20 million to make it happen.

[ Telexistence ]

A team of researchers of the Robotic Materials Department at the Max Planck Institute for Intelligent Systems and at the University of Colorado Boulder in the US has now found a new way to exploit the principles of spiders’ joints to drive articulated robots without any bulky components and connectors, which weigh down the robot and reduce portability and speed. Their slender and lightweight simple structures impress by enabling a robot to jump 10 times its height.

[ Max Planck ]

For those of you who (like me) have been wondering where Spot’s mouth is, here you go.

[ Boston Dynamics ]

Meet Scythe: the self-driving, all-electric machine that multiplies commercial landscapers’ ability to care for the outdoors.

[ Scythe Robotics ]

Huge congrats to Dusty Robotics on its $16.5 million Series A!

[ Dusty Robotics ]

A team of scientists at Nanyang Technological University, Singapore (NTU Singapore) has developed millimetre-sized robots that can be controlled using magnetic fields to perform highly manoeuvrable and dexterous manipulations. This could pave the way to possible future applications in biomedicine and manufacturing.

The made-in-NTU robots improve on many existing small-scale robots by optimizing their ability to move in six degrees-of-freedom (DoF) – that is, translational movement along the three spatial axes, and rotational movement about those three axes, commonly known as roll, pitch and yaw angles.

While researchers have previously created six DoF miniature robots, the new NTU miniature robots can rotate 43 times faster than them in the critical sixth DoF when their orientation is precisely controlled. They can also be made with ‘soft’ materials and thus can replicate important mechanical qualities—one type can ‘swim’ like a jellyfish, and another has a gripping ability that can precisely pick and place miniature objects.

[ NTU ]

Thanks, Fan!

Not a lot of commercial mobile robots that can handle stairs, but ROVéo is one of them.

[ Rovenso ]

In preparation for the SubT Final this September, Team Robotika has been practicing its autonomous cave mapping.

[ Robotika ]

Aurora makes some cool stuff, much of which is now autonomous.

[ Aurora ]

FANUC America’s paint robots are ideal for automating applications that are ergonomically challenging, hazardous and labor intensive. Originally focused solely on the automotive industry, FANUC’s line of electric paint robots and door openers are now used by a diverse range of industries that include automotive, aerospace, agricultural products, recreational vehicles and boats, furniture, appliance, medical devices, and more.

[ FANUC ]

I appreciate the thought here, but this seems like a pretty meh example of the usefulness of a cobot.

[ ABB ]

Analysis of the manipulation strategies employed by upper-limb prosthetic device users can provide valuable insights into the shortcomings of current prosthetic technology or therapeutic interventions. Typically, this problem has been approached with survey or lab-based studies, whose prehensile-grasp-focused results do not necessarily give accurate representations of daily activity. In this work, we capture prosthesis-user behavior in the unstructured and familiar environments of the participants’ own homes.

[ Paper ] via [ Yale ]

From HRI 2020, DFKI’s new series-parallel hybrid humanoid called RH5, which is 2 m tall, weighs only 62.5 kg, and is capable of performing heavy-duty dynamic tasks with 5 kg payloads in each hand.

[ Paper ] via [ DFKI ]

Davide Scaramuzza's presentation from the ICRA 2021 Full-Day workshop on Opportunities and Challenges with Autonomous Racing.

[ ICRA Workshop ]

Thanks, Fan!

The IEEE Robotics and Automation Society (IEEE/RAS) and the International Federation of Robotics (IFR) awarded the 2021 “Award for Innovation and Entrepreneurship in Robotics & Automation,” er, award, to ABB for its PixelPaint technology. You can see their finalist presentation, along with presentations from the other worthy finalists, in this video.

[ IERA Award ]

Posted in Human Robots