Tag Archives: tech

#436167 Is it Time for Tech to Stop Moving Fast ...

On Monday, I attended the 2019 Fall Conference of Stanford’s Institute for Human-Centered Artificial Intelligence (HAI). That same night I watched the Season 6 opener of the HBO TV show Silicon Valley. The debates featured in both centered on the responsibility of tech companies for the societal effects of the technologies they produce. The two events have jumbled together in my mind, perhaps because I was in a bit of a brain fog, thanks to the nasty combination of a head cold and the smoke that descended on Silicon Valley from the northern California wildfires. But perhaps that mixture turned out to be a good thing.

What is clear, in spite of the smoke, is that this issue is something a lot of people are talking about, inside and outside of Silicon Valley (witness the viral video of Rep. Alexandria Ocasio-Cortez (D-NY) grilling Facebook CEO Mark Zuckerberg).

So, to add to that conversation, here’s my HBO Silicon Valley/Stanford HAI conference mashup.

Silicon Valley’s fictional CEO Richard Hendricks, in the opening scene of the episode, tells Congress that Facebook, Google, and Amazon only care about exploiting personal data for profit. He states:

“These companies are kings, and they rule over kingdoms far larger than any nation in history.”

Meanwhile Marietje Schaake, former member of the European Parliament and a fellow at HAI, told the conference audience of 900:

“There is a lot of power in the hands of few actors—Facebook decides who is a news source, Microsoft will run the defense department’s cloud…. I believe we need a deeper debate about which tasks need to stay in the hands of the public.”

Eric Schmidt, former CEO and executive chairman of Google, agreed. He says:

“It is important that we debate now the ethics of what we are doing, and the impact of the technology that we are building.”

Stanford Associate Professor Ge Wang, also speaking at the HAI conference, pointed out:

“‘Doing no harm’ is a vital goal, and it is not easy. But it is different from a proactive goal, to ‘do good.’”

Had Silicon Valley’s Hendricks been there, he would have agreed. He said in the episode:

“Just because it’s successful, doesn’t mean it’s good. Hiroshima was a successful implementation.”

The speakers at the HAI conference discussed the implications of moving fast and breaking things, of putting untested and unregulated technology into the world now that we know that things like public trust and even democracy can be broken.

Google’s Schmidt told the HAI audience:

“I don’t think that everything that is possible should be put into the wild in society; we should answer the question, collectively, how much risk are we willing to take.”

And Silicon Valley denizens, real and fictional, no longer think it’s OK to just say sorry afterwards. Says Schmidt:

“When you ask Facebook about various scandals, how can they still say ‘We are very sorry; we have a lot of learning to do.’ This kind of naiveté stands out of proportion to the power tech companies have. With great power should come great responsibility, or at least modesty.”

Schaake argued:

“We need more guarantees, institutions, and policies than stated good intentions. It’s about more than promises.”

Fictional CEO Hendricks thinks saying sorry is a cop-out as well. In the episode, a developer admits that his app collected user data in spite of Hendricks assuring Congress that his company doesn’t do that:

“You didn’t know at the time,” the developer says. “Don’t beat yourself up about it. But in the future, stop saying it. Or don’t; I don’t care. Maybe it will be like Google saying ‘Don’t be evil,’ or Facebook saying ‘I’m sorry, we’ll do better.’”

Hendricks doesn’t buy it:

“This stops now. I’m the boss, and this is over.”

(Well, he is fictional.)

How can government, the tech world, and the general public address this in a more comprehensive way? Out in the real world, the “what to do” discussion at Stanford HAI centered on regulation—how much, what kind, and when.

Says the European Parliament’s Schaake:

“An often-heard argument is that government should refrain from regulating tech because [regulation] will stifle innovation. [That argument] implies that innovation is more important than democracy or the rule of law. Our problems don’t stem from over-regulation, but from under-regulation of technologies.”

But when should that regulation happen? Stanford provost emeritus John Etchemendy, speaking from the audience at the HAI conference, said:

“I’ve been an advocate of not trying to regulate before you understand it. San Francisco’s ban on the use of facial recognition, for example, is not a good example of regulation; there are uses of facial recognition that we should allow. We want regulations that are just right, that prevent the bad things and allow the good things. So we are going to get it wrong either way; whether we regulate too soon or hold off, we will get some things wrong.”

Schaake would opt for regulating sooner rather than later. She says that she often hears the argument that it is too early to regulate artificial intelligence—as well as the argument that it is too late to regulate ad-based political advertising, or online privacy. Neither, to her, makes sense. She told the HAI attendees:

“We need more guarantees than stated good intentions.”

U.S. Chief Technology Officer Michael Kratsios would go with later rather than sooner. (And, yes, the country has a CTO. President Barack Obama created the position in 2009; Kratsios is the fourth to hold the office and the first under President Donald Trump. He was confirmed in August.) Also speaking at the HAI conference, Kratsios argued:

“I don’t think we should be running to regulate anything. We are a leader [in technology] not because we had great regulations, but we have taken a free market approach. We have done great in driving innovation in technologies that are born free, like the Internet. Technologies born in captivity, like autonomous vehicles, lag behind.”

In the fictional world of HBO’s Silicon Valley, startup founder Hendricks has a solution—a technical one of course: the decentralized Internet. He tells Congress:

“The way we win is by creating a new, decentralized Internet, one where the behavior of companies like this will be impossible, forever. Where it is the users, not the kings, who have sovereign control over their data. I will help you build an Internet that is of the people, by the people, and for the people.”

(This is not a fictional concept, though it is a long way from wide use. Also called the decentralized Web, the concept takes the content on today’s Web and fragments it, and then replicates and scatters those fragments to hosts around the world, increasing privacy and reducing the ability of governments to restrict access.)
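For readers curious about the mechanics, here is a toy sketch of the content-addressing and replication idea behind the decentralized Web. It is purely illustrative: the host names, fragment size, and replication factor are invented for the example, and no real protocol works exactly this way.

```python
# Toy illustration of the decentralized-web idea: fragment a document,
# name each fragment by its hash, and replicate fragments across hosts.
# Host names, fragment size, and replication factor are hypothetical.
import hashlib

HOSTS = ["host-a", "host-b", "host-c", "host-d"]  # stand-ins for independent peers
REPLICAS = 2                                      # copies kept of each fragment

def fragment(data: bytes, size: int = 16) -> list:
    """Split the content into fixed-size fragments."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def publish(data: bytes) -> dict:
    """Return a manifest mapping each fragment's hash to the hosts storing it."""
    manifest = {}
    for chunk in fragment(data):
        digest = hashlib.sha256(chunk).hexdigest()
        # Derive replica locations from the hash itself, so anyone holding the
        # manifest can locate fragments without asking a central index.
        start = int(digest, 16) % len(HOSTS)
        manifest[digest] = [HOSTS[(start + k) % len(HOSTS)] for k in range(REPLICAS)]
    return manifest

print(publish(b"an Internet of the people, by the people, and for the people"))
```

Because each fragment is named by its own hash, a copy fetched from any host can be checked against the manifest, which is part of why such content can survive individual hosts disappearing or being blocked.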

If neither regulation nor technology manages to make the world safe from the unforeseen effects of new technologies, there is one more hope, according to Schaake: the millennials and subsequent generations.

Tech companies can no longer pursue growth at all costs, not if they want to keep attracting the talent they need, says Schaake. She noted that “the young generation looks at the environment, at the homeless on the streets,” and they expect their companies to tackle those and other issues and make the world a better place. Continue reading

Posted in Human Robots

#436149 Blue Frog Robotics Answers (Some of) Our ...

In September of 2015, Buddy the social home robot closed its Indiegogo crowdfunding campaign more than 600 percent over its funding goal. A thousand people pledged for a robot originally scheduled to be delivered in December of 2016. But nearly three years later, the future of Buddy is still unclear. Last May, Blue Frog Robotics asked for forgiveness from its backers and announced the launch of an “equity crowdfunding campaign” to try to raise the additional funding necessary to deliver the robot in April of 2020.

By the time the crowdfunding campaign launched in August, the delivery date had slipped again, to September 2020, even as Blue Frog attempted to draw investors by estimating that sales of Buddy would “increase from 2000 robots in 2020 to 20,000 in 2023.” Blue Frog’s most recent communication with backers, in September, mentions a new CTO and a North American office, but does little to reassure backers of Buddy that they’ll ever be receiving their robot.

Backers of the robot are understandably concerned about the future of Buddy, so we sent a series of questions to the founder and CEO of Blue Frog Robotics, Rodolphe Hasselvander.

We’ve edited this interview slightly for clarity, but we should also note that Hasselvander was unable to provide answers to every question. In particular, we asked for some basic information about Blue Frog’s near-term financial plans, on which the entire future of Buddy seems to depend. We’ve left those questions in the interview anyway, along with Hasselvander’s response.

1. At this point, how much additional funding is necessary to deliver Buddy to backers?
2. Assuming funding is successful, when can backers expect to receive Buddy?
3. What happens if the fundraising goal is not met?
4. You estimate that sales of Buddy will increase 10x over three years. What is this estimate based on?

Rodolphe Hasselvander: Regarding questions 1-4: unfortunately, as we are fundraising under Regulation D, we do not comment on prospects, customer data, sales forecasts, or figures. Please refer to our press release here for information about the fundraising.

5. Do you feel that you are currently being transparent enough about this process to satisfy backers?
6. Buddy’s launch date has moved from April 2020 to September 2020 over the last four months. Why should backers remain confident about Buddy’s schedule?

Since the last newsletter, we haven’t changed our communication: the backers will be the first to receive their Buddy, and we plan an official launch in September 2020.

7. What is the goal of My Buddy World?

At Blue Frog, we think that matching a great product with a big market can only happen through continual experimentation, iteration and incorporation of customer feedback. That’s why we created the forum My Buddy World. It has been designed for our Buddy Community to join us, discuss the world’s first emotional robot, and create with us. The objective is to deepen our conversation with Buddy’s fans and users, stay agile in testing our hypothesis and validate our product-market fit. We trust the value of collaboration. Behind Buddy, there is a team of roboticists, engineers, and programmers that are eager to know more about our consumers’ needs and are excited to work with them to create the perfect human/robot experience.

8. How is the current version of Buddy different from the 2015 version that backers pledged for during the successful crowdfunding campaign, in both hardware and software?

We have completely revised some parts of Buddy as well as replaced and/or added more accurate and reliable components to ensure we fully satisfy our customers’ requirements for a mature and high-quality robot from day one. We sourced more innovative components to make sure that Buddy has the most up-to-date technologies, such as four microphones, a high-def thermal matrix, a 3D camera, an 8-megapixel RGB camera, time-of-flight sensors, and touch sensors.
If you want more info, we just posted an article about what Buddy is here.

9. Will the version of Buddy that ships to backers in 2020 do everything that was shown in the original crowdfunding video?

Concerning the capabilities of Buddy regarding the video published on YouTube, I confirm that Buddy will be able to do everything you can see, like patrol autonomously and secure your home, telepresence, mathematics applications, interactive stories for children, IoT/smart home management, face recognition, alarm clock, reminder, message/photo sharing, music, hands free call, people following, games like hide and seek (and more). In addition, everyone will be able to create their own apps thanks to the “BuddyLab” application.

10. What makes you confident that Buddy will be successful when Jibo, Kuri, and other social robots have not?

Consumer robotics is a new market. Some people think it is a tough one. But we, at Blue Frog Robotics, believe it is a path of learning, understanding, and finding new ways to serve consumers. Here are the five key factors that will make Buddy successful.

1) A market-fit robot

Blue Frog Robotics is a consumer-centric company. We know that a successful business model and a compelling market fit for Buddy must come from solving consumers’ frustrations and problems in a way that’s new and exciting. We started from there.

By leveraging existing research and syndicated consumer data sets to understand our customers’ needs and aspirations, we learned that creating a robot is not about the best tech innovation and features, but always about how well technology becomes a service to one’s basic human needs and assets: convenience, connection, security, fun, self-improvement, and time. To answer these consumers’ needs and wants, we designed an all-in-one robot with four vital capabilities: intelligence, emotionality, mobility, and customization.

With his multi-purpose brain, he addresses a broad range of needs in modern-day life, from securing homes to carrying out his owners’ daily activities, from helping people with disabilities to educating children, from entertaining to just becoming a robot friend.

Buddy is a disruptive, innovative robot that is about to transform the way we live, learn, utilize information, play, and even care about our health.

2) Endless possibilities

One of the major advantages of Buddy is his adaptability. Beyond being adorable, playful, and talkative, and accompanying anyone in their daily life at home whether they are comfortable with technology or not, he offers, via his platform, applications to engage his owners in a wide range of activities. From fitness to cooking, from health monitoring to education, from games to meditation, the combination of intelligence, sensors, mobility, and a multi-touch panel opens endless possibilities for consumers and organizations to adapt their Buddy to their own needs.

3) An affordable price

Buddy will be the first robot combining smart, social, and mobile capabilities and a developed platform with a personality to enter the U.S. market at an affordable price.

Our competitors are social or assistant robots but rarely both. Competitors differentiate themselves by features: mobile or non-mobile; by shape: humanoid or not; by skills: social versus smart; by target domain, like entertainment, retail assistance, eldercare, or education for children; and by price. Regarding our six competitors: Moorebot, Elli-Q, and Olly are not mobile; Lynx and Nao are in the toy category; Pepper is above $10k and targets the B2B market; and finally, Temi can’t be considered an emotional robot.

Buddy remains highly differentiated as an all-in-one, best-in-class experience, covering his owners’ needs for social interaction and assistance at each stage of their lives, at an affordable price.

The price range of Buddy will be between US $1700 and $2000.

4) A winning business model

Buddy’s great business model combines hardware, software, and services, and provides game-changing convenience for consumers, organizations, and developers.

Buddy offers a multi-sided value proposition focused on three vertical markets: direct consumers, corporations (healthcare, education, hospitality), and developers. The model creates engagement and sustained usage and produces stable and diverse cash flow.

5) A passion for people and technology

From day one, we have always believed in the power of our dream: to bring the services and the fun of an emotional robot into every home, every hospital, and every care home. Each day, we refuse to think that we are stuck or limited; we work hard to make Buddy a reality that will help people all over the world and make them smile.

While we certainly appreciate Hasselvander’s consistent optimism and obvious enthusiasm, we’re obligated to point out that some of our most important questions were not directly answered. We haven’t learned anything that makes us all that much more confident that Blue Frog will be able to successfully deliver Buddy this time. Hasselvander also didn’t address our specific question about whether he feels like Blue Frog’s communication strategy with backers has been adequate, which is particularly relevant considering that over the four months between the last two newsletters, Buddy’s launch date slipped by five months.

At this point, all we can do is hope that the strategy Blue Frog has chosen will be successful. We’ll let you know as soon as we learn more.

[ Buddy ] Continue reading

Posted in Human Robots

#436146 Video Friday: Kuka’s Robutt Is a ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

Kuka’s “robutt” can, according to the company, simulate “thousands of butts in the pursuit of durability and comfort.” Two of the robots are used at a Ford development center in Germany to evaluate new car seats. The tests are quite exhaustive, consisting of around 25,000 simulated sitting motions for each new seat design. Or as Kuka puts it, “Pleasing all the butts on the planet is serious business.”

[ Kuka ]

Here’s a clever idea: 3D printing manipulators, and then using the 3D printer head to move those manipulators around and do stuff with them:

[ Paper ]

Two former soldiers performed a series of tests to see if the ONYX Exoskeleton gave them extra strength and endurance in difficult environments.

So when can I rent one of these to help me move furniture?

[ Lockheed ]

One of the defining characteristics of legged robots in general (and humanoid robots in particular) is the ability to walk on various types of terrain. In this video, we show our humanoid robot TORO walking dynamically over uneven (on grass outside the lab), rough (large gravel), and compliant terrain (a soft gym mattress). The robot can maintain its balance even when the ground shifts rapidly underfoot, such as when walking over gravel. This behaviour showcases how torque control can quickly adapt contact forces, compared with position-control methods.

An in-depth discussion of the current implementation is presented in the paper “Dynamic Walking on Compliant and Uneven Terrain using DCM and Passivity-based Whole-body Control”.

[ DLR RMC ]

Tsuki is a ROS-enabled quadruped designed and built by Lingkang Zhang. It’s completely position controlled, with no contact sensors on the feet, or even an IMU.

It can even do flips!

[ Tsuki ]

Thanks Lingkang!

TRI CEO Dr. Gill Pratt presents TRI’s contributions to Toyota’s New “LQ” Concept Vehicle, which includes onboard artificial intelligence agent “Yui” and LQ’s automated driving technology.

[ TRI ]

Hooman Hedayati wrote in to share some work (presented at HRI this year) on using augmented reality to make drone teleoperation more intuitive. Get a virtual drone to do what you want first, and then the real drone will follow.

[ Paper ]

Thanks Hooman!

You can now order a Sphero RVR for $250. It’s very much not spherical, but it does other stuff, so we’ll give it a pass.

[ Sphero ]

The AI Gamer Q56 robot is an expert at whatever this game is, using AI plus actual physical control manipulation. Watch until the end!

[ Bandai Namco ]

We present a swarm of autonomous flying robots for the exploration of unknown environments. The tiny robots do not make maps of their environment, but deal with obstacles on the fly. In robotics, the algorithms for navigating like this are called “bug algorithms”. The navigation of the robots involves them first flying away from the base station and later finding their way back with the help of a wireless beacon.

[ MAVLab ]

Okay, Soft Robotics, you successfully and disgustingly convinced us that vacuum grippers should never be used for food handling. Yuck!

[ Soft Robotics ]

Beyond the asteroid belt are “fossils of planet formation” known as the Trojan asteroids. These primitive bodies share Jupiter’s orbit in two vast swarms, and may hold clues to the formation and evolution of our solar system. Now, NASA is preparing to explore the Trojan asteroids for the first time. A mission called Lucy will launch in 2021 and visit seven asteroids over the course of twelve years – one in the main belt and six in Jupiter’s Trojan swarms.

[ NASA ]

I’m not all that impressed by this concept car from Lexus except that it includes some kind of super-thin autonomous luggage-carrying drone.

The LF-30 Electrified also carries the ‘Lexus Airporter’ drone-technology support vehicle. Using autonomous control, the Lexus Airporter is capable of such tasks as independently transporting baggage from a household doorstep to the vehicle’s luggage area.

[ Lexus ]

Vision 60 legged robot managing unstructured terrain without vision or force sensors in its legs. Using only high-transparency actuators and 2 kHz algorithmic stability control… 4 limbs and 12 motors with only a velocity command.

[ Ghost Robotics ]

Tech United Eindhoven is looking good for RoboCup@Home 2020.

[ Tech United ]

Penn engineers participated in the Subterranean (SubT) Challenge hosted by DARPA, the Defense Advanced Research Projects Agency. The goal of this Challenge is for teams to develop automated systems that can work in underground environments so they could be deployed after natural disasters or on dangerous search-and-rescue missions.

[ Team PLUTO ]

It’s BeetleCam vs White Rhinos in Kenya, and the White Rhinos don’t seem to mind at all.

[ Will Burrard-Lucas ] Continue reading

Posted in Human Robots

#436140 Let’s Build Robots That Are as Smart ...

Illustration: Nicholas Little

Let’s face it: Robots are dumb. At best they are idiot savants, capable of doing one thing really well. In general, even those robots require specialized environments in which to do their one thing really well. This is why autonomous cars or robots for home health care are so difficult to build. They’ll need to react to an uncountable number of situations, and they’ll need a generalized understanding of the world in order to navigate them all.

Babies as young as two months already understand that an unsupported object will fall, while five-month-old babies know materials like sand and water will pour from a container rather than plop out as a single chunk. Robots lack these understandings, which hinders them as they try to navigate the world without a prescribed task and movement.

But we could see robots with a generalized understanding of the world (and the processing power required to wield it) thanks to the video-game industry. Researchers are bringing physics engines—the software that provides real-time physical interactions in complex video-game worlds—to robotics. The goal is to develop robots’ understanding in order to learn about the world in the same way babies do.
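As a concrete illustration of what such an engine gives a robot, here is a minimal sketch, not taken from any of the labs mentioned in this article, that uses the open-source Bullet engine through its PyBullet Python bindings to answer the same question a two-month-old can: what happens to an unsupported object?

```python
# Minimal sketch: let a physics engine supply the "unsupported objects fall"
# intuition. Assumes the pybullet and pybullet_data packages are installed.
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                               # headless simulation, no GUI
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)

p.loadURDF("plane.urdf")                          # the ground
box = p.loadURDF("cube_small.urdf", [0, 0, 1.0])  # a small box floating 1 m up

for _ in range(240):                              # step roughly 1 second at the default 240 Hz
    p.stepSimulation()

pos, _ = p.getBasePositionAndOrientation(box)
print("box height after 1 s: %.3f m" % pos[2])    # near zero: it fell, as a baby would expect
p.disconnect()
```

The point is not the falling box itself but that the answer comes from a general-purpose simulator rather than a hand-coded rule, so the same machinery can be asked about pouring sand, sliding boxes, or a shifting footstep.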

Giving robots a baby’s sense of physics helps them navigate the real world and can even save on computing power, according to Lochlainn Wilson, the CEO of SE4, a Japanese company building robots that could operate on Mars. SE4 plans to avoid the problems of latency caused by distance from Earth to Mars by building robots that can operate independently for a few hours before receiving more instructions from Earth.

Wilson says that his company uses simple physics engines such as PhysX to help build more-independent robots. He adds that if you can tie a physics engine to a coprocessor on the robot, the real-time basic physics intuitions won’t take compute cycles away from the robot’s primary processor, which will often be focused on a more complicated task.
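Wilson is describing dedicated hardware, but the division of labor is easy to sketch in software: keep the physics engine in its own worker process so that stepping the simulator never competes with the robot’s primary task for cycles. The sketch below is a hypothetical illustration of that pattern, again using PyBullet; it is not SE4’s actual architecture.

```python
# Hypothetical sketch: offload physics queries to a worker process so the
# main control loop only posts questions and reads back answers.
import multiprocessing as mp

def physics_worker(queries, answers):
    import pybullet as p
    import pybullet_data
    p.connect(p.DIRECT)
    p.setAdditionalSearchPath(pybullet_data.getDataPath())
    p.setGravity(0, 0, -9.81)
    p.loadURDF("plane.urdf")
    while True:
        drop_height = queries.get()               # e.g. "where does a box dropped from h end up?"
        if drop_height is None:                   # sentinel: shut down
            break
        box = p.loadURDF("cube_small.urdf", [0, 0, drop_height])
        for _ in range(240):                      # roughly one simulated second
            p.stepSimulation()
        answers.put(p.getBasePositionAndOrientation(box)[0])
        p.removeBody(box)
    p.disconnect()

if __name__ == "__main__":
    queries, answers = mp.Queue(), mp.Queue()
    worker = mp.Process(target=physics_worker, args=(queries, answers), daemon=True)
    worker.start()
    queries.put(1.0)                              # the control loop asks and keeps working
    print("predicted resting position:", answers.get())
    queries.put(None)
    worker.join()
```

On a real robot the worker would live on the coprocessor Wilson describes; the query-and-answer interface is what keeps basic physics reasoning off the primary processor.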

Wilson’s firm occasionally still turns to a traditional graphics engine, such as Unity or the Unreal Engine, to handle the demands of a robot’s movement. In certain cases, however, such as a robot accounting for friction or understanding force, you really need a robust physics engine, Wilson says, not a graphics engine that simply simulates a virtual environment. For his projects, he often turns to the open-source Bullet Physics engine built by Erwin Coumans, who is now an employee at Google.

Bullet is a popular physics-engine option, but it isn’t the only one out there. Nvidia Corp., for example, has realized that its gaming and physics engines are well-placed to handle the computing demands required by robots. In a lab in Seattle, Nvidia is working with teams from the University of Washington to build kitchen robots, fully articulated robot hands and more, all equipped with Nvidia’s tech.

When I visited the lab, I watched a robot arm move boxes of food from counters to cabinets. That’s fairly straightforward, but that same robot arm could avoid my body if I got in its way, and it could adapt if I moved a box of food or dropped it onto the floor.

The robot could also understand that less pressure is needed to grasp something like a cardboard box of Cheez-It crackers versus something more durable like an aluminum can of tomato soup.

Nvidia’s silicon has already helped advance the fields of artificial intelligence and computer vision by making it possible to process multiple decisions in parallel. It’s possible that the company’s new focus on virtual worlds will help advance the field of robotics and teach robots to think like babies.

This article appears in the November 2019 print issue as “Robots as Smart as Babies.” Continue reading

Posted in Human Robots

#436123 A Path Towards Reasonable Autonomous ...

Editor’s Note: The debate on autonomous weapons systems has been escalating over the past several years as the underlying technologies evolve to the point where their deployment in a military context seems inevitable. IEEE Spectrum has published a variety of perspectives on this issue. In summary, while there is a compelling argument to be made that autonomous weapons are inherently unethical and should be banned, there is also a compelling argument to be made that autonomous weapons could potentially make conflicts less harmful, especially to non-combatants. Despite an increasing amount of international attention (including from the United Nations), progress towards consensus, much less regulatory action, has been slow. The following workshop paper on autonomous weapons systems policy is remarkable because it was authored by a group of experts with very different (and in some cases divergent) views on the issue. Even so, they were able to reach consensus on a roadmap that all agreed was worth considering. It’s collaborations like this that could be the best way to establish a reasonable path forward on such a contentious issue, and with the permission of the authors, we’re excited to be able to share this paper (originally posted on Georgia Tech’s Mobile Robot Lab website) with you in its entirety.

Autonomous Weapon Systems: A Roadmapping Exercise
Over the past several years, there has been growing awareness and discussion surrounding the possibility of future lethal autonomous weapon systems that could fundamentally alter humanity’s relationship with violence in war. Lethal autonomous weapons present a host of legal, ethical, moral, and strategic challenges. At the same time, artificial intelligence (AI) technology could be used in ways that improve compliance with the laws of war and reduce non-combatant harm. Since 2014, states have come together annually at the United Nations to discuss lethal autonomous weapons systems1. Additionally, a growing number of individuals and non-governmental organizations have become active in discussions surrounding autonomous weapons, contributing to a rapidly expanding intellectual field working to better understand these issues. While a wide range of regulatory options have been proposed for dealing with the challenge of lethal autonomous weapons, ranging from a preemptive, legally binding international treaty to reinforcing compliance with existing laws of war, there is as yet no international consensus on a way forward.

The lack of an international policy consensus, whether codified in a formal document or otherwise, poses real risks. States could fall victim to a security dilemma in which they deploy untested or unsafe weapons that pose risks to civilians or international stability. Widespread proliferation could enable illicit uses by terrorists, criminals, or rogue states. Alternatively, a lack of guidance on which uses of autonomy are acceptable could stifle valuable research that could reduce the risk of non-combatant harm.

International debate thus far has predominantly centered around whether or not states should adopt a preemptive, legally-binding treaty that would ban lethal autonomous weapons before they can be built. Some of the authors of this document have called for such a treaty and would heartily support it, if states were to adopt it. Other authors of this document have argued an overly expansive treaty would foreclose the possibility of using AI to mitigate civilian harm. Options for international action are not binary, however, and there are a range of policy options that states should consider between adopting a comprehensive treaty or doing nothing.

The purpose of this paper is to explore the possibility of a middle road. If a roadmap could garner sufficient stakeholder support to have significant beneficial impact, then what elements could it contain? The exercise whose results are presented below was not to identify recommendations that the authors each prefer individually (the authors hold a broad spectrum of views), but instead to identify those components of a roadmap that the authors are all willing to entertain2. We, the authors, invite policymakers to consider these components as they weigh possible actions to address concerns surrounding autonomous weapons3.

Summary of Issues Surrounding Autonomous Weapons

There are a variety of issues that autonomous weapons raise, which might lend themselves to different approaches. A non-exhaustive list of issues includes:

The potential for beneficial uses of AI and autonomy that could improve precision and reliability in the use of force and reduce non-combatant harm.
Uncertainty about the path of future technology and the likelihood of autonomous weapons being used in compliance with the laws of war, or international humanitarian law (IHL), in different settings and on various timelines.
A desire for some degree of human involvement in the use of force. This has been expressed repeatedly in UN discussions on lethal autonomous weapon systems in different ways.
Particular risks surrounding lethal autonomous weapons specifically targeting personnel as opposed to vehicles or materiel.
Risks regarding international stability.
Risk of proliferation to terrorists, criminals, or rogue states.
Risk that autonomous systems that have been verified to be acceptable can be made unacceptable through software changes.
The potential for autonomous weapons to be used as scalable weapons enabling a small number of individuals to inflict very large-scale casualties at low cost, either intentionally or accidentally.

Summary of Components

A time-limited moratorium on the development, deployment, transfer, and use of anti-personnel lethal autonomous weapon systems4. Such a moratorium could include exceptions for certain classes of weapons.
Define guiding principles for human involvement in the use of force.
Develop protocols and/or technological means to mitigate the risk of unintentional escalation due to autonomous systems.
Develop strategies for preventing proliferation to illicit uses, such as by criminals, terrorists, or rogue states.
Conduct research to improve technologies and human-machine systems to reduce non-combatant harm and ensure IHL compliance in the use of future weapons.

Component 1:

States should consider adopting a five-year, renewable moratorium on the development, deployment, transfer, and use of anti-personnel lethal autonomous weapon systems. Anti-personnel lethal autonomous weapon systems are defined as weapons systems that, once activated, can select and engage dismounted human targets without further intervention by a human operator, possibly excluding systems such as:

Fixed-point defensive systems with human supervisory control to defend human-occupied bases or installations
Limited, proportional, automated counter-fire systems that return fire in order to provide immediate, local defense of humans
Time-limited pursuit deterrent munitions or systems
Autonomous weapon systems with size above a specified explosive weight limit that select as targets hand-held weapons, such as rifles, machine guns, anti-tank weapons, or man-portable air defense systems, provided there is adequate protection for non-combatants and ensuring IHL compliance5

The moratorium would not apply to:

Anti-vehicle or anti-materiel weapons
Non-lethal anti-personnel weapons
Research on ways of improving autonomous weapon technology to reduce non-combatant harm in future anti-personnel lethal autonomous weapon systems
Weapons that find, track, and engage specific individuals whom a human has decided should be engaged within a limited predetermined period of time and geographic region

Motivation:

This moratorium would pause development and deployment of anti-personnel lethal autonomous weapons systems to allow states to better understand the systemic risks of their use and to perform research that improves their safety, understandability, and effectiveness. Particular objectives could be to:

ensure that, prior to deployment, anti-personnel lethal autonomous weapons can be used in ways that are equal to or outperform humans in their compliance with IHL (other conditions may also apply prior to deployment being acceptable);
lay the groundwork for a potentially legally binding diplomatic instrument; and
decrease the geopolitical pressure on countries to deploy anti-personnel lethal autonomous weapons before they are reliable and well-understood.

Compliance Verification:

As part of a moratorium, states could consider various approaches to compliance verification. Potential approaches include:

Developing an industry cooperation regime analogous to that mandated under the Chemical Weapons Convention, whereby manufacturers must know their customers and report suspicious purchases of significant quantities of items such as fixed-wing drones, quadcopters, and other weaponizable robots.
Encouraging states to declare inventories of autonomous weapons for the purposes of transparency and confidence-building.
Facilitating scientific exchanges and military-to-military contacts to increase trust, transparency, and mutual understanding on topics such as compliance verification and safe operation of autonomous systems.
Designing control systems to require operator identity authentication and unalterable records of operation; enabling post-hoc compliance checks in case of plausible evidence of non-compliant autonomous weapon attacks.
Relating the quantity of weapons to corresponding capacities for human-in-the-loop operation of those weapons.
Designing weapons with air-gapped firing authorization circuits that are connected to the remote human operator but not to the on-board automated control system.
More generally, avoiding weapon designs that enable conversion from compliant to non-compliant categories or missions solely by software updates.
Designing weapons with formal proofs of relevant properties—e.g., the property that the weapon is unable to initiate an attack without human authorization. Proofs can, in principle, be provided using cryptographic techniques that allow the proofs to be checked by a third party without revealing any details of the underlying software.
Facilitating access to (non-classified) AI resources (software, data, methods for ensuring safe operation) for all states that remain in compliance and participate in transparency activities.

Component 2:

Define and universalize guiding principles for human involvement in the use of force.

Humans, not machines, are legal and moral agents in military operations.
It is a human responsibility to ensure that any attack, including one involving autonomous weapons, complies with the laws of war.
Humans responsible for initiating an attack must have sufficient understanding of the weapons, the targets, the environment and the context for use to determine whether that particular attack is lawful.
The attack must be bounded in space, time, target class, and means of attack in order for the determination about the lawfulness of that attack to be meaningful.
Militaries must invest in training, education, doctrine, policies, system design, and human-machine interfaces to ensure that humans remain responsible for attacks.

Component 3:

Develop protocols and/or technological means to mitigate the risk of unintentional escalation due to autonomous systems.

Specific potential measures include:

Developing safe rules for autonomous system behavior when in proximity to adversarial forces to avoid unintentional escalation or signaling. Examples include:

No-first-fire policy, so that autonomous weapons do not initiate hostilities without explicit human authorization.
A human must always be responsible for providing the mission for an autonomous system.
Taking steps to clearly distinguish exercises, patrols, reconnaissance, or other peacetime military operations from attacks in order to limit the possibility of reactions from adversary autonomous systems, such as autonomous air or coastal defenses.

Developing resilient communications links to ensure recallability of autonomous systems. Additionally, militaries should refrain from jamming others’ ability to recall their autonomous systems in order to afford the possibility of human correction in the event of unauthorized behavior.

Component 4:

Develop strategies for preventing proliferation to illicit uses, such as by criminals, terrorists, or rogue states:

Targeted multilateral controls to prevent large-scale sale and transfer of weaponizable robots and related military-specific components for illicit use.
Employ measures to render weaponizable robots less harmful (e.g., geofencing; hard-wired kill switch; onboard control systems largely implemented in unalterable, non-reprogrammable hardware such as application-specific integrated circuits).

Component 5:

Conduct research to improve technologies and human-machine systems to reduce non-combatant harm and ensure IHL-compliance in the use of future weapons, including:

Strategies to promote human moral engagement in decisions about the use of force
Risk assessment for autonomous weapon systems, including the potential for large-scale effects, geopolitical destabilization, accidental escalation, increased instability due to uncertainty about the relative military balance of power, and lowering thresholds to initiating conflict and for violence within conflict
Methodologies for ensuring the reliability and security of autonomous weapon systems
New techniques for verification, validation, explainability, characterization of failure conditions, and behavioral specifications.

About the Authors (in alphabetical order)

Ronald Arkin directs the Mobile Robot Laboratory at Georgia Tech.

Leslie Kaelbling is co-director of the Learning and Intelligent Systems Group at MIT.

Stuart Russell is a professor of computer science and engineering at UC Berkeley.

Dorsa Sadigh is an assistant professor of computer science and of electrical engineering at Stanford.

Paul Scharre directs the Technology and National Security Program at the Center for a New American Security (CNAS).

Bart Selman is a professor of computer science at Cornell.

Toby Walsh is a professor of artificial intelligence at the University of New South Wales (UNSW) Sydney.

The authors would like to thank Max Tegmark for organizing the three-day meeting from which this document was produced.

1 Autonomous Weapons System (AWS): A weapon system that, once activated, can select and engage targets without further intervention by a human operator.

2 There is no implication that some authors would not personally support stronger recommendations.

3 For ease of use, this working paper will frequently shorten “autonomous weapon system” to “autonomous weapon.” The terms should be treated as synonymous, with the understanding that “weapon” refers to the entire system: sensor, decision-making element, and munition.

4 Anti-personnel lethal autonomous weapon system: A weapon system that, once activated, can select and engage dismounted human targets with lethal force and without further intervention by a human operator.

5 The authors are not unanimous about this item because of concerns about ease of repurposing for mass-casualty missions targeting unarmed humans. The purpose of the lower limit on explosive payload weight would be to minimize the risk of such repurposing. There is precedent for using explosive weight limit as a mechanism of delineating between anti-personnel and anti-materiel weapons, such as the 1868 St. Petersburg Declaration Renouncing the Use, in Time of War, of Explosive Projectiles Under 400 Grammes Weight. Continue reading

Posted in Human Robots