A Path Towards Reasonable Autonomous Weapons Regulation

Editor's Note: The debate on autonomous weapons systems has been escalating over the past several years as the underlying technologies evolve to the point where their deployment in a military context seems inevitable. IEEE Spectrum has published a variety of perspectives on this issue. In summary, while there is a compelling argument to be made that autonomous weapons are inherently unethical and should be banned, there is also a compelling argument that autonomous weapons could make conflicts less harmful, especially to non-combatants. Despite increasing international attention (including from the United Nations), progress towards consensus, much less regulatory action, has been slow. The following workshop paper on autonomous weapons systems policy is remarkable because it was authored by a group of experts with very different (and in some cases divergent) views on the issue. Even so, they were able to reach consensus on a roadmap that all agreed was worth considering. Collaborations like this one may be the best way to establish a reasonable path forward on such a contentious issue, and with the permission of the authors, we are excited to share this paper (originally posted on Georgia Tech's Mobile Robot Lab website) in its entirety.

Autonomous Weapon Systems: A Roadmapping Exercise

Over the past several years, there has been growing awareness and discussion surrounding the possibility of future lethal autonomous weapon systems that could fundamentally alter humanity's relationship with violence in war. Lethal autonomous weapons present a host of legal, ethical, moral, and strategic challenges. At the same time, artificial intelligence (AI) technology could be used in ways that improve compliance with the laws of war and reduce non-combatant harm.

Since 2014, states have come together annually at the United Nations to discuss lethal autonomous weapon systems [1]. Additionally, a growing number of individuals and non-governmental organizations have become active in discussions surrounding autonomous weapons, contributing to a rapidly expanding intellectual field working to better understand these issues. While a wide range of regulatory options have been proposed for dealing with the challenge of lethal autonomous weapons, ranging from a preemptive, legally binding international treaty to reinforcing compliance with existing laws of war, there is as yet no international consensus on a way forward.

The lack of an international policy consensus, whether codified in a formal document or otherwise, poses real risks. States could fall victim to a security dilemma in which they deploy untested or unsafe weapons that pose risks to civilians or to international stability. Widespread proliferation could enable illicit uses by terrorists, criminals, or rogue states. Alternatively, a lack of guidance on which uses of autonomy are acceptable could stifle valuable research that could reduce the risk of non-combatant harm.

International debate thus far has centered predominantly on whether states should adopt a preemptive, legally binding treaty that would ban lethal autonomous weapons before they can be built. Some of the authors of this document have called for such a treaty and would heartily support it if states were to adopt it. Other authors have argued that an overly expansive treaty would foreclose the possibility of using AI to mitigate civilian harm.
Options for international action are not binary, however; there is a range of policy options that states should consider between adopting a comprehensive treaty and doing nothing. The purpose of this paper is to explore the possibility of a middle road. If a roadmap could garner sufficient stakeholder support to have a significant beneficial impact, what elements could it contain? The exercise whose results are presented below was not to identify the recommendations that each author would individually prefer (the authors hold a broad spectrum of views), but to identify those components of a roadmap that all of the authors are willing to entertain [2]. We, the authors, invite policymakers to consider these components as they weigh possible actions to address concerns surrounding autonomous weapons [3].

Summary of Issues Surrounding Autonomous Weapons

There are a variety of issues that autonomous weapons raise, which might lend themselves to different approaches. A non-exhaustive list of issues includes:
Summary of Components
Component 1: States should consider adopting a five-year, renewable moratorium on the development, deployment, transfer, and use of anti-personnel lethal autonomous weapon systems [4]. Anti-personnel lethal autonomous weapon systems are defined as weapon systems that, once activated, can select and engage dismounted human targets without further intervention by a human operator, possibly excluding systems such as:
The moratorium would not apply to:
Motivation: This moratorium would pause development and deployment of anti-personnel lethal autonomous weapons systems to allow states to better understand the systemic risks of their use and to perform research that improves their safety, understandability, and effectiveness. Particular objectives could be to:
Compliance Verification: As part of a moratorium, states could consider various approaches to compliance verification. Potential approaches include:
Component 2: Define and universalize guiding principles for human involvement in the use of force.
Component 3: Develop protocols and/or technological means to mitigate the risk of unintentional escalation due to autonomous systems. Specific potential measures include:
Component 4: Develop strategies for preventing proliferation to illicit uses, such as by criminals, terrorists, or rogue states:
Component 5: Conduct research to improve technologies and human-machine systems to reduce non-combatant harm and ensure compliance with international humanitarian law (IHL) in the use of future weapons, including:
About the Authors (in alphabetical order)

Ronald Arkin directs the Mobile Robot Laboratory at Georgia Tech. Leslie Kaelbling is co-director of the Learning and Intelligent Systems Group at MIT. Stuart Russell is a professor of computer science and engineering at UC Berkeley. Dorsa Sadigh is an assistant professor of computer science and of electrical engineering at Stanford. Paul Scharre directs the Technology and National Security Program at the Center for a New American Security (CNAS). Bart Selman is a professor of computer science at Cornell. Toby Walsh is a professor of artificial intelligence at the University of New South Wales (UNSW) Sydney.

The authors would like to thank Max Tegmark for organizing the three-day meeting from which this document was produced.

Footnotes

[1] Autonomous weapon system (AWS): A weapon system that, once activated, can select and engage targets without further intervention by a human operator.

[2] There is no implication that some authors would not personally support stronger recommendations.

[3] For ease of use, this working paper will frequently shorten "autonomous weapon system" to "autonomous weapon." The terms should be treated as synonymous, with the understanding that "weapon" refers to the entire system: sensor, decision-making element, and munition.

[4] Anti-personnel lethal autonomous weapon system: A weapon system that, once activated, can select and engage dismounted human targets with lethal force and without further intervention by a human operator.

[5] The authors are not unanimous about this item because of concerns about the ease of repurposing for mass-casualty missions targeting unarmed humans. The purpose of the lower limit on explosive payload weight would be to minimize the risk of such repurposing. There is precedent for using an explosive weight limit as a mechanism for delineating between anti-personnel and anti-materiel weapons, such as the 1868 St. Petersburg Declaration Renouncing the Use, in Time of War, of Explosive Projectiles Under 400 Grammes Weight.