Tag Archives: counter
#436176 We’re Making Progress in Explainable ...
Machine learning algorithms are starting to exceed human performance in many narrow and specific domains, such as image recognition and certain types of medical diagnoses. They’re also rapidly improving in more complex domains such as generating eerily human-like text. We increasingly rely on machine learning algorithms to make decisions on a wide range of topics, from what we collectively spend billions of hours watching to who gets the job.
But machine learning algorithms cannot explain the decisions they make.
How can we justify putting these systems in charge of decisions that affect people’s lives if we don’t understand how they’re arriving at those decisions?
This desire to get more than raw numbers from machine learning algorithms has led to a renewed focus on explainable AI: algorithms that can make a decision or take an action, and tell you the reasons behind it.
What Makes You Say That?
In some circumstances, you can see a road to explainable AI already. Take OpenAI’s GPT-2 model, or IBM’s Project Debater. Both of these generate text based on a large corpus of training data, and try to make it as relevant as possible to the prompt that’s given. If these models were also able to provide a quick run-down of the top few sources in that training corpus they were drawing information from, it might be easier to understand where the “argument” (or poetic essay about unicorns) was coming from.
This is similar to the approach Google is now looking at for its image classifiers. Many algorithms are more sensitive to textures and the relationships between adjacent pixels in an image than to the outlines by which humans recognize objects. This leads to strange results: some algorithms can happily identify a totally scrambled image of a polar bear, but not a polar bear silhouette.
Previous attempts to make image classifiers explainable relied on significance mapping. In this method, the algorithm would highlight the areas of the image that contributed the most statistical weight to making the decision. This is usually determined by changing groups of pixels in the image and seeing which contribute to the biggest change in the algorithm’s impression of what the image is. For example, if the algorithm is trying to recognize a stop sign, changing the background is unlikely to be as important as changing the sign.
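The occlusion approach described above can be sketched in a few lines of Python. This is a toy illustration only, not any production implementation: the "model" here is a stand-in function that scores an image by the brightness of its center, mimicking a classifier that keys on a centered object.

```python
import numpy as np

def occlusion_saliency(model, image, patch=8, baseline=0.0):
    """Estimate a saliency map by sliding an occluding patch over the
    image and recording how much the model's score drops.

    model    -- callable mapping an HxW array to a scalar class score
    image    -- 2D numpy array (grayscale, for simplicity)
    patch    -- side length of the square occlusion patch
    baseline -- value the occluded region is filled with
    """
    h, w = image.shape
    original_score = model(image)
    saliency = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            # A big score drop means this region carried a lot of
            # statistical weight in the model's decision.
            saliency[y:y + patch, x:x + patch] = original_score - model(occluded)
    return saliency

# Stand-in "classifier": scores an image by the mean brightness of its
# center, mimicking a model that keys on a centered object.
def toy_model(img):
    return float(img[12:20, 12:20].mean())

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0           # bright "object" in the center
sal = occlusion_saliency(toy_model, img)
print(sal[14, 14] > sal[0, 0])    # True: center patches dominate the map
```

Occluding a background patch leaves the score unchanged, so its saliency is zero; occluding a patch overlapping the object drops the score, which is exactly the "changing groups of pixels" probe the text describes.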
Google’s new approach changes the way that its algorithm recognizes objects, by examining them at several different resolutions and searching for matches to different “sub-objects” within the main object. You or I might recognize an ambulance from its flashing lights, its tires, and its logo; we might zoom in on the basketball held by an NBA player to deduce their occupation, and so on. By linking the overall categorization of an image to these “concepts,” the algorithm can explain its decision: I categorized this as a cat because of its tail and whiskers.
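The concept-based explanation just described can be sketched as follows. Everything here is invented for illustration (the concept names, scores, and weights are made up, and a real system would learn its concept detectors from data rather than use a hand-written table); the point is only the shape of the computation: class score as a weighted sum of concept evidence, and the explanation as the top contributing concepts.

```python
# Hypothetical per-image concept detector outputs (all illustrative).
concept_scores = {"whiskers": 0.92, "tail": 0.81, "fur": 0.64, "wheels": 0.03}

# Illustrative weights linking concepts to categories.
class_weights = {
    "cat": {"whiskers": 0.5, "tail": 0.3, "fur": 0.2, "wheels": 0.0},
    "car": {"whiskers": 0.0, "tail": 0.0, "fur": 0.0, "wheels": 1.0},
}

def classify_and_explain(scores, weights, top_k=2):
    # Class score = weighted sum of concept evidence.
    totals = {cls: sum(w[c] * scores[c] for c in scores)
              for cls, w in weights.items()}
    best = max(totals, key=totals.get)
    # Explanation: the concepts that contributed most to the winning class.
    contribs = sorted(((weights[best][c] * scores[c], c) for c in scores),
                      reverse=True)
    reasons = [c for _, c in contribs[:top_k]]
    return best, reasons

label, reasons = classify_and_explain(concept_scores, class_weights)
print(label, reasons)   # cat ['whiskers', 'tail']
```

Because each class score decomposes into named concept contributions, the system can emit the kind of explanation quoted above: "I categorized this as a cat because of its tail and whiskers."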
Even in this experiment, though, the “psychology” of the algorithm in decision-making is counter-intuitive. For example, in the basketball case, the most important factor in making the decision was actually the player’s jerseys rather than the basketball.
Can You Explain What You Don’t Understand?
While it may seem trivial, the conflict here is a fundamental one in approaches to artificial intelligence. Namely, how far can you get with mere statistical associations between huge sets of data, and how much do you need to introduce abstract concepts for real intelligence to arise?
At one end of the spectrum, Good Old-Fashioned AI or GOFAI dreamed up machines that would be entirely based on symbolic logic. The machine would be hard-coded with the concept of a dog, a flower, a car, and so forth, alongside all of the symbolic “rules” which we internalize, allowing us to distinguish between dogs, flowers, and cars. (You can imagine a similar approach to a conversational AI would teach it words and strict grammatical structures from the top down, rather than “learning” languages from statistical associations between letters and words in training data, as GPT-2 broadly does.)
Such a system would be able to explain itself, because it would deal in high-level, human-understandable concepts. The equation is closer to: “ball” + “stitches” + “white” = “baseball”, rather than a set of millions of numbers linking various pathways together. There are elements of GOFAI in Google’s new approach to explaining its image recognition: the new algorithm can recognize objects based on the sub-objects they contain. To do this, it requires at least a rudimentary understanding of what those sub-objects look like, and the rules that link objects to sub-objects, such as “cats have whiskers.”
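The GOFAI-style equation above can be made concrete in a few lines. This is a deliberately toy sketch: the rules and attribute names are invented to mirror the "ball + stitches + white = baseball" example, and a real symbolic system would need vastly more rules (which is exactly the combinatorial problem discussed next).

```python
# Hand-coded symbolic rules mapping observed attributes to categories.
# (Illustrative only; every real-world category would need its own
# painstakingly authored rule.)
RULES = {
    "baseball": {"ball", "stitches", "white"},
    "cat": {"whiskers", "tail", "fur"},
    "car": {"wheels", "windshield", "doors"},
}

def classify(observed):
    """Return (label, explanation) for the first rule whose required
    attributes are all present in the observed set."""
    for label, required in RULES.items():
        if required <= observed:   # set containment: all attributes seen
            return label, f"{' + '.join(sorted(required))} = {label}"
    return None, "no rule matched"

label, why = classify({"ball", "stitches", "white", "round"})
print(label)  # baseball
print(why)    # ball + stitches + white = baseball
```

The appeal is obvious: the explanation falls straight out of the rule that fired, in human-readable terms, with no millions of opaque weights involved.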
The issue, of course, is the labor-intensive (and maybe impossible) task of defining all these symbolic concepts, and every conceivable rule that could link them together, by hand. The difficulty of creating systems like this, which could handle the “combinatorial explosion” present in reality, helped lead to the first AI winter.
Meanwhile, neural networks rely on training themselves on vast sets of data. Without the “labeling” of supervised learning, this process might bear no relation to any concepts a human could understand (and therefore be utterly inexplicable).
Somewhere between these two extremes, explainable AI enthusiasts hope, is a happy medium that can crunch colossal amounts of data, giving us all of the benefits that recent, neural-network AI has bestowed, while showing its working in terms that humans can understand.
Image Credit: Image by Seanbatty from Pixabay
#436123 A Path Towards Reasonable Autonomous ...
Editor’s Note: The debate on autonomous weapons systems has been escalating over the past several years as the underlying technologies evolve to the point where their deployment in a military context seems inevitable. IEEE Spectrum has published a variety of perspectives on this issue. In summary, while there is a compelling argument to be made that autonomous weapons are inherently unethical and should be banned, there is also a compelling argument to be made that autonomous weapons could potentially make conflicts less harmful, especially to non-combatants. Despite an increasing amount of international attention (including from the United Nations), progress towards consensus, much less regulatory action, has been slow. The following workshop paper on autonomous weapons systems policy is remarkable because it was authored by a group of experts with very different (and in some cases divergent) views on the issue. Even so, they were able to reach consensus on a roadmap that all agreed was worth considering. It’s collaborations like this that could be the best way to establish a reasonable path forward on such a contentious issue, and with the permission of the authors, we’re excited to be able to share this paper (originally posted on Georgia Tech’s Mobile Robot Lab website) with you in its entirety.
Autonomous Weapon Systems: A Roadmapping Exercise
Over the past several years, there has been growing awareness and discussion surrounding the possibility of future lethal autonomous weapon systems that could fundamentally alter humanity’s relationship with violence in war. Lethal autonomous weapons present a host of legal, ethical, moral, and strategic challenges. At the same time, artificial intelligence (AI) technology could be used in ways that improve compliance with the laws of war and reduce non-combatant harm. Since 2014, states have come together annually at the United Nations to discuss lethal autonomous weapons systems1. Additionally, a growing number of individuals and non-governmental organizations have become active in discussions surrounding autonomous weapons, contributing to a rapidly expanding intellectual field working to better understand these issues. While a wide range of regulatory options have been proposed for dealing with the challenge of lethal autonomous weapons, ranging from a preemptive, legally binding international treaty to reinforcing compliance with existing laws of war, there is as yet no international consensus on a way forward.
The lack of an international policy consensus, whether codified in a formal document or otherwise, poses real risks. States could fall victim to a security dilemma in which they deploy untested or unsafe weapons that pose risks to civilians or international stability. Widespread proliferation could enable illicit uses by terrorists, criminals, or rogue states. Alternatively, a lack of guidance on which uses of autonomy are acceptable could stifle valuable research that could reduce the risk of non-combatant harm.
International debate thus far has predominantly centered around whether or not states should adopt a preemptive, legally-binding treaty that would ban lethal autonomous weapons before they can be built. Some of the authors of this document have called for such a treaty and would heartily support it, if states were to adopt it. Other authors of this document have argued an overly expansive treaty would foreclose the possibility of using AI to mitigate civilian harm. Options for international action are not binary, however, and there are a range of policy options that states should consider between adopting a comprehensive treaty or doing nothing.
The purpose of this paper is to explore the possibility of a middle road. If a roadmap could garner sufficient stakeholder support to have significant beneficial impact, then what elements could it contain? The exercise whose results are presented below was not to identify recommendations that the authors each prefer individually (the authors hold a broad spectrum of views), but instead to identify those components of a roadmap that the authors are all willing to entertain2. We, the authors, invite policymakers to consider these components as they weigh possible actions to address concerns surrounding autonomous weapons3.
Summary of Issues Surrounding Autonomous Weapons
There are a variety of issues that autonomous weapons raise, which might lend themselves to different approaches. A non-exhaustive list of issues includes:
The potential for beneficial uses of AI and autonomy that could improve precision and reliability in the use of force and reduce non-combatant harm.
Uncertainty about the path of future technology and the likelihood of autonomous weapons being used in compliance with the laws of war, or international humanitarian law (IHL), in different settings and on various timelines.
A desire for some degree of human involvement in the use of force. This has been expressed repeatedly in UN discussions on lethal autonomous weapon systems in different ways.
Particular risks surrounding lethal autonomous weapons specifically targeting personnel as opposed to vehicles or materiel.
Risks regarding international stability.
Risk of proliferation to terrorists, criminals, or rogue states.
Risk that autonomous systems that have been verified to be acceptable can be made unacceptable through software changes.
The potential for autonomous weapons to be used as scalable weapons enabling a small number of individuals to inflict very large-scale casualties at low cost, either intentionally or accidentally.
Summary of Components
Adopt a time-limited moratorium on the development, deployment, transfer, and use of anti-personnel lethal autonomous weapon systems4. Such a moratorium could include exceptions for certain classes of weapons.
Define guiding principles for human involvement in the use of force.
Develop protocols and/or technological means to mitigate the risk of unintentional escalation due to autonomous systems.
Develop strategies for preventing proliferation to illicit uses, such as by criminals, terrorists, or rogue states.
Conduct research to improve technologies and human-machine systems to reduce non-combatant harm and ensure IHL compliance in the use of future weapons.
Component 1:
States should consider adopting a five-year, renewable moratorium on the development, deployment, transfer, and use of anti-personnel lethal autonomous weapon systems. Anti-personnel lethal autonomous weapon systems are defined as weapons systems that, once activated, can select and engage dismounted human targets without further intervention by a human operator, possibly excluding systems such as:
Fixed-point defensive systems with human supervisory control to defend human-occupied bases or installations
Limited, proportional, automated counter-fire systems that return fire in order to provide immediate, local defense of humans
Time-limited pursuit deterrent munitions or systems
Autonomous weapon systems above a specified explosive payload weight limit that select as targets hand-held weapons, such as rifles, machine guns, anti-tank weapons, or man-portable air defense systems, provided there is adequate protection for non-combatants and IHL compliance is ensured5
The moratorium would not apply to:
Anti-vehicle or anti-materiel weapons
Non-lethal anti-personnel weapons
Research on ways of improving autonomous weapon technology to reduce non-combatant harm in future anti-personnel lethal autonomous weapon systems
Weapons that find, track, and engage specific individuals whom a human has decided should be engaged within a limited predetermined period of time and geographic region
Motivation:
This moratorium would pause development and deployment of anti-personnel lethal autonomous weapons systems to allow states to better understand the systemic risks of their use and to perform research that improves their safety, understandability, and effectiveness. Particular objectives could be to:
ensure that, prior to deployment, anti-personnel lethal autonomous weapons can be used in ways that match or exceed human performance in compliance with IHL (other conditions may also apply before deployment is acceptable);
lay the groundwork for a potentially legally binding diplomatic instrument; and
decrease the geopolitical pressure on countries to deploy anti-personnel lethal autonomous weapons before they are reliable and well-understood.
Compliance Verification:
As part of a moratorium, states could consider various approaches to compliance verification. Potential approaches include:
Developing an industry cooperation regime analogous to that mandated under the Chemical Weapons Convention, whereby manufacturers must know their customers and report suspicious purchases of significant quantities of items such as fixed-wing drones, quadcopters, and other weaponizable robots.
Encouraging states to declare inventories of autonomous weapons for the purposes of transparency and confidence-building.
Facilitating scientific exchanges and military-to-military contacts to increase trust, transparency, and mutual understanding on topics such as compliance verification and safe operation of autonomous systems.
Designing control systems to require operator identity authentication and unalterable records of operation; enabling post-hoc compliance checks in case of plausible evidence of non-compliant autonomous weapon attacks.
Relating the quantity of weapons to corresponding capacities for human-in-the-loop operation of those weapons.
Designing weapons with air-gapped firing authorization circuits that are connected to the remote human operator but not to the on-board automated control system.
More generally, avoiding weapon designs that enable conversion from compliant to non-compliant categories or missions solely by software updates.
Designing weapons with formal proofs of relevant properties—e.g., the property that the weapon is unable to initiate an attack without human authorization. Proofs can, in principle, be provided using cryptographic techniques that allow the proofs to be checked by a third party without revealing any details of the underlying software.
Facilitating access to (non-classified) AI resources (software, data, methods for ensuring safe operation) for all states that remain in compliance and participate in transparency activities.
Component 2:
Define and universalize guiding principles for human involvement in the use of force.
Humans, not machines, are legal and moral agents in military operations.
It is a human responsibility to ensure that any attack, including one involving autonomous weapons, complies with the laws of war.
Humans responsible for initiating an attack must have sufficient understanding of the weapons, the targets, the environment and the context for use to determine whether that particular attack is lawful.
The attack must be bounded in space, time, target class, and means of attack in order for the determination about the lawfulness of that attack to be meaningful.
Militaries must invest in training, education, doctrine, policies, system design, and human-machine interfaces to ensure that humans remain responsible for attacks.
Component 3:
Develop protocols and/or technological means to mitigate the risk of unintentional escalation due to autonomous systems.
Specific potential measures include:
Developing safe rules for autonomous system behavior when in proximity to adversarial forces to avoid unintentional escalation or signaling. Examples include:
No-first-fire policy, so that autonomous weapons do not initiate hostilities without explicit human authorization.
A human must always be responsible for providing the mission for an autonomous system.
Taking steps to clearly distinguish exercises, patrols, reconnaissance, or other peacetime military operations from attacks in order to limit the possibility of reactions from adversary autonomous systems, such as autonomous air or coastal defenses.
Developing resilient communications links to ensure recallability of autonomous systems. Additionally, militaries should refrain from jamming others’ ability to recall their autonomous systems in order to afford the possibility of human correction in the event of unauthorized behavior.
Component 4:
Develop strategies for preventing proliferation to illicit uses, such as by criminals, terrorists, or rogue states:
Targeted multilateral controls to prevent large-scale sale and transfer of weaponizable robots and related military-specific components for illicit use.
Employ measures to render weaponizable robots less harmful (e.g., geofencing; hard-wired kill switch; onboard control systems largely implemented in unalterable, non-reprogrammable hardware such as application-specific integrated circuits).
Component 5:
Conduct research to improve technologies and human-machine systems to reduce non-combatant harm and ensure IHL-compliance in the use of future weapons, including:
Strategies to promote human moral engagement in decisions about the use of force
Risk assessment for autonomous weapon systems, including the potential for large-scale effects, geopolitical destabilization, accidental escalation, increased instability due to uncertainty about the relative military balance of power, and lowered thresholds for initiating conflict and for violence within conflict
Methodologies for ensuring the reliability and security of autonomous weapon systems
New techniques for verification, validation, explainability, characterization of failure conditions, and behavioral specifications.
About the Authors (in alphabetical order)
Ronald Arkin directs the Mobile Robot Laboratory at Georgia Tech.
Leslie Kaelbling is co-director of the Learning and Intelligent Systems Group at MIT.
Stuart Russell is a professor of computer science and engineering at UC Berkeley.
Dorsa Sadigh is an assistant professor of computer science and of electrical engineering at Stanford.
Paul Scharre directs the Technology and National Security Program at the Center for a New American Security (CNAS).
Bart Selman is a professor of computer science at Cornell.
Toby Walsh is a professor of artificial intelligence at the University of New South Wales (UNSW) Sydney.
The authors would like to thank Max Tegmark for organizing the three-day meeting from which this document was produced.
1 Autonomous Weapons System (AWS): A weapon system that, once activated, can select and engage targets without further intervention by a human operator.
2 There is no implication that some authors would not personally support stronger recommendations.
3 For ease of use, this working paper will frequently shorten “autonomous weapon system” to “autonomous weapon.” The terms should be treated as synonymous, with the understanding that “weapon” refers to the entire system: sensor, decision-making element, and munition.
4 Anti-personnel lethal autonomous weapon system: A weapon system that, once activated, can select and engage dismounted human targets with lethal force and without further intervention by a human operator.
5 The authors are not unanimous about this item because of concerns about ease of repurposing for mass-casualty missions targeting unarmed humans. The purpose of the lower limit on explosive payload weight would be to minimize the risk of such repurposing. There is precedent for using explosive weight limit as a mechanism of delineating between anti-personnel and anti-materiel weapons, such as the 1868 St. Petersburg Declaration Renouncing the Use, in Time of War, of Explosive Projectiles Under 400 Grammes Weight.
#434655 Purposeful Evolution: Creating an ...
More often than not, we fall into the trap of trying to predict and anticipate the future, forgetting that the future is up to us to envision and create. In the words of Buckminster Fuller, “We are called to be architects of the future, not its victims.”
But how, exactly, do we create a “good” future? What does such a future look like to begin with?
In Future Consciousness: The Path to Purposeful Evolution, Tom Lombardo analytically deconstructs how we can flourish in the flow of evolution and create a prosperous future for humanity. Scientifically informed, the book taps into themes that are constructive and profound, from both eastern and western philosophies.
As the executive director of the Center for Future Consciousness and an executive board member and fellow of the World Futures Studies Federation, Lombardo has dedicated his life and career to studying how we can create a “realistic, constructive, and ethical future.”
In a conversation with Singularity Hub, Lombardo discussed purposeful evolution, ethical use of technology, and the power of optimism.
Raya Bidshahri: Tell me more about the title of your book. What is future consciousness and what role does it play in what you call purposeful evolution?
Tom Lombardo: Humans have the unique capacity to purposefully evolve themselves because they possess future consciousness. Future consciousness contains all of the cognitive, motivational, and emotional aspects of the human mind that pertain to the future. It’s because we can imagine and think about the future that we can manipulate and direct our future evolution purposefully. Future consciousness empowers us to become self-responsible in our own evolutionary future. This is a jump in the process of evolution itself.
RB: In several places in the book, you discuss the importance of various eastern philosophies. What can we learn from the east that is often missing from western models?
TL: The key idea in the east that I have been intrigued by for decades is the Taoist Yin Yang, which is the idea that reality should be conceptualized as interdependent reciprocities.
In the west we think dualistically, or we attempt to think in terms of one end of the duality to the exclusion of the other, such as whole versus parts or consciousness versus physical matter. Yin Yang thinking is seeing how both sides of a “duality,” even though they appear to be opposites, are interdependent; you can’t have one without the other. You can’t have order without chaos, consciousness without the physical world, individuals without the whole, humanity without technology, and vice versa for all these complementary pairs.
RB: You talk about the importance of chaos and destruction in the trajectory of human progress. In your own words, “Creativity frequently involves destruction as a prelude to the emergence of some new reality.” Why is this an important principle for readers to keep in mind, especially in the context of today’s world?
TL: In order for there to be progress, there often has to be a disintegration of aspects of the old. Although progress and evolution involve a process of building up, growth isn’t entirely cumulative; it’s also transformative. Things fall apart and come back together again.
Throughout history, we have seen a transformation of what are the most dominant human professions or vocations. At some point, almost everybody worked in agriculture, but most of those agricultural activities were replaced by machines, and a lot of people moved over to industry. Now we’re seeing that jobs and functions are increasingly automated in industry, and humans are being pushed into vocations that involve higher cognitive and artistic skills, services, information technology, and so on.
RB: You raise valid concerns about the dark side of technological progress, especially when it’s combined with mass consumerism, materialism, and anti-intellectualism. How do we counter these destructive forces as we shape the future of humanity?
TL: We can counter such forces by always thoughtfully considering how our technologies are affecting the ongoing purposeful evolution of our conscious minds, bodies, and societies. We should ask ourselves what are the ethical values that are being served by the development of various technologies.
For example, we often hear the criticism that technologies driven by pure capitalism degrade human life and benefit only the few people who invent and market them. So we also need to think about what good these new technologies can serve. It’s what I mean when I talk about the “wise cyborg.” A wise cyborg is somebody who uses technology to serve wisdom, or values connected with wisdom.
RB: Creating an ideal future isn’t just about progress in technology, but also progress in morality. How do we decide what a “good” future is? What are some philosophical tools we can use to determine a code of ethics that is as objective as possible?
TL: Let’s keep in mind that ethics will always have some level of subjectivity. That being said, the way to determine a good future is to base it on the best theory of reality that we have, which is that we are evolutionary beings in an evolutionary universe and we are interdependent with everything else in that universe. Our ethics should acknowledge that we are fluid and interactive.
Hence, the “good” can’t be something static, and it can’t be something that pertains to me and not everybody else. It can’t be something that only applies to humans and ignores all other life on Earth, and it must be a mode of change rather than something stable.
RB: You present a consciousness-centered approach to creating a good future for humanity. What are some of the values we should develop in order to create a prosperous future?
TL: A sense of self-responsibility for the future is critical. This means realizing that the “good future” is something we have to take upon ourselves to create; we can’t let something or somebody else do that. We need to feel responsible both for our own futures and for the future around us.
Another one is going to be an informed and hopeful optimism about the future, because both optimism and pessimism have self-fulfilling prophecy effects. If you hope for the best, you are more likely to look deeply into your reality and increase the chance of it coming out that way. In fact, all of the positive emotions that have to do with future consciousness actually make people more intelligent and creative.
Some other important character virtues are discipline and tenacity, deep purpose, the love of learning and thinking, and creativity.
RB: Are you optimistic about the future? If so, what informs your optimism?
TL: I justify my optimism the same way that I have seen Ray Kurzweil, Peter Diamandis, Kevin Kelly, and Steven Pinker justify theirs. If we look at the history of human civilization and even the history of nature, we see a progressive motion forward toward greater complexity and ever greater intelligence. There are lots of ups and downs, and catastrophes along the way, but the facts of nature and human history support the long-term expectation of continued evolution into the future.
You don’t have to be unrealistic to be optimistic. It’s also, psychologically, the more empowering position. That’s the position we should take if we want to maximize the chances of our individual or collective reality turning out better.
A lot of pessimists are pessimistic because they’re afraid of the future. There are lots of reasons to be afraid, but all in all, fear disempowers, whereas hope empowers.
Image Credit: Quick Shot / Shutterstock.com