Philosophy

Humanity’s desire to create humanoid robots and androids “in its own image” raises many philosophical, moral and ethical issues.

Philosophy of Artificial Intelligence (A.I.)

This philosophy mainly attempts to answer these questions:

  1. Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
  2. Can a machine have a mind, mental states and consciousness in the same sense humans do? Can it feel?
  3. Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?

Other related questions include:

  1. Can a machine have emotions?
  2. Can a machine be self-aware?
  3. Can a machine be original or creative?
  4. Can a machine be benevolent or hostile?
  5. Can a machine have a soul?

[Source]

Robot Morality

Hypotheses like “the Singularity” (the hypothetical point at which machine intelligence surpasses human intelligence) or “self-awareness” (when humanoid robots attain the ability to think for themselves) raise various potential moral concerns.

In science fiction, words like sentience (the ability to feel or perceive), sapience (“wisdom”) and consciousness (the ability to experience “feeling” and “self”) have been used to explain and hypothesize about humanoid robot morality.

Common science fiction morality themes include the development of a master race of conscious and highly intelligent robots, motivated to take over or destroy the human race; humanoid robots that are programmed to kill and destroy; robots that gain superhuman intelligence and abilities by upgrading their own software and hardware; and the reaction, sometimes called the “uncanny valley”, of unease and even revulsion at the sight of robots that mimic humans too closely.

Sci-fi novels like Do Androids Dream of Electric Sheep? by Philip K. Dick (1968) also brought many of these moral quandaries into the mainstream consciousness.

Roboethics

Roboethics is the human-centered ethics guiding the design, construction and use of robots.

The first code of roboethics was arguably developed by Isaac Asimov in his Three Laws of Robotics (a short code sketch of their priority ordering follows the list):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
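
Purely as an illustration, the strict priority ordering of the Three Laws can be sketched in code. The following Python snippet is a minimal, hypothetical model: the Action fields and the permitted() check are invented for this example and do not come from any real robot control system.

from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical candidate action with its predicted consequences."""
    name: str
    harms_human: bool = False          # would the action injure a human?
    allows_human_harm: bool = False    # would it let a human come to harm through inaction?
    ordered_by_human: bool = False     # was the action ordered by a human?
    endangers_robot: bool = False      # would the action damage the robot itself?

def permitted(action: Action) -> bool:
    """Check a candidate action against the Three Laws in strict priority order."""
    # First Law: never injure a human or, through inaction, allow harm.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders unless they conflict with the First Law
    # (any such conflict was already rejected above).
    if action.ordered_by_human:
        return True
    # Third Law: protect the robot's own existence, subordinate to the first two laws.
    return not action.endangers_robot

# Example: an ordered action that would harm a human is rejected,
# even though the Second Law normally requires obedience.
print(permitted(Action("push bystander", harms_human=True, ordered_by_human=True)))  # False
print(permitted(Action("recharge battery")))                                         # True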

Since the quest for humanoid robots and androids marks the first time in human history that we are attempting to create intelligent, autonomous entities to co-exist with, it is important that these machines comply with fundamental human principles such as charters of human rights.

Roboethics also shares with other fields of science and technology most of the ethical problems arising from the Industrial Revolutions, such as the environmental impact of technology, the dehumanization of humans in their relationships with machines, and the anthropomorphization of machines.

[Source]
