If a Robot Is Conscious, Is It OK to ...
In the Star Trek: The Next Generation episode “The Measure of a Man,” Data, an android crew member of the Enterprise, is to be dismantled for research purposes unless Captain Picard can argue that Data deserves the same rights as a human being. Naturally the question arises: What is the basis upon which something has rights? What gives an entity moral standing?
The philosopher Peter Singer argues that creatures that can feel pain or suffer have a claim to moral standing. He argues that nonhuman animals have moral standing, since they can feel pain and suffer. Limiting moral standing to human beings would be a form of speciesism, something akin to racism and sexism.
Without endorsing Singer’s line of reasoning, we might wonder if it can be extended further to an android robot like Data. It would require that Data can either feel pain or suffer. And how you answer that depends on how you understand consciousness and intelligence.
As real artificial intelligence technology advances toward Hollywood’s imagined versions, the question of moral standing grows more important. If AIs have moral standing, philosophers like me reason, it could follow that they have a right to life. That means you cannot simply dismantle them, and might also mean that people shouldn’t interfere with their pursuing their goals.
Two Flavors of Intelligence and a Test
IBM’s Deep Blue chess machine was designed to beat grandmaster Garry Kasparov, and it succeeded. But it could not do anything else. This computer had what’s called domain-specific intelligence.
On the other hand, there’s the kind of intelligence that allows for the ability to do a variety of things well. It is called domain-general intelligence. It’s what lets people cook, ski, and raise children—tasks that are related, but also very different.
Artificial general intelligence, AGI, is the term for machines that have domain-general intelligence. Arguably no machine has yet demonstrated that kind of intelligence. This summer, a startup called OpenAI released a new version of its Generative Pre-trained Transformer language model. GPT-3 is a natural language processing system, trained to read and write text that people can easily understand.
It drew immediate notice, not just because of its impressive ability to mimic stylistic flourishes and put together plausible content, but also because of how far it had come from a previous version. Despite this impressive performance, GPT-3 doesn’t actually know anything beyond how to string words together in various ways. AGI remains quite far off.
Named after pioneering AI researcher Alan Turing, the Turing test helps determine whether an AI is intelligent. Can a person conversing with a hidden AI tell whether it’s an AI or a human being? If they can’t, then for all practical purposes, the AI is intelligent. But this test says nothing about whether the AI might be conscious.
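The test itself is just a conversation protocol. Here is a minimal sketch in Python of how such a setup could be wired together; the judge’s interface, the canned machine reply, and the sample question are hypothetical placeholders, not a real evaluation harness.

```python
import random

# Minimal sketch of the Turing test protocol: a judge converses over text
# with a hidden party and must guess whether it is a human or a machine.
# Both reply functions are hypothetical placeholders.

def human_reply(question):
    return input(f"(hidden human) {question}\n> ")

def machine_reply(question):
    return "That is something I have often wondered about myself."

def run_test(questions):
    hidden_is_machine = random.choice([True, False])
    respond = machine_reply if hidden_is_machine else human_reply
    for q in questions:
        print(f"Judge: {q}")
        print(f"Hidden party: {respond(q)}")
    guess = input("Judge's verdict, 'human' or 'machine': ").strip().lower()
    # The machine passes if it was the hidden party and went undetected.
    return hidden_is_machine and guess != "machine"

print("Machine passed:", run_test(["What do you dream about?"]))
```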
Two Kinds of Consciousness
There are two parts to consciousness. First, there’s the what-it’s-like-for-me aspect of an experience, the sensory part of consciousness. Philosophers call this phenomenal consciousness. It’s about how you experience a phenomenon, like smelling a rose or feeling pain.
In contrast, there’s also access consciousness. That’s the ability to report, reason, and act on stimuli in a coordinated, goal-directed way. For example, when I pass the soccer ball to my friend making a play on the goal, I am responding to visual stimuli, acting from prior training, and pursuing a goal determined by the rules of the game. I make the pass automatically, without conscious deliberation, in the flow of the game.
Blindsight nicely illustrates the difference between the two types of consciousness. Someone with this neurological condition might report, for example, that they cannot see anything in the left side of their visual field. But if asked to pick up a pen from an array of objects in the left side of their visual field, they can reliably do so. They cannot see the pen, yet they can pick it up when prompted—an example of access consciousness without phenomenal consciousness.
Data is an android. How do these distinctions play out with respect to him?
The Data Dilemma
The android Data demonstrates that he is self-aware in that he can monitor whether or not, for example, he is optimally charged or there is internal damage to his robotic arm.
Data is also intelligent in the general sense. He does a lot of distinct things at a high level of mastery. He can fly the Enterprise, take orders from Captain Picard, and reason with him about the best path to take.
He can also play poker with his shipmates, cook, discuss topical issues with close friends, fight with enemies on alien planets, and engage in various forms of physical labor. Data has access consciousness. He would clearly pass the Turing test.
However, Data most likely lacks phenomenal consciousness—he does not, for example, delight in the scent of roses or experience pain. He embodies a supersized version of blindsight. He’s self-aware and has access consciousness—can grab the pen—but across all his senses he lacks phenomenal consciousness.
Now, if Data doesn’t feel pain, at least one of the reasons Singer offers for giving a creature moral standing is not fulfilled. But Data might fulfill the other condition of being able to suffer, even without feeling pain. Suffering might not require phenomenal consciousness the way pain essentially does.
For example, what if suffering were also defined as being thwarted from pursuing a just cause without harming others? Suppose Data’s goal is to save his crewmate, but he can’t reach her because of damage to one of his limbs. Data’s reduction in functioning, which keeps him from saving his crewmate, is a kind of nonphenomenal suffering. He would have preferred to save the crewmate, and he would be better off if he had.
In the episode, the question ends up resting not on whether Data is self-aware—that is not in doubt. Nor is it in question whether he is intelligent—he easily demonstrates that he is in the general sense. What is unclear is whether he is phenomenally conscious. Data is not dismantled because, in the end, his human judges cannot agree on the significance of consciousness for moral standing.
Should an AI Get Moral Standing?
Data is kind; he acts to support the well-being of his crewmates and those he encounters on alien planets. He obeys orders from people and appears unlikely to harm them, and he seems to protect his own existence. For these reasons he appears peaceful, and it is easier to accept him into the realm of things that have moral standing.
But what about Skynet in the Terminator movies? Or the worries recently expressed by Elon Musk about AI being more dangerous than nukes, and by Stephen Hawking on AI ending humankind?
Human beings don’t lose their claim to moral standing just because they act against the interests of another person. In the same way, you can’t automatically say that just because an AI acts against the interests of humanity or another AI it doesn’t have moral standing. You might be justified in fighting back against an AI like Skynet, but that does not take away its moral standing. If moral standing is given in virtue of the capacity to nonphenomenally suffer, then Skynet and Data both get it even if only Data wants to help human beings.
There are no artificial general intelligence machines yet. But now is the time to consider what it would take to grant them moral standing. How humanity chooses to answer the question of moral standing for nonbiological creatures will have big implications for how we deal with future AIs—whether kind and helpful like Data, or set on destruction, like Skynet.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Ico Maker / Shutterstock.com
Moore’s Law Lives: Intel Says Chips ...
If you weren’t already convinced the digital world is taking over, you probably are now.
To keep the economy on life support as people stay home to stem the viral tide, we’ve been forced to digitize interactions at scale (for better and worse). Work, school, events, shopping, food, politics. The companies at the center of the digital universe are now powerhouses of the modern era—worth trillions and nearly impossible to avoid in daily life.
Six decades ago, this world didn’t exist.
A humble microchip in the early 1960s would have boasted a handful of transistors. Now, your laptop or smartphone runs on a chip with billions of transistors. As first described by Moore’s Law, this is possible because the number of transistors on a chip doubled with extreme predictability every two years for decades.
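The arithmetic behind that growth is simple compounding. A quick sketch in Python, where the 1961 starting count of four transistors is an illustrative assumption rather than a historical figure:

```python
# Moore's Law as back-of-the-envelope arithmetic: transistor counts double
# every two years. The 1961 starting count of 4 is illustrative only.
start_year, start_count = 1961, 4
for year in range(start_year, 2022, 10):
    doublings = (year - start_year) // 2
    print(f"{year}: ~{start_count * 2**doublings:,} transistors")
```

Thirty doublings take a handful of transistors in 1961 to roughly four billion by 2021, which is how chips went from dozens of transistors to billions.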
But now progress is faltering as the size of transistors approaches physical limits, and the money and time it takes to squeeze a few more onto a chip are growing. There’ve been many predictions that Moore’s Law is, finally, ending. But, perhaps also predictably, the company whose founder coined Moore’s Law begs to differ.
In a keynote presentation at this year’s Hot Chips conference, Intel’s chief architect, Raja Koduri, laid out a roadmap to increase transistor density—that is, the number of transistors you can fit on a chip—by a factor of 50.
“We firmly believe there is a lot more transistor density to come,” Koduri said. “The vision will play out over time—maybe a decade or more—but it will play out.”
Why the optimism?
Calling the end of Moore’s Law is a bit of a tradition. As Peter Lee, vice president at Microsoft Research, quipped to The Economist a few years ago, “The number of people predicting the death of Moore’s Law doubles every two years.” To date, prophets of doom have been premature, and though the pace is slowing, the industry continues to dodge death with creative engineering.
Koduri believes the trend will continue this decade and outlined the upcoming chip innovations Intel thinks can drive more gains in computing power.
Keeping It Traditional
First, engineers can further shrink today’s transistors. Fin field effect transistors (or FinFET) first hit the scene in the 2010s and have since pushed chip features past 14 and 10 nanometers (or nodes, as such size checkpoints are called). Koduri said FinFET will again triple chip density before it’s exhausted.
The Next Generation
FinFET will hand the torch off to nanowire transistors (also known as gate-all-around transistors).
Here’s how they’ll work. A transistor is made up of three basic components: the source, where current enters; the gate and channel, where current selectively flows; and the drain, where current exits. The gate is like a light switch. It controls how much current flows through the channel. A transistor is “on” when the gate allows current to flow and “off” when it doesn’t. The smaller transistors get, the harder it is to control that current.
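A toy model makes the switch analogy concrete; the threshold voltage below is an illustrative number, not a real device parameter, and real transistors behave far less cleanly, especially as they shrink.

```python
# Toy model of a transistor as a voltage-controlled switch: current passes
# from source to drain only while the gate voltage is above a threshold.
THRESHOLD_VOLTS = 0.7  # illustrative value, not a real device parameter

def drain_current(gate_volts, source_amps):
    """Idealized switch: pass the source current when the gate is 'on'."""
    return source_amps if gate_volts >= THRESHOLD_VOLTS else 0.0

print(drain_current(1.0, 5e-6))  # gate on  -> 5e-06 (current flows)
print(drain_current(0.2, 5e-6))  # gate off -> 0.0   (no current)
```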
FinFET maintained fine control of current by surrounding the channel with a gate on three sides. Nanowire designs kick that up a notch by surrounding the channel with a gate on four sides (hence, gate-all-around). They’ve been in the works for years and are expected around 2025. Koduri said first-generation nanowire transistors will be followed by stacked nanowire transistors, and together, they’ll quadruple transistor density.
Building Up
Growing transistor density won’t only be about shrinking transistors, but also going 3D.
This is akin to how skyscrapers increase a city’s population density by adding more usable space on the same patch of land. Along those lines, Intel recently launched its Foveros chip design. Instead of laying a chip’s various “neighborhoods” next to each other in a 2D silicon sprawl, they’ve stacked them on top of each other like a layer cake. Chip stacking isn’t entirely new, but it’s advancing and being applied to general purpose CPUs, like the chips in your phone and laptop.
Koduri said 3D chip stacking will quadruple transistor density.
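Taken together, the multipliers Koduri cited compound to roughly the 50x density target from his keynote; a quick sanity check:

```python
# Compounding the density gains Koduri outlined: FinFET refinement (~3x),
# nanowire and stacked-nanowire transistors (~4x), and 3D stacking (~4x).
finfet, nanowire, stacking_3d = 3, 4, 4
print(finfet * nanowire * stacking_3d)  # 48, roughly the promised 50x
```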
A Self-Fulfilling Prophecy
The technologies Koduri outlines are an evolution of the same general technology in use today. That is, we don’t need quantum computing or nanotube transistors to augment or replace silicon chips yet. Rather, as it’s done many times over the years, the chip industry will get creative with the design of its core product to realize gains for another decade.
Last year, veteran chip engineer Jim Keller, who at the time was Intel’s head of silicon engineering but has since left the company, told MIT Technology Review there are over 100 variables driving Moore’s Law (including 3D architectures and new transistor designs). From the standpoint of pure performance, it’s also about how efficiently software uses all those transistors. Keller suggested that with some clever software tweaks “we could get chips that are a hundred times faster in 10 years.”
But whether Intel’s vision pans out as planned is far from certain.
Intel’s faced challenges recently, taking five years instead of two to move its chips from 14 nanometers to 10 nanometers. After a delay of six months for its 7-nanometer chips, it’s now a year behind schedule and lagging other makers who already offer 7-nanometer chips. This is a key point. Yes, chipmakers continue making progress, but it’s getting harder, more expensive, and timelines are stretching.
The question isn’t whether Intel and its competitors can cram more transistors onto a chip—even Intel rival TSMC agrees it’s clearly possible—but how long it will take and at what cost.
That said, demand for more computing power isn’t going anywhere.
Amazon, Microsoft, Alphabet, Apple, and Facebook now make up a whopping 20 percent of the stock market’s total value. By that metric, tech is the most dominant industry in at least 70 years. And new technologies—from artificial intelligence and virtual reality to a proliferation of Internet of Things devices and self-driving cars—will demand better chips.
There’s ample motivation to push computing to its bitter limits and beyond. As is often said, Moore’s Law is a self-fulfilling prophecy, and likely whatever comes after it will be too.
Image credit: Laura Ockel / Unsplash