If a Robot Is Conscious, Is It OK to ...
In the Star Trek: The Next Generation episode “The Measure of a Man,” Data, an android crew member of the Enterprise, is to be dismantled for research purposes unless Captain Picard can argue that Data deserves the same rights as a human being. Naturally the question arises: What is the basis upon which something has rights? What gives an entity moral standing?
The philosopher Peter Singer argues that creatures that can feel pain or suffer have a claim to moral standing. He argues that nonhuman animals have moral standing, since they can feel pain and suffer; limiting moral standing to humans would be a form of speciesism, something akin to racism and sexism.
Without endorsing Singer’s line of reasoning, we might wonder whether it can be extended further, to an android like Data. Doing so would require that Data can either feel pain or suffer. And how you answer that depends on how you understand consciousness and intelligence.
As real artificial intelligence technology advances toward Hollywood’s imagined versions, the question of moral standing grows more important. If AIs have moral standing, philosophers like me reason, it could follow that they have a right to life. That means you cannot simply dismantle them, and might also mean that people shouldn’t interfere with their pursuing their goals.
Two Flavors of Intelligence and a Test
IBM’s Deep Blue chess machine famously beat grandmaster Garry Kasparov. But it could not do anything else. This computer had what’s called domain-specific intelligence.
On the other hand, there’s the kind of intelligence that allows for the ability to do a variety of things well. It is called domain-general intelligence. It’s what lets people cook, ski, and raise children—tasks that are related, but also very different.
Artificial general intelligence, AGI, is the term for machines that have domain-general intelligence. Arguably no machine has yet demonstrated that kind of intelligence. This summer, a startup called OpenAI released a new version of its Generative Pre-trained Transformer language model. GPT-3 is a natural language processing system, trained to read and write text that people can easily understand.
It drew immediate notice, not just because of its impressive ability to mimic stylistic flourishes and put together plausible content, but also because of how far it had come from a previous version. Despite this impressive performance, GPT-3 doesn’t actually know anything beyond how to string words together in various ways. AGI remains quite far off.
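GPT-3’s internals are far more sophisticated, but the core mechanic of language models is next-token prediction: choose a plausible next word given the words so far. The toy bigram sampler below is a hedged sketch of that idea with a made-up corpus, nothing resembling OpenAI’s actual system; it shows how text can be strung together from co-occurrence statistics alone, with no understanding behind it.

```python
import random
from collections import defaultdict

# Toy bigram language model: for each word, remember which words have
# followed it, then generate text by repeatedly sampling a follower.
# The corpus is invented purely for illustration.
corpus = ("the android can fly the ship and the android can play poker "
          "and the ship can fly").split()

following = defaultdict(list)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word].append(next_word)

def generate(seed: str, length: int = 10) -> str:
    """String words together by sampling statistically likely continuations."""
    words = [seed]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # dead end: this word never appeared mid-corpus
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the android can play poker and the ship can fly"
```

The output is locally plausible and globally meaningless, which is the article’s point about merely stringing words together, writ small.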
Named after pioneering AI researcher Alan Turing, the Turing test helps determine whether an AI is intelligent. Can a person conversing with a hidden AI tell whether it’s an AI or a human being? If they can’t, then for all practical purposes, the AI is intelligent. But this test says nothing about whether the AI might be conscious.
Two Kinds of Consciousness
There are two parts to consciousness. First, there’s the what-it’s-like-for-me aspect of an experience, the sensory part of consciousness. Philosophers call this phenomenal consciousness. It’s about how you experience a phenomenon, like smelling a rose or feeling pain.
In contrast, there’s also access consciousness. That’s the ability to report, reason, and respond to stimuli in a coordinated manner, guided by goals. For example, when I pass the soccer ball to my friend making a play on the goal, I am responding to visual stimuli, acting from prior training, and pursuing a goal determined by the rules of the game. I make the pass automatically, without conscious deliberation, in the flow of the game.
Blindsight nicely illustrates the difference between the two types of consciousness. Someone with this neurological condition might report, for example, that they cannot see anything in the left side of their visual field. But if asked to pick up a pen from an array of objects in the left side of their visual field, they can reliably do so. They cannot see the pen, yet they can pick it up when prompted—an example of access consciousness without phenomenal consciousness.
Data is an android. How do these distinctions play out with respect to him?
The Data Dilemma
The android Data demonstrates that he is self-aware in that he can monitor whether or not, for example, he is optimally charged or there is internal damage to his robotic arm.
Data is also intelligent in the general sense. He does a lot of distinct things at a high level of mastery. He can fly the Enterprise, take orders from Captain Picard and reason with him about the best path to take.
He can also play poker with his shipmates, cook, discuss topical issues with close friends, fight with enemies on alien planets, and engage in various forms of physical labor. Data has access consciousness. He would clearly pass the Turing test.
However, Data most likely lacks phenomenal consciousness—he does not, for example, delight in the scent of roses or experience pain. He embodies a supersized version of blindsight. He’s self-aware and has access consciousness—can grab the pen—but across all his senses he lacks phenomenal consciousness.
Now, if Data doesn’t feel pain, at least one of the reasons Singer offers for giving a creature moral standing is not fulfilled. But Data might fulfill the other condition of being able to suffer, even without feeling pain. Suffering might not require phenomenal consciousness the way pain essentially does.
For example, what if suffering were also defined as being thwarted in the pursuit of a just cause that harms no one? Suppose Data’s goal is to save his crewmate, but he can’t reach her because of damage to one of his limbs. Data’s reduction in functioning, which keeps him from saving his crewmate, is a kind of nonphenomenal suffering. He would have preferred to save the crewmate, and would be better off if he did.
In the episode, the question ends up resting not on whether Data is self-aware—that is not in doubt. Nor is it in question whether he is intelligent—he easily demonstrates that he is in the general sense. What is unclear is whether he is phenomenally conscious. Data is not dismantled because, in the end, his human judges cannot agree on the significance of consciousness for moral standing.
Should an AI Get Moral Standing?
Data is kind; he acts to support the well-being of his crewmates and those he encounters on alien planets. He obeys orders from people and appears unlikely to harm them, and he seems to protect his own existence. For these reasons he appears peaceful and easier to accept into the realm of things that have moral standing.
But what about Skynet in the Terminator movies? Or the worries recently expressed by Elon Musk about AI being more dangerous than nukes, and by Stephen Hawking on AI ending humankind?
Human beings don’t lose their claim to moral standing just because they act against the interests of another person. In the same way, you can’t automatically say that just because an AI acts against the interests of humanity or another AI it doesn’t have moral standing. You might be justified in fighting back against an AI like Skynet, but that does not take away its moral standing. If moral standing is given in virtue of the capacity to nonphenomenally suffer, then Skynet and Data both get it even if only Data wants to help human beings.
There are no artificial general intelligence machines yet. But now is the time to consider what it would take to grant them moral standing. How humanity chooses to answer the question of moral standing for nonbiological creatures will have big implications for how we deal with future AIs—whether kind and helpful like Data, or set on destruction, like Skynet.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Ico Maker / Shutterstock.com
How Giving Robots a Hybrid, Human-Like ...
Squeezing a lot of computing power into robots without using up too much space or energy is a constant battle for their designers. But a new approach that mimics the structure of the human brain could provide a workaround.
The capabilities of most of today’s mobile robots are fairly rudimentary, but giving them the smarts to do their jobs is still a serious challenge. Controlling a body in a dynamic environment takes a surprising amount of processing power, which requires both real estate for chips and considerable amounts of energy to power them.
As robots get more complex and capable, those demands are only going to increase. Today’s most powerful AI systems run in massive data centers across far more chips than can realistically fit inside a machine on the move. And the slow death of Moore’s Law suggests we can’t rely on conventional processors getting significantly more efficient or compact anytime soon.
That prompted a team from the University of Southern California to resurrect an idea from more than 40 years ago: mimicking the human brain’s division of labor between two complementary structures. While the cerebrum is responsible for higher cognitive functions like vision, hearing, and thinking, the cerebellum integrates sensory data and governs movement, balance, and posture.
When the idea was first proposed, the technology to make it a reality didn’t exist. But in a paper recently published in Science Robotics, the researchers describe a hybrid system that combines analog circuits that control motion with digital circuits that govern perception and decision-making in an inverted pendulum robot.
“Through this cooperation of the cerebrum and the cerebellum, the robot can conduct multiple tasks simultaneously with a much shorter latency and lower power consumption,” write the researchers.
The type of robot the researchers experimented with looks essentially like a pole balancing on a pair of wheels. Such robots have a broad range of applications, from hoverboards to warehouse logistics; Boston Dynamics’ recently unveiled Handle robot operates on the same principles. Keeping them stable is notoriously tough, but the new approach significantly outperformed all-digital control schemes by radically improving the speed and efficiency of the underlying computations.
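To get a feel for what the control loop must do on every cycle, here is a minimal sketch of a conventional digital balance controller for such a robot. It is an illustration, not the controller from the paper: the PID gains are made-up assumptions, and the 3,000-microsecond timestep matches the digital loop time reported further down.

```python
# Hypothetical PID balance loop for a two-wheeled inverted pendulum.
# Gains are illustrative assumptions; a real robot would tune them experimentally.
KP, KI, KD = 40.0, 1.0, 2.0
DT = 0.003  # seconds per cycle (the 3,000-microsecond digital loop)

integral = 0.0
previous_error = 0.0

def balance_step(tilt_angle: float) -> float:
    """One control cycle: map measured tilt (radians) to a motor command."""
    global integral, previous_error
    error = -tilt_angle               # the target is perfectly upright, angle = 0
    integral += error * DT            # accumulated error corrects steady drift
    derivative = (error - previous_error) / DT  # rate term damps oscillation
    previous_error = error
    return KP * error + KI * integral + KD * derivative
```

Every microsecond spent inside this loop is time the robot drifts uncorrected, which is why cutting the loop time matters so much.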
Key to bringing the idea to life was the recent emergence of memristors: electrical components whose resistance depends on the history of the inputs they’ve received, which allows them to combine computing and memory in one place, much as biological neurons do.
The researchers used memristors to build an analog circuit that runs an algorithm for integrating data from the robot’s accelerometer and gyroscope, which is crucial for detecting the angle and velocity of its body, and another circuit that controls its motion. One key advantage of this setup is that the signals from the sensors are already analog, so it does away with the extra circuitry needed to convert them into digital signals, saving both space and power.
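The paper’s analog circuit performs this sensor fusion in hardware. In software, a standard way to merge gyroscope and accelerometer readings into a tilt estimate is a complementary filter; the sketch below is the textbook version of that technique, with an assumed blend factor and timestep, not the researchers’ own algorithm.

```python
import math

ALPHA = 0.98  # trust the gyro on short timescales, the accelerometer on long ones
DT = 0.003    # seconds between sensor samples (illustrative assumption)

angle = 0.0   # running tilt estimate, radians

def fuse(gyro_rate: float, accel_x: float, accel_z: float) -> float:
    """Blend one gyro reading (rad/s) and one accelerometer sample into a tilt estimate."""
    global angle
    accel_angle = math.atan2(accel_x, accel_z)  # tilt implied by gravity's direction
    # Integrate the gyro for responsiveness; lean on the accelerometer to cancel drift.
    angle = ALPHA * (angle + gyro_rate * DT) + (1 - ALPHA) * accel_angle
    return angle
```

The gyro alone drifts and the accelerometer alone is noisy; blending them yields an estimate that is both stable and responsive.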
More importantly, though, the analog system is an order of magnitude faster and more energy-efficient than a standard all-digital system, the authors report. This not only slashes power requirements, it also cuts the processing loop from 3,000 microseconds to just 6. That significantly improves the robot’s stability: it takes just one second to settle into a steady state, compared with more than three seconds on the digital-only platform.
For now this is just a proof of concept. The robot the researchers built is small and rudimentary, and the algorithms running on the analog circuit are fairly basic. But the principle is a promising one, and a huge amount of R&D is currently going into neuromorphic and memristor-based analog computing hardware.
As often turns out to be the case, it seems like we can’t go too far wrong by mimicking the best model of computation we have found so far: our own brains.
Image Credit: Photos Hobby / Unsplash
The Global Work Crisis: Automation, the ...
The alarm bell rings. You open your eyes, come to your senses, and slide from dream state to consciousness. You hit the snooze button, and eventually crawl out of bed to the start of yet another working day.
This daily narrative is experienced by billions of people all over the world. We work, we eat, we sleep, and we repeat. As our lives pass day by day, the beating drums of the weekly routine take over and years pass until we reach our goal of retirement.
A Crisis of Work
We repeat the routine so that we can pay our bills, set our kids up for success, and provide for our families. And after a while, we start to forget what we would do with our lives if we didn’t have to go back to work.
In the end, we look back at our careers and reflect on what we’ve achieved. It may have been the hundreds of human interactions we’ve had; the thousands of emails read and replied to; the millions of minutes of physical labor—all to keep the global economy ticking along.
According to Gallup’s World Poll, only 15 percent of people worldwide are actually engaged with their jobs. The current state of “work” is not working for most people. In fact, it seems we as a species are trapped by a global work crisis, which condemns people to cast away their time just to get by in their day-to-day lives.
Technologies like artificial intelligence and automation may help relieve the work burdens of millions of people—but to benefit from their impact, we need to start changing our social structures and the way we think about work now.
The Specter of Automation
Automation has been ongoing since the Industrial Revolution. In recent decades it has taken on a more elegant guise, first with physical robots in production plants, and more recently with software automation entering most offices.
The driving goal behind much of this automation has always been productivity, and hence profits: technology that can act as a multiplier on what a single human can achieve in a day is of huge value to any company. Powered by this strong financial incentive, the quest for automation is growing ever more pervasive.
But if automation accelerates or even continues at its current pace and there aren’t strong social safety nets in place to catch the people who are negatively impacted (such as by losing their jobs), there could be a host of knock-on effects, including more concentrated wealth among a shrinking elite, more strain on government social support, an increase in depression and drug dependence, and even violent social unrest.
It seems as though we are rushing headlong into a major crisis, driven by the engine of accelerating automation. But what if instead of automation challenging our fragile status quo, we view it as the solution that can free us from the shackles of the Work Crisis?
The Way Out
In order to undertake this paradigm shift, we need to consider what society could potentially look like, as well as the problems associated with making this change. In the context of these crises, our primary aim should be a system where people are not obligated to work to generate the means to survive. This removal of work should not threaten access to food, water, shelter, education, healthcare, energy, or human value. In our current system, work is the gatekeeper to these essentials: one can only access them (and even then often in a limited form) if one has a “job” that affords them.
Changing this system is thus a monumental task. This comes with two primary challenges: providing people without jobs with financial security, and ensuring they maintain a sense of their human value and worth. There are several measures that could be implemented to help meet these challenges, each with important steps for society to consider.
Universal basic income (UBI)
UBI is rapidly gaining support, and it would allow people to become shareholders in the fruits of automation, which would then be distributed more broadly.
UBI trials have been conducted in various countries around the world, including Finland, Kenya, and Spain. The findings have generally shown positive effects on participants’ health and well-being, and no evidence that UBI disincentivizes work, a common concern among the idea’s critics. The most recent popular voice for UBI has been that of former US presidential candidate Andrew Yang, who now runs a non-profit called Humanity Forward.
UBI could also remove wasteful bureaucracy from administering welfare payments (since everyone receives the same amount, there’s no need to prevent false claims), promote the pursuit of projects aligned with people’s skill sets and passions, and help quantify the value of tasks not recognized by economic measures like Gross Domestic Product (GDP), such as looking after children and the elderly at home.
How a UBI could win the political will and social backing to be initiated, and how governments would pay for it, have been hotly debated by economists and UBI enthusiasts. Variables like how large the payments should be, whether to implement taxes such as Yang’s proposed value-added tax (VAT), whether to replace existing welfare payments, the impact on inflation, and the impact on “jobs” from people who would otherwise look for work all require additional discussion. However, some have predicted that automation will make UBI inevitable.
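To make one of those variables concrete, here is a hedged back-of-envelope calculation of gross cost. The population and payment figures are illustrative assumptions (roughly in line with Yang’s $1,000-a-month proposal), not a costed policy.

```python
# Rough gross-cost arithmetic for a hypothetical US-style UBI.
# Both inputs are illustrative assumptions.
adults = 250_000_000      # approximate US adult population
monthly_payment = 1_000   # dollars per adult per month

gross_annual_cost = adults * monthly_payment * 12
print(f"Gross annual cost: ${gross_annual_cost / 1e12:.1f} trillion")  # ~$3.0 trillion
```

The net figure would be lower, since replaced welfare programs, new revenue such as a VAT, and money flowing back through the economy all offset part of the headline number; how much they offset is precisely what the debate is about.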
Universal healthcare
Another major component of any society is the healthcare of its citizens. A move away from work would further require the implementation of a universal healthcare system to decouple healthcare from jobs. Currently in the US, and indeed many other economies, healthcare is tied to employment.
Universal healthcare systems such as Australia’s Medicare lend weight to the adage that “prevention is better than cure”: compare the per capita cost of healthcare in the US with that in Australia. A healthier population brings further benefits too, including less time and money spent on “sick-care.” Healthy people are more likely, and more able, to achieve their full potential.
Reshape the economy away from work-based value
One of the greatest challenges in a departure from work is for people to find value elsewhere in life. Many people view their identities as being inextricably tied to their jobs, and life without a job is therefore a threat to one’s sense of existence. This presents a shift that must be made at both a societal and personal level.
A person can only seek alternate value in life when afforded the time to do so. To this end, we need to start reducing “work-for-a-living” hours towards zero, which is a trend we are already seeing in Europe. This should not come at the cost of reducing wages pro rata, but rather could be complemented by UBI or additional schemes where people receive dividends for work done by automation. This transition makes even more sense when coupled with the idea of deviating from using GDP as a measure of societal growth, and instead adopting a well-being index based on universal human values like health, community, happiness, and peace.
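As a toy illustration of what such a well-being index might look like, the sketch below collapses scores on a few dimensions into a single number. The dimensions and weights are purely illustrative assumptions; real efforts, such as the OECD’s Better Life Index, choose and weight their dimensions far more carefully.

```python
# Hypothetical composite well-being index: a weighted average of
# 0-10 scores on dimensions the article names. Weights are assumptions.
WEIGHTS = {"health": 0.3, "community": 0.25, "happiness": 0.25, "peace": 0.2}

def wellbeing_index(scores: dict) -> float:
    """Collapse per-dimension scores into one 0-10 index."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

print(wellbeing_index({"health": 7.5, "community": 6.0, "happiness": 8.0, "peace": 9.0}))
# -> 7.55
```

The hard part, of course, is not the arithmetic but agreeing on what to measure and how much each dimension should count.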
The crux of this issue is in transitioning away from the view that work gives life meaning and life is about using work to survive, towards a view of living a life that itself is fulfilling and meaningful. This speaks directly to notions from Maslow’s hierarchy of needs, where work largely addresses psychological and safety needs such as shelter, food, and financial well-being. More people should have a chance to grow beyond the most basic needs and engage in self-actualization and transcendence.
The question is largely around what would provide people with a sense of value, and the answers would differ as much as people do: self-mastery, building relationships and contributing to community growth, fostering creativity, and even engaging in the enjoyable aspects of existing jobs could all come into play.
Universal education
With a move towards a society that promotes the values of living a good life, the education system would have to evolve as well. Researchers have long argued for a more nimble education system, but universities and even most online courses currently exist for the dominant purpose of ensuring people are adequately skilled to contribute to the economy. These “job factories” only exacerbate the Work Crisis. In fact, the response often given by educational institutions to the challenge posed by automation is to find new ways of upskilling students, such as ensuring they are all able to code. As alluded to earlier, this is a limited and unimaginative solution to the problem we are facing.
Instead, education should be centered on helping people acknowledge the current crisis of work and automation, teach them how to derive value that is decoupled from work, and enable people to embrace progress as we transition to the new economy.
Disrupting the Status Quo
While we seldom stop to think about it, much of the suffering faced by humanity is brought about by the systemic foe that is the Work Crisis. The way we think about work has brought society far and enabled tremendous developments, but at the same time it has failed many people. Now the status quo is threatened by those very developments as we progress to an era where machines are likely to take over many job functions.
This impending paradigm shift could be a threat to the stability of our fragile system, but only if it is not fully anticipated. If we prepare for it appropriately, it could instead be the key not just to our survival, but to a better future for all.
Image Credit: mostafa meraji from Pixabay
Cars Will Soon Be Able to Sense and ...
Imagine you’re on your daily commute to work, driving along a crowded highway while trying to resist looking at your phone. You’re already a little stressed out because you didn’t sleep well, woke up late, and have an important meeting in a couple of hours; you just don’t feel like your best self.
Suddenly another car cuts you off, coming way too close to your front bumper as it changes lanes. Your already-simmering emotions leap into overdrive, and you lay on the horn and shout curses no one can hear.
Except someone—or, rather, something—can hear: your car. Hearing your angry words, aggressive tone, and raised voice, and seeing your furrowed brow, the onboard computer goes into “soothe” mode, as it’s been programmed to do when it detects that you’re angry. It plays relaxing music at just the right volume, releases a puff of light lavender-scented essential oil, and maybe even says some meditative quotes to calm you down.
What do you think—creepy? Helpful? Awesome? Weird? Would you actually calm down, or get even more angry that a car is telling you what to do?
Scenarios like this (maybe without the lavender oil part) may not be imaginary for much longer, especially if companies working to integrate emotion-reading artificial intelligence into new cars have their way. And it wouldn’t just be a matter of your car soothing you when you’re upset—depending what sort of regulations are enacted, the car’s sensors, camera, and microphone could collect all kinds of data about you and sell it to third parties.
Computers and Feelings
Just as AI systems can be trained to tell the difference between a picture of a dog and one of a cat, they can learn to differentiate between an angry tone of voice or facial expression and a happy one. In fact, there’s a whole branch of machine intelligence devoted to creating systems that can recognize and react to human emotions; it’s called affective computing.
Emotion-reading AIs learn what different emotions look and sound like from large sets of labeled data; “smile = happy,” “tears = sad,” “shouting = angry,” and so on. The most sophisticated systems can likely even pick up on the micro-expressions that flash across our faces before we consciously have a chance to control them, as detailed by Daniel Goleman in his groundbreaking book Emotional Intelligence.
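In code, that recipe is ordinary supervised learning: labeled examples in, a classifier out. The sketch below is a deliberately tiny, hedged stand-in with two made-up voice features and a handful of samples; production systems like Affectiva’s train on millions of labeled face videos with far richer features.

```python
from sklearn.linear_model import LogisticRegression

# Toy emotion classifier. Features are assumed voice measurements:
# [pitch in Hz, loudness in dB]. Labels follow the "shouting = angry"
# pattern described above. All values are invented for illustration.
X = [
    [220, 60],  # calm, moderate speech
    [210, 55],
    [320, 80],  # raised pitch and volume
    [340, 85],
]
y = ["happy", "happy", "angry", "angry"]

model = LogisticRegression().fit(X, y)
print(model.predict([[330, 82]]))  # expected: ['angry']
```

Real systems differ mainly in scale and feature engineering, not in this basic shape: the model only ever learns the statistical signature of an emotion’s outward expression, never the feeling itself.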
Affective computing company Affectiva, a spinoff from MIT Media Lab, says its algorithms are trained on 5,313,751 face videos (videos of people’s faces as they do an activity, have a conversation, or react to stimuli) representing about 2 billion facial frames. Fascinatingly, Affectiva claims its software can even account for cultural differences in emotional expression (for example, it’s more normalized in Western cultures to be very emotionally expressive, whereas Asian cultures tend to favor stoicism and politeness), as well as gender differences.
But Why?
As reported in Motherboard, companies like Affectiva, Cerence, Xperi, and Eyeris have plans in the works to partner with automakers and install emotion-reading AI systems in new cars. Regulations passed last year in Europe and a bill just introduced this month in the US Senate are helping make the idea of “driver monitoring” less weird, mainly by emphasizing the safety benefits of preemptive warning systems for tired or distracted drivers (remember that part in the beginning about sneaking glances at your phone? Yeah, that).
Drowsiness and distraction can’t really be called emotions, though—so why are they being lumped under an umbrella that has a lot of other implications, including what many may consider an eerily Big Brother-esque violation of privacy?
Our emotions, in fact, are among the most private things about us, since we are the only ones who know their true nature. We’ve developed the ability to hide and disguise our emotions, and this can be a useful skill at work, in relationships, and in scenarios that require negotiation or putting on a game face.
And I don’t know about you, but I’ve had more than one good cry in my car. It’s kind of the perfect place for it: private, secluded, soundproof.
Putting systems into cars that can recognize and collect data about our emotions under the guise of preventing accidents caused by distraction or drowsiness, then, seems a bit like a bait and switch.
A Highway to Privacy Invasion?
European regulations will help keep driver data from being used for any purpose other than ensuring a safer ride. But the US is lagging behind on the privacy front, with car companies largely free from any enforceable laws that would keep them from using driver data as they please.
Affectiva lists the following as use cases for occupant monitoring in cars: personalizing content recommendations, providing alternate route recommendations, adapting environmental conditions like lighting and heating, and understanding user frustration with virtual assistants and designing those assistants to be emotion-aware so that they’re less frustrating.
Our phones already do the first two (though, granted, we’re not supposed to look at them while we drive; most cars now let you use Bluetooth to display your phone’s content on the dashboard), and the third is simply a matter of reaching a hand out to turn a dial or press a button. The last seems like a solution for a problem that wouldn’t exist without said… solution.
Despite how unnecessary and unsettling it may seem, though, emotion-reading AI isn’t going away, in cars or other products and services where it might provide value.
Besides automotive AI, Affectiva also makes software for clients in the advertising space. With consent, the built-in camera on users’ laptops records them while they watch ads, gauging their emotional response, what kind of marketing is most likely to engage them, and how likely they are to buy a given product. Emotion-recognition tech is also being used or considered for use in mental health applications, call centers, fraud monitoring, and education, among others.
In a 2015 TED talk, Affectiva co-founder Rana El-Kaliouby told her audience that we’re living in a world increasingly devoid of emotion, and her goal was to bring emotions back into our digital experiences. Soon they’ll be in our cars, too; whether the benefits will outweigh the costs remains to be seen.
Image Credit: Free-Photos from Pixabay