#437816 As Algorithms Take Over More of the ...
Algorithms play an increasingly prominent part in our lives, governing everything from the news we see to the products we buy. As they proliferate, experts say, we need to make sure they don’t collude against us in damaging ways.
Fears of malevolent artificial intelligence plotting humanity’s downfall are a staple of science fiction. But there are plenty of nearer-term situations in which relatively dumb algorithms could do serious harm unintentionally, particularly when they are interlocked in complex networks of relationships.
In the economic sphere, a high proportion of decision-making is already being offloaded to machines, and there have been warning signs of where that could lead if we’re not careful. The 2010 “Flash Crash,” where algorithmic traders helped wipe nearly $1 trillion off the stock market in minutes, is a textbook example, and widespread use of automated trading software has been blamed for the increasing fragility of markets.
But another important place where algorithms could undermine our economic system is in price-setting. Competitive markets are essential for the smooth functioning of the capitalist system that underpins Western society, which is why countries like the US have strict anti-trust laws that prevent companies from creating monopolies or colluding to build cartels that artificially inflate prices.
These regulations were built for an era when pricing decisions could always be traced back to a human, though. As self-adapting pricing algorithms increasingly decide the value of products and commodities, those laws are starting to look unfit for purpose, say the authors of a paper in Science.
Using algorithms to quickly adjust prices in a dynamic market is not a new idea—airlines have been using them for decades—but previously these algorithms operated based on rules that were hard-coded into them by programmers.
Today the pricing algorithms that underpin many marketplaces, especially online ones, rely on machine learning instead. After being set an overarching goal like maximizing profit, they develop their own strategies based on experience of the market, often with little human oversight. The most advanced use forms of AI whose workings would remain opaque even if humans wanted to peer inside.
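To make that concrete, here is a minimal sketch, in Python, of the kind of learning pricer the authors describe. Everything in it is an illustrative assumption rather than any real retailer’s system: the price grid, the toy demand curve, and the epsilon-greedy learning rule. The only instruction the agent receives is to maximize profit; its pricing strategy emerges from market feedback.

```python
import random

# Illustrative assumptions: a fixed grid of candidate prices and a toy
# demand curve. A real system would learn from live sales data instead.
PRICES = [1.0, 1.5, 2.0, 2.5, 3.0]
UNIT_COST = 0.5

def demand(price):
    """Toy linear demand: higher prices sell fewer units (plus noise)."""
    return max(0.0, 10.0 - 3.0 * price + random.gauss(0, 0.5))

# Epsilon-greedy learning: the only objective given is profit.
value = {p: 0.0 for p in PRICES}   # running estimate of profit per price
count = {p: 0 for p in PRICES}

for _ in range(10_000):
    if random.random() < 0.1:                 # occasionally explore
        price = random.choice(PRICES)
    else:                                     # otherwise exploit best estimate
        price = max(PRICES, key=value.get)
    profit = (price - UNIT_COST) * demand(price)
    count[price] += 1
    value[price] += (profit - value[price]) / count[price]

print("learned price:", max(PRICES, key=value.get))
```

A production system would learn from live sales data and far richer state, but the division of labor is the same: humans set the objective, and the algorithm finds the strategy.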
In addition, the public nature of online markets means that competitors’ prices are available in real time. It’s well-documented that major retailers like Amazon and Walmart are engaged in a never-ending bot war, using automated software to constantly snoop on their rivals’ pricing and inventory.
This combination of factors sets the stage perfectly for AI-powered pricing algorithms to adopt collusive pricing strategies, say the authors. If given free rein to develop their own strategies, multiple pricing algorithms with real-time access to each other’s prices could quickly learn that cooperating is the best way to maximize profits.
The authors note that researchers have already found pricing algorithms spontaneously developing collusive strategies in computer-simulated markets, and a recent study suggests pricing algorithms may already be colluding in Germany’s retail gasoline market. And that’s a problem, because today’s anti-trust laws are ill-suited to prosecuting this kind of behavior.
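The simulated-market finding is easy to reproduce in miniature. The sketch below, again built on made-up demand and learning parameters, drops two independent Q-learning pricers into a toy duopoly where each observes only the other’s last posted price. In runs of this kind, the pair can drift toward prices well above the competitive level without ever exchanging a message.

```python
import random
from collections import defaultdict

PRICES = [1.0, 1.5, 2.0, 2.5, 3.0]        # shared, illustrative price grid
COST, ALPHA, GAMMA, EPS = 0.5, 0.1, 0.9, 0.05

def profits(p0, p1):
    """Toy duopoly: total demand falls as prices rise; the cheaper
    seller captures the larger share of it."""
    d = 10.0 - (p0 + p1)
    share0 = 0.5 if p0 == p1 else (0.8 if p0 < p1 else 0.2)
    return (p0 - COST) * d * share0, (p1 - COST) * d * (1 - share0)

# Each agent's state is simply the rival's last posted price.
Q = [defaultdict(float), defaultdict(float)]
last = [random.choice(PRICES), random.choice(PRICES)]

def pick(agent, state):
    if random.random() < EPS:
        return random.choice(PRICES)
    return max(PRICES, key=lambda a: Q[agent][(state, a)])

for _ in range(200_000):
    acts = [pick(0, last[1]), pick(1, last[0])]
    rews = profits(acts[0], acts[1])
    for i in (0, 1):
        s, a, s2 = last[1 - i], acts[i], acts[1 - i]
        best_next = max(Q[i][(s2, b)] for b in PRICES)
        Q[i][(s, a)] += ALPHA * (rews[i] + GAMMA * best_next - Q[i][(s, a)])
    last = acts

print("final prices:", last)   # outcomes vary by run and by parameters
```

Whether supra-competitive pricing emerges depends on the parameters, which is part of the regulatory problem: nothing in the code above mentions cooperation, yet punishment-and-reward dynamics can still produce it.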
Collusion among humans typically involves companies communicating with each other to agree on a strategy that pushes prices above the true market value. They then develop rules for maintaining this markup in a dynamic market, backed by the threat of retaliatory pricing: a price war awaits any cartel member who tries to undercut the agreed strategy.
Because of the complexity of working out whether specific pricing strategies or prices are the result of collusion, prosecutions have instead relied on evidence of communication between companies to establish guilt. That’s a problem, because algorithms don’t need to communicate to collude, and as a result there are few legal mechanisms to prosecute this kind of collusion.
That means legal scholars, computer scientists, economists, and policymakers must come together to find new ways to uncover, prohibit, and prosecute the collusive rules that underpin this behavior, say the authors. Key to this will be auditing and testing pricing algorithms, looking for things like retaliatory pricing, price matching, and aggressive responses to price drops but not price rises.
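One way to operationalize such an audit is to treat a deployed pricer as a black box and probe it for exactly that asymmetry: cut a rival’s price and measure the retaliation, then raise it and see whether the algorithm follows. The harness below is hypothetical; `pricer` stands in for whatever pricing function is under test, and the threshold for flagging asymmetry is an arbitrary illustration.

```python
def audit_asymmetry(pricer, baseline_rival_price=2.0, delta=0.5):
    """Probe a black-box pricing function for asymmetric responses:
    an aggressive reaction to a rival's price drop but not to a rise
    is one marker of a retaliatory, potentially collusive strategy."""
    base = pricer(baseline_rival_price)
    after_drop = pricer(baseline_rival_price - delta)
    after_rise = pricer(baseline_rival_price + delta)
    drop_response = base - after_drop   # how hard it punishes undercutting
    rise_response = after_rise - base   # how eagerly it follows a rise
    return {
        "drop_response": drop_response,
        "rise_response": rise_response,
        "asymmetric": drop_response > 2 * rise_response,  # illustrative threshold
    }

# Usage with a stand-in pricer that matches price cuts but never follows rises:
def suspect(rival_price):
    return min(2.0, rival_price)

print(audit_asymmetry(suspect))
```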
Once collusive pricing rules are uncovered, computer scientists need to come up with ways to constrain algorithms from adopting them without sacrificing their clear efficiency benefits. It could also be helpful to make preventing this kind of collusive behavior the responsibility of the companies deploying the algorithms, with stiff penalties for those who don’t keep them in check.
One problem, though, is that algorithms may evolve strategies that humans would never think of, which could make spotting this behavior tricky. Imbuing courts with the technical knowledge and capacity to investigate this kind of evidence will also prove difficult, but getting to grips with these problems is an even more pressing challenge than it might seem at first.
While anti-competitive pricing algorithms could wreak havoc, there are plenty of other arenas where collusive AI could have even more insidious effects, from military applications to healthcare and insurance. Developing the capacity to predict and prevent AI scheming against us will likely be crucial going forward.
Image Credit: Pexels from Pixabay
#437554 Ending the COVID-19 Pandemic
Photo: F.J. Jimenez/Getty Images
The approach of a new year is always a time to take stock and be hopeful. This year, though, reflection and hope are more than de rigueur—they’re rejuvenating. We’re coming off a year in which doctors, engineers, and scientists took on the most dire public health threat in decades, and in the new year we’ll see the greatest results of those global efforts. COVID-19 vaccines are just months away, and biomedical testing is being revolutionized.
At IEEE Spectrum we focus on the high-tech solutions: Can artificial intelligence (AI) be used to diagnose COVID-19 using cough recordings? Can mathematical modeling determine whether preventive measures against COVID-19 work? Can big data and AI provide accurate pandemic forecasting?
Consider our story “AI Recognizes COVID-19 in the Sound of a Cough,” reported by Megan Scudellari in our Human OS blog. Using a cellphone-recorded cough, machine-learning models can now detect coronavirus with 90 percent accuracy, even in people with no symptoms. It’s a remarkable research milestone. This AI model sifts through hundreds of factors to distinguish the COVID-19 cough from those of bronchitis, whooping cough, and asthma.
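The MIT model itself is not public, but the broad shape of such a classifier is standard: extract compact acoustic features from a recording, then feed them to a trained model. Here is a minimal sketch of that pipeline using the open-source librosa and scikit-learn libraries; the random stand-in audio and labels are placeholders for a real labeled dataset, and nothing here reproduces the published system.

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def cough_features(waveform, sample_rate=16000):
    """Summarize a recording as the mean of its MFCCs, a standard
    compact acoustic feature for audio classification."""
    mfcc = librosa.feature.mfcc(y=waveform, sr=sample_rate, n_mfcc=13)
    return mfcc.mean(axis=1)

# Stand-in data: random one-second "recordings" with random labels.
# A real study would use thousands of labeled cough recordings.
rng = np.random.default_rng(0)
X = np.array([cough_features(rng.standard_normal(16000)) for _ in range(100)])
y = rng.integers(0, 2, size=100)        # 1 = COVID-positive (fake labels here)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("predicted:", model.predict(X[:1]))  # would be a new recording in practice
```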
But while such high-tech triumphs give us hope, the no-tech solutions are mostly what we have to work with. Soon, as our Numbers Don’t Lie columnist, Vaclav Smil, pointed out in a recent email, we will have near-instantaneous home testing, and we will have the ability to use big data to crunch every move and every outbreak. But we are nowhere near that yet. So let’s use, as he says, some old-fashioned kindergarten epidemiology, the no-tech measures, while we work to get there:
Masks: Wear them. If we all did so, we could cut transmission by two-thirds, perhaps even 80 percent; the rough calculation after this list shows how figures in that range can arise.
Hands: Wash them.
Social distancing: If we could all stay home for two weeks, we could see enormous declines in COVID-19 transmission.
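Where do figures like two-thirds or 80 percent come from? A back-of-envelope model: if a mask blocks some fraction of an infected wearer’s outgoing particles and some fraction of a healthy wearer’s incoming ones, universal masking scales per-contact transmission by the product of the two remainders. The filtration efficacies below are illustrative assumptions, not measured values.

```python
# Toy model: with both people masked, per-contact transmission probability
# scales by (1 - source_filtering) * (1 - wearer_filtering).
def transmission_cut(source_eff, wearer_eff):
    remaining = (1 - source_eff) * (1 - wearer_eff)
    return 1 - remaining

# Assumed (illustrative) filtration efficacies:
print(transmission_cut(0.50, 0.30))  # 0.65, roughly "two-thirds"
print(transmission_cut(0.60, 0.50))  # 0.80, the more optimistic figure
```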
These are all time-tested solutions, proven effective ages ago in countless outbreaks of diseases including typhoid and cholera. They’re inexpensive and easy to prescribe, and the regimens are easy to follow.
The conflict between public health and individual rights and privacy, however, is less easy to resolve. Even during the pandemic of 1918–19, there was widespread resistance to mask wearing and social distancing. Fifty million people died—675,000 in the United States alone. Today, we are up to 240,000 deaths in the United States, and the end is not in sight. Antiflu measures were framed in 1918 as a way to protect the troops fighting in World War I, and people who refused to wear masks were called out as “dangerous slackers.” There was a world war, and yet it was still hard to convince people of the need for even such simple measures.
Personally, I have found the resistance to these easy fixes startling. I wouldn’t want maskless, gloveless doctors taking me through a surgical procedure. Or waltzing in from lunch without washing their hands. I’m sure you wouldn’t, either.
Science-based medicine has been one of the world’s greatest and most fundamental advances. In recent years, it has been turbocharged by breakthroughs in genetics technologies, advanced materials, high-tech diagnostics, and implants and other electronics-based interventions. Such leaps have already saved untold lives, but there’s much more to be done. And there will be many more pandemics ahead for humanity.
#437477 If a Robot Is Conscious, Is It OK to ...
In the Star Trek: The Next Generation episode “The Measure of a Man,” Data, an android crew member of the Enterprise, is to be dismantled for research purposes unless Captain Picard can argue that Data deserves the same rights as a human being. Naturally the question arises: What is the basis upon which something has rights? What gives an entity moral standing?
The philosopher Peter Singer argues that creatures that can feel pain or suffer have a claim to moral standing. He argues that nonhuman animals have moral standing, since they can feel pain and suffer. Limiting moral standing to people would be a form of speciesism, something akin to racism and sexism.
Without endorsing Singer’s line of reasoning, we might wonder if it can be extended further to an android robot like Data. It would require that Data can either feel pain or suffer. And how you answer that depends on how you understand consciousness and intelligence.
As real artificial intelligence technology advances toward Hollywood’s imagined versions, the question of moral standing grows more important. If AIs have moral standing, philosophers like me reason, it could follow that they have a right to life. That means you cannot simply dismantle them, and might also mean that people shouldn’t interfere with their pursuing their goals.
Two Flavors of Intelligence and a Test
IBM’s Deep Blue chess machine was designed to beat grandmaster Garry Kasparov, and it succeeded. But it could not do anything else. This computer had what’s called domain-specific intelligence.
On the other hand, there’s the kind of intelligence that allows for the ability to do a variety of things well. It is called domain-general intelligence. It’s what lets people cook, ski, and raise children—tasks that are related, but also very different.
Artificial general intelligence, AGI, is the term for machines that have domain-general intelligence. Arguably no machine has yet demonstrated that kind of intelligence. This summer, a startup called OpenAI released a new version of its Generative Pre-trained Transformer language model. GPT-3 is a natural language processing system trained to read and write text in ways that people can easily understand.
It drew immediate notice, not just because of its impressive ability to mimic stylistic flourishes and put together plausible content, but also because of how far it had come from a previous version. Despite this impressive performance, GPT-3 doesn’t actually know anything beyond how to string words together in various ways. AGI remains quite far off.
Named after pioneering AI researcher Alan Turing, the Turing test helps determine when an AI is intelligent. Can a person conversing with a hidden AI tell whether it’s an AI or a human being? If they can’t, then for all practical purposes, the AI is intelligent. But this test says nothing about whether the AI might be conscious.
Two Kinds of Consciousness
There are two parts to consciousness. First, there’s the what-it’s-like-for-me aspect of an experience, the sensory part of consciousness. Philosophers call this phenomenal consciousness. It’s about how you experience a phenomenon, like smelling a rose or feeling pain.
In contrast, there’s also access consciousness. That’s the ability to report, reason, behave, and act in a coordinated and responsive manner to stimuli based on goals. For example, when I pass the soccer ball to my friend making a play on the goal, I am responding to visual stimuli, acting from prior training, and pursuing a goal determined by the rules of the game. I make the pass automatically, without conscious deliberation, in the flow of the game.
Blindsight nicely illustrates the difference between the two types of consciousness. Someone with this neurological condition might report, for example, that they cannot see anything in the left side of their visual field. But if asked to pick up a pen from an array of objects in the left side of their visual field, they can reliably do so. They cannot see the pen, yet they can pick it up when prompted—an example of access consciousness without phenomenal consciousness.
Data is an android. How do these distinctions play out with respect to him?
The Data Dilemma
The android Data demonstrates that he is self-aware in that he can monitor, for example, whether he is optimally charged or whether there is internal damage to his robotic arm.
Data is also intelligent in the general sense. He does a lot of distinct things at a high level of mastery. He can fly the Enterprise, take orders from Captain Picard, and reason with him about the best path to take.
He can also play poker with his shipmates, cook, discuss topical issues with close friends, fight with enemies on alien planets, and engage in various forms of physical labor. Data has access consciousness. He would clearly pass the Turing test.
However, Data most likely lacks phenomenal consciousness—he does not, for example, delight in the scent of roses or experience pain. He embodies a supersized version of blindsight. He’s self-aware and has access consciousness—can grab the pen—but across all his senses he lacks phenomenal consciousness.
Now, if Data doesn’t feel pain, at least one of the reasons Singer offers for giving a creature moral standing is not fulfilled. But Data might fulfill the other condition of being able to suffer, even without feeling pain. Suffering might not require phenomenal consciousness the way pain essentially does.
For example, what if suffering were also understood as being thwarted from pursuing a just cause without harming others? Suppose Data’s goal is to save his crewmate, but he can’t reach her because of damage to one of his limbs. Data’s reduction in functioning, which keeps him from saving his crewmate, is a kind of nonphenomenal suffering. He would have preferred to save the crewmate, and would be better off if he did.
In the episode, the question ends up resting not on whether Data is self-aware—that is not in doubt. Nor is it in question whether he is intelligent—he easily demonstrates that he is in the general sense. What is unclear is whether he is phenomenally conscious. Data is not dismantled because, in the end, his human judges cannot agree on the significance of consciousness for moral standing.
Should an AI Get Moral Standing?
Data is kind; he acts to support the well-being of his crewmates and those he encounters on alien planets. He obeys orders from people and appears unlikely to harm them, and he seems to protect his own existence. For these reasons he appears peaceful and easier to accept into the realm of things that have moral standing.
But what about Skynet in the Terminator movies? Or the worries recently expressed by Elon Musk about AI being more dangerous than nukes, and by Stephen Hawking on AI ending humankind?
Human beings don’t lose their claim to moral standing just because they act against the interests of another person. In the same way, you can’t automatically say that just because an AI acts against the interests of humanity or another AI it doesn’t have moral standing. You might be justified in fighting back against an AI like Skynet, but that does not take away its moral standing. If moral standing is given in virtue of the capacity to nonphenomenally suffer, then Skynet and Data both get it even if only Data wants to help human beings.
There are no artificial general intelligence machines yet. But now is the time to consider what it would take to grant them moral standing. How humanity chooses to answer the question of moral standing for nonbiological creatures will have big implications for how we deal with future AIs—whether kind and helpful like Data, or set on destruction, like Skynet.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Ico Maker / Shutterstock.com