Tag Archives: dream

#432880 Google’s Duplex Raises the Question: ...

By now, you’ve probably seen Google’s new Duplex software, which promises to call people on your behalf to book appointments for haircuts and the like. As yet, it only exists in demo form, but already it seems like Google has made a big stride towards capturing a market that plenty of companies have had their eye on for quite some time. This software is impressive, but it raises questions.

Many of you will be familiar with the stilted, robotic conversations you could have with early chatbots, which were, essentially, glorified menus. Instead of pressing 1 to confirm or 2 to re-enter, some of these bots would accept simple commands like “Yes” or “No,” replacing the buttons with a limited ability to recognize a few spoken words. Using them was often far more frustrating than navigating a menu—there are few things more irritating than a robot saying, “Sorry, your response was not recognized.”

Google’s demos showed Duplex scheduling a hair salon appointment and calling a restaurant to book a table.

Even getting the response recognized is hard enough. After all, there are countless different nuances and accents to baffle voice recognition software, and endless turns of phrase that amount to saying the same thing that can confound natural language processing (NLP), especially if you like your phrasing quirky.

You may think that standard customer-service type conversations all travel the same route, using similar words and phrasing. But when there are over 80,000 ways to order coffee, and making a mistake is frowned upon, even simple tasks require high accuracy over a huge dataset.
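That multiplicative explosion is easy to see in miniature. Here is a sketch with assumed option counts (the real menu behind the “80,000 ways” figure isn’t specified here), showing how a handful of independent choices compound:

```python
# Illustrative only: made-up option counts whose product reaches 80,000.
from math import prod

options = {
    "drink type": 10,
    "size": 5,
    "milk": 8,
    "syrup": 10,
    "espresso shots": 4,
    "temperature": 5,
}

total_orders = prod(options.values())
print(total_orders)  # 80000
```

Six modest menus of choices already produce eighty thousand distinct orders, each of which the system must hear correctly.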

Advances in audio processing, neural networks, and NLP, as well as raw computing power, have meant that basic recognition of what someone is trying to say is less of an issue. Soundhound’s virtual assistant prides itself on being able to process complicated requests (perhaps needlessly complicated).

The deeper issue, as with all attempts to develop conversational machines, is one of understanding context. There are so many ways a conversation can go that attempting to construct a conversation two or three layers deep quickly runs into problems. Multiply the thousands of things people might say by the thousands they might say next, and the combinatorics of the challenge runs away from most chatbots, leaving them as either glorified menus, gimmicks, or rather bizarre to talk to.
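The runaway arithmetic above can be sketched directly; the branching factor of 1,000 utterances per turn is an assumption for illustration:

```python
# A toy estimate of why scripted dialogue trees explode: if a user might say
# any of `utterances_per_turn` things at each turn, the number of distinct
# conversation paths grows exponentially with conversation depth.
utterances_per_turn = 1000  # assumed branching factor

paths = {depth: utterances_per_turn ** depth for depth in range(1, 4)}
print(paths)  # {1: 1000, 2: 1000000, 3: 1000000000}
```

By the third exchange, a hand-scripted system would need a billion branches, which is why chatbots either constrain the conversation or fall apart.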

Yet Google, which surely remembers from Glass the risk of debuting technology prematurely, especially technology that asks you to rethink how you interact with or trust software, must have faith in Duplex to show it on the world stage. We know that startups like Semantic Machines and x.ai have received serious funding to perform very similar functions, using natural-language conversations to perform computing tasks, schedule meetings, book hotels, or purchase items.

It’s no great leap to imagine Google will soon do the same, bringing us closer to a world of onboard computing, where Lens labels the world around us and its assistant arranges it for us (all the while gathering more and more data it can convert into personalized ads). The early demos showed some clever tricks for keeping the conversation within a fairly narrow realm where the AI should be comfortable and competent, and the blog post that accompanied the release shows just how much effort has gone into the technology.

Yet given the privacy and ethics funk the tech industry finds itself in, and people’s general unease about AI, the main reaction to Duplex’s impressive demo was concern. The voice sounded too natural, bringing to mind Lyrebird and their warnings of deepfakes. You might trust “Do the Right Thing” Google with this technology, but it could usher in an era when automated robo-callers are far more convincing.

A more human-like voice may sound like a perfectly innocuous improvement, but the fact that the assistant interjects naturalistic “umm” and “mm-hm” responses to more perfectly mimic a human rubbed a lot of people the wrong way. This wasn’t just a voice assistant trying to sound less grinding and robotic; it was actively trying to deceive people into thinking they were talking to a human.

Google is running the risk of trying to get to conversational AI by going straight through the uncanny valley.

“Google’s experiments do appear to have been designed to deceive,” said Dr. Thomas King of the Oxford Internet Institute’s Digital Ethics Lab, according to TechCrunch. “Their main hypothesis was ‘can you distinguish this from a real person?’ In this case it’s unclear why their hypothesis was about deception and not the user experience… there should be some kind of mechanism there to let people know what it is they are speaking to.”

From Google’s perspective, being able to say “90 percent of callers can’t tell the difference between this and a human personal assistant” is an excellent marketing ploy, even though statistics about how many interactions are successful might be more relevant.

In fact, Duplex runs contrary to pretty much every major recommendation about ethics for the use of robotics or artificial intelligence, not to mention certain eavesdropping laws. Transparency is key to holding machines (and the people who design them) accountable, especially when it comes to decision-making.

Then there are the more subtle social issues. One prominent effect social media has had is to allow people to silo themselves; in echo chambers of like-minded individuals, it’s hard to see how other opinions exist. Technology exacerbates this by removing the evolutionary cues that go along with face-to-face interaction. Confronted with a pair of human eyes, people are more generous. Confronted with a Twitter avatar or a Facebook interface, people hurl abuse and criticism they’d never dream of using in a public setting.

Now that we can use technology to interact with ever fewer people, will it change us? Is it fair to offload the burden of dealing with a robot onto the poor human at the other end of the line, who might have to deal with dozens of such calls a day? Google has said that if the AI is in trouble, it will put you through to a human, which might help save receptionists from the hell of trying to explain a concept to dozens of dumbfounded AI assistants all day. But there’s always the risk that failures will be blamed on the person and not the machine.

As AI advances, could we end up treating the dwindling number of people in these “customer-facing” roles as the buggiest part of a fully automatic service? Will people start accusing each other of being robots on the phone, as well as on Twitter?

Google has provided plenty of reassurances about how the system will be used. They have said they will ensure that the system is identified, and it’s hardly difficult to resolve this problem; a slight change in the script from their demo would do it. For now, consumers will likely appreciate moves that make it clear whether the “intelligent agents” that make major decisions for us, that we interact with daily, and that hide behind social media avatars or phone numbers are real or artificial.

Image Credit: Besjunior / Shutterstock.com

Posted in Human Robots

#432671 Stuff 3.0: The Era of Programmable ...

It’s the end of a long day in your apartment in the early 2040s. You decide your work is done for the day, stand up from your desk, and yawn. “Time for a film!” you say. The house responds to your cues. The desk splits into hundreds of tiny pieces, which flow behind you and take on shape again as a couch. The computer screen you were working on flows up the wall and expands into a flat projection screen. You relax into the couch and, after a few seconds, a remote control surfaces from one of its arms.

In a few seconds flat, you’ve gone from a neatly-equipped office to a home cinema…all within the same four walls. Who needs more than one room?

This is the dream of those who work on “programmable matter.”

In his recent book about AI, Max Tegmark makes a distinction between three different levels of computational sophistication for organisms. Life 1.0 is single-celled organisms like bacteria; here, hardware is indistinguishable from software. The behavior of the bacteria is encoded into its DNA; it cannot learn new things.

Life 2.0 is where humans live on the spectrum. We are more or less stuck with our hardware, but we can change our software by choosing to learn different things, say, Spanish instead of Italian. Much like managing space on your smartphone, your brain’s hardware will allow you to download only a certain number of packages, but, at least theoretically, you can learn new behaviors without changing your underlying genetic code.

Life 3.0 marks a step-change from this: creatures that can change both their hardware and software in something like a feedback loop. This is what Tegmark views as a true artificial intelligence—one that can learn to change its own base code, leading to an explosion in intelligence. Perhaps, with CRISPR and other gene-editing techniques, we could be using our “software” to doctor our “hardware” before too long.

Programmable matter extends this analogy to the things in our world: what if your sofa could “learn” how to become a writing desk? What if, instead of a Swiss Army knife with dozens of tool attachments, you just had a single tool that “knew” how to become any other tool you could require, on command? In the crowded cities of the future, could houses be replaced by single, OmniRoom apartments? It would save space, and perhaps resources too.

Such are the dreams, anyway.

But when engineering and manufacturing individual gadgets is already such a complex process, making stuff that can turn into many different items is more complicated still. Professor Skylar Tibbits at MIT referred to it as 4D printing in a TED Talk, and the website for his research group, the Self-Assembly Lab, excitedly claims, “We have also identified the key ingredients for self-assembly as a simple set of responsive building blocks, energy and interactions that can be designed within nearly every material and machining process available. Self-assembly promises to enable breakthroughs across many disciplines, from biology to material science, software, robotics, manufacturing, transportation, infrastructure, construction, the arts, and even space exploration.”

Naturally, their projects are still in the early stages, but the Self-Assembly Lab and others are genuinely exploring just the kind of science fiction applications we mooted.

For example, there’s the cell-phone self-assembly project, which brings to mind eerie, 24/7 factories where mobile phones assemble themselves from 3D printed kits without human or robotic intervention. Okay, so the phones they’re making are hardly going to fly off the shelves as fashion items, but if all you want is something that works, it could cut manufacturing costs substantially and automate even more of the process.

One of the major hurdles to overcome in making programmable matter a reality is choosing the right fundamental building blocks. There’s a very important balance to strike. If the pieces are too big, the rearranged matter ends up lumpy: fine details become impossible, which would rule out applications like tools for fine manipulation, and it might be difficult to simulate a range of textures. On the other hand, if the pieces are too small, different problems arise.

Imagine a setup where each piece is a small robot. You have to contain the robot’s power source and its brain, or at least some kind of signal-generator and signal-processor, all in the same compact unit. Perhaps a range of textures and strengths could be simulated by changing the strength of the “bond” between individual units—your desk might need to be a little firmer than your bed, which might be nicer with a little more give.
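That “firmer desk, softer bed” idea can be sketched with the simplest possible model, treating each inter-unit bond as a Hooke’s-law spring with a programmable stiffness. All the constants here are made-up illustrations, not measurements of any real system:

```python
# Toy sketch of "programmable stiffness": model the bond between units as a
# spring with an adjustable constant k. Under the same load, a stiff "desk"
# bond deflects less than a soft "bed" bond.
def deflection(force_n: float, stiffness_n_per_m: float) -> float:
    """Hooke's law: displacement x = F / k."""
    return force_n / stiffness_n_per_m

desk_bond_k = 50_000.0  # N/m, assumed "firm" setting
bed_bond_k = 2_000.0    # N/m, assumed "soft" setting
load = 100.0            # N, e.g. a hand pressing down

print(deflection(load, desk_bond_k))  # 0.002
print(deflection(load, bed_bond_k))   # 0.05
```

The same pieces under the same load give 2 mm of flex in “desk mode” and 50 mm in “bed mode,” just by reprogramming one parameter of the bond.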

Early steps toward creating this kind of matter have been taken by those who are developing modular robots. There are plenty of different groups working on this, including MIT, Lausanne, and the University of Brussels.

In the latter configuration, one individual robot acts as a centralized decision-maker, referred to as the brain unit, but additional robots can autonomously join the brain unit as and when needed to change the shape and structure of the overall system. Although the system is only ten units at present, it’s a proof-of-concept that control can be orchestrated over a modular system of robots; perhaps in the future, smaller versions of the same thing could be the components of Stuff 3.0.
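As a rough sketch of that control architecture (an assumed design for illustration, not the actual interface of the Brussels system), one unit can act as the brain while others dock with it and cede control:

```python
# Minimal sketch of a mergeable modular robot: one unit is the centralized
# decision-maker (the "brain unit"); others join it, growing the structure
# the brain coordinates.
class Unit:
    def __init__(self, uid: int):
        self.uid = uid

class BrainUnit(Unit):
    def __init__(self, uid: int):
        super().__init__(uid)
        self.members: list[Unit] = [self]  # the brain counts itself

    def absorb(self, unit: Unit) -> None:
        """An autonomous unit docks and cedes control to the brain."""
        self.members.append(unit)

    def size(self) -> int:
        return len(self.members)

brain = BrainUnit(0)
for i in range(1, 10):
    brain.absorb(Unit(i))
print(brain.size())  # 10
```

Ten units under one brain mirrors the scale of the proof-of-concept; the interesting engineering is in making `absorb` work physically, not in the bookkeeping.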

You can imagine that with machine learning algorithms, such swarms of robots might be able to negotiate obstacles and respond to a changing environment more easily than an individual robot (those of you with techno-fear may read “respond to a changing environment” and imagine a robot seamlessly rearranging itself to allow a bullet to pass straight through without harm).

Speaking of robotics, the form of an ideal robot has been a subject of much debate. In fact, one of the major recent robotics competitions—DARPA’s Robotics Challenge—was won by a robot that could adapt: KAIST’s DRC-HUBO beat teams fielding Boston Dynamics’ famous ATLAS humanoid with the simple addition of wheels that let it drive as well as walk.

Rather than building robots into a humanoid shape (only sometimes useful), allowing them to evolve and discover the ideal form for performing whatever you’ve tasked them to do could prove far more useful. This is particularly true in disaster response, where expensive robots can still be more valuable than humans, but conditions can be very unpredictable and adaptability is key.

Further afield, many futurists imagine “foglets” as the tiny nanobots that will be capable of constructing anything from raw materials, somewhat like the “Santa Claus machine.” But you don’t necessarily need anything quite so indistinguishable from magic to be useful. Programmable matter that can respond and adapt to its surroundings could be used in all kinds of industrial applications. How about a pipe that can strengthen or weaken at will, or divert its direction on command?

We’re some way off from being able to order our beds to turn into bicycles. As with many tech ideas, it may turn out that the traditional low-tech solution is far more practical and cost-effective, even as we can imagine alternatives. But as the march to put a chip in every conceivable object goes on, it seems certain that inanimate objects are about to get a lot more animated.

Image Credit: PeterVrabel / Shutterstock.com


#432568 Tech Optimists See a Golden ...

Technology evangelists dream about a future where we’re all liberated from the more mundane aspects of our jobs by artificial intelligence. Other futurists go further, imagining AI will enable us to become superhuman, enhancing our intelligence, abandoning our mortal bodies, and uploading ourselves to the cloud.

Paradise is all very well, although your mileage may vary on whether these scenarios are realistic or desirable. The real question is, how do we get there?

Economist John Maynard Keynes notably argued in favor of active intervention when an economic crisis hits, rather than waiting for the markets to settle down to a more healthy equilibrium in the long run. His rebuttal to critics was, “In the long run, we are all dead.” After all, if it takes 50 years of upheaval and economic chaos for things to return to normality, there has been an immense amount of human suffering first.

Similar problems arise with the transition to a world where AI is intimately involved in our lives. In the long term, automation of labor might benefit the human species immensely. But in the short term, it has all kinds of potential pitfalls, especially in exacerbating inequality within societies where AI takes on a larger role. A new report from the Institute for Public Policy Research has deep concerns about the future of work.

Uneven Distribution
While the report doesn’t foresee the same gloom and doom of mass unemployment that other commentators have considered, the concern is that the gains in productivity and economic benefits from AI will be unevenly distributed. In the UK, jobs that account for £290 billion worth of wages in today’s economy could potentially be automated with current technology. But these are disproportionately jobs held by people who are already suffering from social inequality.

Low-wage jobs are five times more likely to be automated than high-wage jobs. A greater proportion of jobs held by women are likely to be automated. The solution that’s often suggested is that people should simply “retrain”; but if no funding or assistance is provided, this burden is too much to bear. You can’t expect people to seamlessly transition from driving taxis to writing self-driving car software without help. As we have already seen, inequality is exacerbated when jobs that don’t require advanced education (even if they require a great deal of technical skill) are the first to go.

No Room for Beginners
Optimists say algorithms won’t replace humans, but will instead liberate us from the dull parts of our jobs. Lawyers used to have to spend hours trawling through case law to find legal precedents; now AI can identify the most relevant documents for them. Doctors no longer need to look through endless scans and perform diagnostic tests; machines can do this, leaving the decision-making to humans. This boosts productivity and provides invaluable tools for workers.

But there are issues with this rosy picture. If humans need to do less work, the economic incentive is for the boss to reduce their hours. Some of these “dull, routine” parts of the job were traditionally how people getting into the field learned the ropes: paralegals used to look through case law, but AI may render them obsolete. Even in the field of journalism, there’s now software that will rewrite press releases for publication, traditionally something close to an entry-level task. If there are no entry-level jobs, or if entry-level now requires years of training, the result is to exacerbate inequality and reduce social mobility.

Automating Our Biases
The adoption of algorithms into employment has already had negative impacts on equality. Cathy O’Neil, who holds a mathematics PhD from Harvard, raises these concerns in her excellent book Weapons of Math Destruction. She notes that algorithms designed by humans often encode the biases of that society, whether they’re racial or based on gender and sexuality.

Google’s search engine advertises more executive-level jobs to users it thinks are male. AI programs predict that black offenders are more likely to re-offend than white offenders; they receive correspondingly longer sentences. It needn’t necessarily be that bias has been actively programmed; perhaps the algorithms just learn from historical data, but this means they will perpetuate historical inequalities.

Take candidate-screening software HireVue, used by many major corporations to assess new employees. It analyzes the “verbal and non-verbal cues” of candidates, comparing them to employees who historically did well. According to O’Neil, such tools are “using people’s fear and trust of mathematics to prevent them from asking questions.” With no transparency or understanding of how the algorithm generates its results, and no consensus over who’s responsible for the results, discrimination can occur automatically, on a massive scale.
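How a system can “automate bias” without anyone programming it in can be shown with a deliberately minimal toy, unrelated to any real screening product: a “model” that merely memorizes historical hire rates will score equally qualified candidates differently.

```python
# Toy illustration of learned bias. Each record is (group, qualified, hired).
# Groups A and B are equally qualified, but group B was historically hired
# less often; a model fit to this history reproduces the gap.
history = (
    [("A", True, True)] * 80 + [("A", True, False)] * 20 +
    [("B", True, True)] * 40 + [("B", True, False)] * 60
)

def learned_hire_rate(group: str) -> float:
    """A 'model' that just memorizes historical hire rates per group."""
    outcomes = [hired for g, _, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

# Equally qualified candidates receive very different scores:
print(learned_hire_rate("A"))  # 0.8
print(learned_hire_rate("B"))  # 0.4
```

No line of this code mentions prejudice; the bias lives entirely in the training data, which is exactly the mechanism O’Neil describes.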

Combine this with other demographic trends. In rich countries, people are living longer. An increasing burden will be placed on a shrinking tax base to support that elderly population. A recent study said that due to the accumulation of wealth in older generations, millennials stand to inherit more than any previous generation, but it won’t happen until they’re in their 60s. Meanwhile, those with savings and capital will benefit as the economy shifts: the stock market and GDP will grow, but wages and equality will fall, a situation that favors people who are already wealthy.

Even in the most dramatic AI scenarios, inequality is exacerbated. If someone develops a general intelligence that’s near-human or super-human, and they manage to control and monopolize it, they instantly become immensely wealthy and powerful. If the glorious technological future that Silicon Valley enthusiasts dream about is only going to serve to make the growing gaps wider and strengthen existing unfair power structures, is it something worth striving for?

What Makes a Utopia?
We urgently need to redefine our notion of progress. Philosophers worry about an AI that is misaligned—the things it seeks to maximize are not the things we want maximized. At the same time, we measure the development of our countries by GDP, not the quality of life of workers or the equality of opportunity in the society. Growing wealth with increased inequality is not progress.

Some people will take the position that there are always winners and losers in society, and that any attempt to redress the inequalities of our society will stifle economic growth and leave everyone worse off. Others will see this as an argument for a new economic model, based around universal basic income. Any moves towards this will need to take care that it’s affordable, sustainable, and doesn’t lead towards an entrenched two-tier society.

Walter Scheidel’s book The Great Leveller is a huge survey of inequality across all of human history, from the 21st century back to prehistoric cave-dwellers. He argues that only revolutions, wars, and other catastrophes have historically reduced inequality: a perfect example is the Black Death in Europe, which (by reducing the population and therefore the labor supply that was available) increased wages and reduced inequality. Meanwhile, our solution to the financial crisis of 2007-8 may have only made the problem worse.

But in a world of nuclear weapons, of biowarfare, of cyberwarfare—a world of unprecedented, complex, distributed threats—the consequences of these “safety valves” could be worse than ever before. Inequality increases the risk of global catastrophe, and global catastrophes could scupper any progress towards the techno-utopia that the utopians dream of. And a society with entrenched inequality is no utopia at all.

Image Credit: OliveTree / Shutterstock.com


#432303 What If the AI Revolution Is Neither ...

Why does everyone assume that the AI revolution will either lead to a fiery apocalypse or a glorious utopia, and not something in between? Of course, part of this is down to the fact that you get more attention by saying “The end is nigh!” or “Utopia is coming!”

But part of it is down to how humans think about change, especially unprecedented change. Millenarianism doesn’t have anything to do with being a “millennial,” being born in the 90s and remembering Buffy the Vampire Slayer. It is a way of thinking about the future that involves a deeply ingrained sense of destiny. A definition might be: “Millenarianism is the expectation that the world as it is will be destroyed and replaced with a perfect world, that a redeemer will come to cast down the evil and raise up the righteous.”

Millenarian beliefs, then, intimately link together the ideas of destruction and creation. They involve the idea of a huge, apocalyptic, seismic shift that will destroy the fabric of the old world and create something entirely new. Similar belief systems exist in many of the world’s major religions, and also the unspoken religion of some atheists and agnostics, which is a belief in technology.

Look at some futurist beliefs around the technological Singularity. In Ray Kurzweil’s vision, the Singularity is the establishment of paradise. Everyone is rendered immortal by biotechnology that can cure our ills; our brains can be uploaded to the cloud; inequality and suffering wash away under the wave of these technologies. The “destruction of the world” is replaced by a Silicon Valley buzzword favorite: disruption. And, as with many millenarian beliefs, your mileage varies on whether this destruction paves the way for a new utopia—or simply ends the world.

There are good reasons to be skeptical and interrogative towards this way of thinking. The most compelling reason is probably that millenarian beliefs seem to be a default mode of how humans think about change; just look at how many variants of this belief have cropped up all over the world.

These beliefs are present in aspects of Christian theology, although they only really became mainstream in their modern form in the 19th and 20th centuries. Take the idea of the Tribulation, many years of hardship and suffering that precede the Rapture, when the righteous will be raised up and the evil punished. After this destruction, the world will be made anew, or humans will ascend to paradise.

Despite being dogmatically atheist, Marxism has many of the same beliefs. It is all about a deterministic view of history that builds to a crescendo. In the same way as Rapture-believers look for signs that prophecies are beginning to be fulfilled, so Marxists look for evidence that we’re in the late stages of capitalism. They believe that, inevitably, society will degrade and degenerate to a breaking point—just as some millenarian Christians do.

In Marxism, this is when the exploitation of the working class by the rich becomes unsustainable, and the working class bands together and overthrows the oppressors. The “tribulation” is replaced by a “revolution.” Sometimes revolutionary figures, like Lenin, or Marx himself, are heralded as messiahs who accelerate the onset of the Millennium; and their rhetoric involves utterly smashing the old system such that a new world can be built. Of course, there is judgment, when the righteous workers take what’s theirs and the evil bourgeoisie are destroyed.

Even Norse mythology has an element of this, as James Hughes points out in his essay in Nick Bostrom’s book Global Catastrophic Risks. Ragnarok involves men and gods being defeated in a final, apocalyptic battle—but because that was a little bleak, they add in the idea that a new earth will arise where the survivors will live in harmony.

Judgement day is a cultural trope, too. Take the ancient Egyptians and their beliefs around the afterlife; the Lord of the underworld, Osiris, weighs the mortal’s heart against a feather. “Should the heart of the deceased prove to be heavy with wrongdoing, it would be eaten by a demon, and the hope of an afterlife vanished.”

Perhaps in the Singularity, something similar goes on. As our technology and hence our power improve, a final reckoning approaches: our hearts, as humans, will be weighed against a feather. If they prove too heavy with wrongdoing—with misguided stupidity, with arrogance and hubris, with evil—then we will fail the test, and we will destroy ourselves. But if we pass, and emerge from the Singularity and all of its threats and promises unscathed, then we will have paradise. And, like the other belief systems, there’s no room for non-believers; all of society is going to be radically altered, whether you want it to be or not, whether it benefits you or leaves you behind. A technological rapture.

It almost seems like every major development provokes this response. Nuclear weapons did, too. Either this would prove the final straw and we’d destroy ourselves, or nuclear energy could be harnessed to build a better world. People talked at the dawn of the nuclear age about electricity that was “too cheap to meter.” The scientists who worked on the bomb often thought that with such destructive power in human hands, we’d be forced to cooperate and work together as a species.

When we see the same response over and over again to different circumstances, cropping up in different areas, whether it’s science, religion, or politics, we need to consider human biases. We like millenarian beliefs; and so when the idea of artificial intelligence outstripping human intelligence emerges, these beliefs spring up around it.

We don’t love facts. We don’t love information. We aren’t as rational as we’d like to think. We are creatures of narrative. Physicists observe the world and weave their observations into narrative theories, stories about little billiard balls whizzing around and hitting each other, or space and time that bend and curve and expand. Historians try to make sense of an endless stream of events. We rely on stories: stories that make sense of the past, justify the present, and prepare us for the future.

And as stories go, the millenarian narrative is a brilliant and compelling one. It can lead you towards social change, as in the case of the Communists, or the Buddhist uprisings in China. It can justify your present-day suffering, if you’re in the tribulation. It gives you hope that your life is important and has meaning. It gives you a sense that things are evolving in a specific direction, according to rules—not just randomly sprawling outwards in a chaotic way. It promises that the righteous will be saved and the wrongdoers will be punished, even if there is suffering along the way. And, ultimately, a lot of the time, the millenarian narrative promises paradise.

We need to be wary of the millenarian narrative when we’re considering technological developments and the Singularity and existential risks in general. Maybe this time is different, but we’ve cried wolf many times before. There is a more likely, less appealing story. Something along the lines of: there are many possibilities, none of them are inevitable, and lots of the outcomes are less extreme than you might think—or they might take far longer than you think to arrive. On the surface, it’s not satisfying. It’s so much easier to think of things as either signaling the end of the world or the dawn of a utopia—or possibly both at once. It’s a narrative we can get behind, a good story, and maybe, a nice dream.

But dig a little below the surface, and you’ll find that the millenarian beliefs aren’t always the most promising ones, because they remove human agency from the equation. If you think that, say, the malicious use of algorithms, or the control of superintelligent AI, are serious and urgent problems that are worth solving, you can’t be wedded to a belief system that insists utopia or dystopia are inevitable. You have to believe in the shades of grey—and in your own ability to influence where we might end up. As we move into an uncertain technological future, we need to be aware of the power—and the limitations—of dreams.

Image Credit: Photobank gallery / Shutterstock.com



#432249 New Malicious AI Report Outlines Biggest ...

Everyone’s talking about deep fakes: audio-visual imitations of people, generated by increasingly powerful neural networks, that will soon be indistinguishable from the real thing. Politicians are regularly laid low by scandals that arise from audio-visual recordings. Watch the footage researchers have generated of Barack Obama from his speeches, or listen to Lyrebird’s voice impersonations. You could easily, today or in the very near future, create a forgery that might be indistinguishable from the real thing. What would that do to politics?

Once the internet is flooded with plausible-seeming tapes and recordings of this sort, how are we going to decide what’s real and what isn’t? Democracy, and our ability to counteract threats, is already threatened by a lack of agreement on the facts. Once you can’t believe the evidence of your senses anymore, we’re in serious trouble. Ultimately, you can dream up all kinds of utterly terrifying possibilities for these deep fakes, from fake news to blackmail.

How to solve the problem? Some have suggested that media websites like Facebook or Twitter should carry software that probes every video to see if it’s a deep fake or not and labels the fakes. But this will prove computationally intensive. Plus, imagine a case where we have such a system, and a fake is “verified as real” by news media algorithms that have been fooled by clever hackers.

The other alternative is even more dystopian: you can prove something isn’t true simply by always having an alibi. Lawfare describes a “solution” where those concerned about deep fakes have all of their movements and interactions recorded. So to avoid being blackmailed or having your reputation ruined, you just consent to some company engaging in 24/7 surveillance of everything you say or do and having total power over that information. What could possibly go wrong?

The point is, in the same way that you don’t need human-level, general AI or humanoid robotics to create systems that can cause disruption in the world of work, you also don’t need a general intelligence to threaten security and wreak havoc on society. AI researcher Andrew Ng says that worrying about the risks from superintelligent AI is like “worrying about overpopulation on Mars”—yet there are clearly risks that arise even from the simple algorithms we have today.

The looming issue of deep fakes is just one of the threats considered by the new malicious AI report, which has co-authors from the Future of Humanity Institute and the Centre for the Study of Existential Risk (among other organizations). They limit their focus to the technologies of the next five years.

Some of the concerns the report explores are enhancements to familiar threats.

Automated hacking can get better and smarter, with algorithms adapting to changing security protocols. “Phishing emails,” where people are scammed by impersonating someone they trust or an official organization, could be generated en masse and made more realistic by scraping data from social media. Standard phishing works by sending such a high volume of emails that even a very low success rate can be profitable. Spear phishing aims at specific targets, impersonating family members or trusted colleagues, but is labor intensive. If AI algorithms allow every phishing email to be personalized in this way, far more people are going to get scammed.

Then there are novel threats that come from our own increasing use of and dependence on artificial intelligence to make decisions.

These algorithms may be smart in some ways, but as any human knows, computers are utterly lacking in common sense; they can be fooled. A rather scary illustration is adversarial examples. Machine learning algorithms are often used for image recognition, but if you know a little about how an algorithm is structured, it’s possible to construct precisely the right pattern of noise to add to an image and fool the machine. The original and perturbed images can be almost completely indistinguishable to the human eye, yet the cleverly calculated noise fools the algorithm into thinking an image of a panda is really an image of a gibbon (in the OpenAI example). Research from OpenAI demonstrates that you can fool algorithms even by printing out adversarial examples on stickers.
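The core idea can be sketched with a toy model. The sketch below uses a stand-in linear classifier rather than a real image network, and the weights and input are purely illustrative; but the trick is the same one used in the fast gradient sign method: because the attacker knows the model’s gradient with respect to the input, a tiny, bounded nudge to every “pixel” can flip the prediction.

```python
import numpy as np

# Toy stand-in for an image classifier: a fixed linear model.
# The weights and input are illustrative, not from any real network.
w = np.linspace(-1.0, 1.0, 64)   # one weight per "pixel"

def predict(x):
    """Class 1 if the model's score is positive, else class 0."""
    return int(x @ w > 0)

def fgsm_perturb(x, epsilon):
    """Fast-gradient-sign-style attack, specialized to a linear model:
    the gradient of the score with respect to the input is just w,
    so a step of size epsilon along sign(w) (or against it) moves the
    score as far as possible per unit of per-pixel change."""
    direction = 1 if predict(x) == 0 else -1
    return x + direction * epsilon * np.sign(w)

x = -0.1 * np.sign(w)            # a "clean" input the model scores negative
x_adv = fgsm_perturb(x, epsilon=0.5)

print(predict(x), predict(x_adv))          # → 0 1  (prediction flipped)
print(float(np.max(np.abs(x_adv - x))))    # → 0.5  (each pixel barely moved)
```

No pixel changes by more than 0.5, yet the classification flips—the essence of why two near-identical images can be labeled “panda” and “gibbon.”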

Now imagine that instead of tricking a computer into thinking that a panda is actually a gibbon, you fool it into thinking that a stop sign isn’t there, or that the back of someone’s car is really a nice open stretch of road. In the adversarial example case, the images are almost indistinguishable to humans. By the time anyone notices the road sign has been “hacked,” it could already be too late.

As OpenAI freely admits, worrying about whether we’d be able to tame a superintelligent AI is a hard problem. It looks all the more difficult when you realize some of our best algorithms can be fooled by stickers; even “modern simple algorithms can behave in ways we do not intend.”

There are ways to defend against these attacks.

Adversarial training can generate lots of adversarial examples and explicitly train the algorithm not to be fooled by them—but it’s costly in terms of time and computation, and puts you in an arms race with hackers. Many strategies for defending against adversarial examples haven’t proved adaptive enough; correcting against vulnerabilities one at a time is too slow. Moreover, it demonstrates a point that can be lost in the AI hype: algorithms can be fooled in ways we didn’t anticipate. If we don’t learn about these vulnerabilities until the algorithms are everywhere, serious disruption can occur. And no matter how careful you are, some vulnerabilities are likely to remain to be exploited, even if it takes years to find them.
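Adversarial training can itself be sketched in a few lines. The example below is a minimal, illustrative version on a toy logistic-regression classifier (the data, epsilon, and learning rate are all made up for the sketch): at each step it generates a worst-case perturbation of every training point and trains on the clean and perturbed points together—which is exactly why it roughly doubles the training cost.

```python
import numpy as np

# Toy adversarial-training loop on synthetic, linearly separable data.
rng = np.random.default_rng(1)
n, d = 200, 20
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)   # labels from a hidden linear rule

w = np.zeros(d)                      # model weights to learn
epsilon, lr = 0.1, 0.5               # attack budget and learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(300):
    # FGSM-style worst-case perturbation of each training point:
    # for a linear model, the input gradient of the loss is
    # (prediction - label) * w, so step along its sign.
    grad_x = (sigmoid(X @ w) - y)[:, None] * w[None, :]
    X_adv = X + epsilon * np.sign(grad_x)
    # Train on clean and adversarial examples together.
    X_all = np.vstack([X, X_adv])
    y_all = np.concatenate([y, y])
    grad_w = X_all.T @ (sigmoid(X_all @ w) - y_all) / len(y_all)
    w -= lr * grad_w

acc = np.mean((X @ w > 0) == (y == 1))
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

Even in this toy setting, every epoch recomputes an attack for every example—and the model is only hardened against this one attack, which is the arms-race problem in miniature.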

Just look at the Meltdown and Spectre vulnerabilities, which lurked in processors for more than 20 years before becoming widely known, yet could enable hackers to steal personal information. Ultimately, the more blind faith we put into algorithms and computers—without understanding the opaque inner mechanics of how they work—the more vulnerable we will be to these forms of attack. And, as China dreams of using AI to predict crimes and enhance the police force, the potential for unjust arrests can only increase.

This is before you get into the truly nightmarish territory of “killer robots”—not the Terminator, but instead autonomous or consumer drones which could potentially be weaponized by bad actors and used to conduct attacks remotely. Some reports have indicated that terrorist organizations are already trying to do this.

As with any form of technology, new powers for humanity come with new risks. And, as with any form of technology, closing Pandora’s box will prove very difficult.

Somewhere between the excessively hyped prospects of AI that will do everything for us and AI that will destroy the world lies reality: a complex, ever-changing set of risks and rewards. The writers of the malicious AI report note that one of their key motivations is ensuring that the benefits of new technology can be delivered to people as quickly, but as safely, as possible. In the rush to exploit the potential for algorithms and create 21st-century infrastructure, we must ensure we’re not building in new dangers.

Image Credit: lolloj / Shutterstock.com

Posted in Human Robots