
#432249 New Malicious AI Report Outlines Biggest ...

Everyone’s talking about deep fakes: audio-visual imitations of people, generated by increasingly powerful neural networks, that will soon be indistinguishable from the real thing. Politicians are regularly laid low by scandals that arise from audio-visual recordings. Watch the footage synthesized from Barack Obama’s speeches, or listen to Lyrebird’s voice impersonations. You could easily, today or in the very near future, create a forgery that might be indistinguishable from the real thing. What would that do to politics?

Once the internet is flooded with plausible-seeming tapes and recordings of this sort, how are we going to decide what’s real and what isn’t? Democracy, and our ability to counteract threats, is already undermined by a lack of agreement on the facts. Once we can’t believe the evidence of our senses anymore, we’re in serious trouble. Ultimately, you can dream up all kinds of utterly terrifying possibilities for these deep fakes, from fake news to blackmail.

How to solve the problem? Some have suggested that media websites like Facebook or Twitter should carry software that probes every video to see if it’s a deep fake or not and labels the fakes. But this will prove computationally intensive. Plus, imagine a case where we have such a system, and a fake is “verified as real” by news media algorithms that have been fooled by clever hackers.

The other alternative is even more dystopian: you can prove something isn’t true simply by always having an alibi. Lawfare describes a “solution” where those concerned about deep fakes have all of their movements and interactions recorded. So to avoid being blackmailed or having your reputation ruined, you just consent to some company engaging in 24/7 surveillance of everything you say or do and having total power over that information. What could possibly go wrong?

The point is, in the same way that you don’t need human-level, general AI or humanoid robotics to create systems that can cause disruption in the world of work, you also don’t need a general intelligence to threaten security and wreak havoc on society. Andrew Ng, AI researcher, says that worrying about the risks from superintelligent AI is like “worrying about overpopulation on Mars.” There are clearly risks that arise even from the simple algorithms we have today.

The looming issue of deep fakes is just one of the threats considered by the new malicious AI report, which has co-authors from the Future of Humanity Institute and the Centre for the Study of Existential Risk (among other organizations). They limit their focus to the technologies of the next five years.

Some of the concerns the report explores are enhancements to familiar threats.

Automated hacking can get better and smarter, with algorithms that adapt to changing security protocols. “Phishing emails,” where people are scammed by messages impersonating someone they trust or an official organization, could be generated en masse and made more realistic by scraping data from social media. Standard phishing works by sending such a great volume of emails that even a very low success rate can be profitable. Spear phishing aims at specific targets by impersonating family members or close contacts, but it is labor intensive. If AI algorithms enable every phishing scam to become this sharp, more people are going to get fleeced.

Then there are novel threats that come from our own increasing use of and dependence on artificial intelligence to make decisions.

These algorithms may be smart in some ways, but as any human knows, computers are utterly lacking in common sense; they can be fooled. A rather scary application is adversarial examples. Machine learning algorithms are often used for image recognition, but if you know a little about how the algorithm is structured, it’s possible to construct the perfect level of noise to add to an image and fool the machine. Two images can be almost completely indistinguishable to the human eye, yet by adding some cleverly calculated noise, hackers can fool the algorithm into thinking an image of a panda is really an image of a gibbon (in the OpenAI example). Research conducted by OpenAI demonstrates that you can fool algorithms even by printing out adversarial examples on stickers.
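The trick is simpler than it sounds. Below is a toy sketch of the fast gradient sign method (FGSM), a standard way of crafting adversarial examples, applied to a hypothetical two-number “image” and an invented logistic classifier; the weights, epsilon, and panda/gibbon labels are all made up for illustration:

```python
import numpy as np

# Hypothetical toy classifier: logistic regression on a 2-pixel "image".
# Class 0 = "panda", class 1 = "gibbon" (labels are illustrative only).
w = np.array([2.0, -1.0])   # assumed learned weights
b = 0.0

def predict(x):
    return 1 if x @ w + b > 0 else 0

def fgsm(x, y_true, eps):
    # FGSM: nudge the input in the direction that increases the loss,
    # scaled by a small epsilon so the change stays nearly invisible.
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))        # sigmoid
    grad = (p - y_true) * w              # gradient of logistic loss w.r.t. x
    return x + eps * np.sign(grad)

x = np.array([0.2, 1.0])                 # classified as "panda" (class 0)
x_adv = fgsm(x, y_true=0, eps=0.4)

print(predict(x), predict(x_adv))        # 0 1 — a tiny nudge flips the label
print(np.abs(x_adv - x).max())           # ≈ 0.4 — change stays within the eps bound
```

The same arithmetic scales up to real image classifiers: the per-pixel change can be small enough to be invisible while still flipping the prediction.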

Now imagine that instead of tricking a computer into thinking that a panda is actually a gibbon, you fool it into thinking that a stop sign isn’t there, or that the back of someone’s car is really a nice open stretch of road. In the adversarial example case, the images are almost indistinguishable to humans. By the time anyone notices the road sign has been “hacked,” it could already be too late.

As the OpenAI foundation freely admits, worrying about whether we’d be able to tame a superintelligent AI is a hard problem. It looks all the more difficult when you realize some of our best algorithms can be fooled by stickers; even “modern simple algorithms can behave in ways we do not intend.”

There are ways to defend against these attacks.

Adversarial training can generate lots of adversarial examples and explicitly train the algorithm not to be fooled by them—but it’s costly in terms of time and computation, and puts you in an arms race with hackers. Many strategies for defending against adversarial examples haven’t proved adaptive enough; correcting against vulnerabilities one at a time is too slow. Moreover, it demonstrates a point that can be lost in the AI hype: algorithms can be fooled in ways we didn’t anticipate. If we don’t learn about these vulnerabilities until the algorithms are everywhere, serious disruption can occur. And no matter how careful you are, some vulnerabilities are likely to remain, waiting to be exploited, even if it takes years to find them.
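As a rough illustration of adversarial training (a toy sketch, not any production defense), the loop below crafts FGSM-style perturbations against the current model at each step and trains on both the clean and perturbed inputs; the data, epsilon, and hyperparameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated clusters standing in for "images" of two classes.
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w, b, lr, eps = np.zeros(2), 0.0, 0.1, 0.3

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # Craft FGSM perturbations against the *current* model...
    p = sigmoid(X @ w + b)
    X_adv = X + eps * np.sign((p - y)[:, None] * w)
    # ...then take a gradient step on the union of clean and adversarial inputs.
    X_all, y_all = np.vstack([X, X_adv]), np.concatenate([y, y])
    p_all = sigmoid(X_all @ w + b)
    w -= lr * (X_all.T @ (p_all - y_all)) / len(y_all)
    b -= lr * (p_all - y_all).mean()

acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(acc)  # clean accuracy stays high despite the adversarial augmentation
```

The cost is visible even here: every training step pays for generating attacks against itself, and the defense only covers the attack style it trained on.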

Just look at the Meltdown and Spectre vulnerabilities, which went undetected for more than 20 years but could enable hackers to steal personal information. Ultimately, the more blind faith we put into algorithms and computers—without understanding the opaque inner mechanics of how they work—the more vulnerable we will be to these forms of attack. And, as China dreams of using AI to predict crimes and enhance the police force, the potential for unjust arrests can only increase.

This is before you get into the truly nightmarish territory of “killer robots”—not the Terminator, but instead autonomous or consumer drones which could potentially be weaponized by bad actors and used to conduct attacks remotely. Some reports have indicated that terrorist organizations are already trying to do this.

As with any form of technology, new powers for humanity come with new risks. And, as with any form of technology, closing Pandora’s box will prove very difficult.

Somewhere between the excessively hyped prospects of AI that will do everything for us and AI that will destroy the world lies reality: a complex, ever-changing set of risks and rewards. The writers of the malicious AI report note that one of their key motivations is ensuring that the benefits of new technology can be delivered to people as quickly, but as safely, as possible. In the rush to exploit the potential for algorithms and create 21st-century infrastructure, we must ensure we’re not building in new dangers.

Image Credit: lolloj / Shutterstock.com

Posted in Human Robots

#432236 Why Hasn’t AI Mastered Language ...

In the myth about the Tower of Babel, people conspired to build a city and tower that would reach heaven. Their creator observed, “And now nothing will be restrained from them, which they have imagined to do.” According to the myth, God thwarted this effort by creating diverse languages so that they could no longer collaborate.

In our modern times, we’re experiencing a state of unprecedented connectivity thanks to technology. However, we’re still living under the shadow of the Tower of Babel. Language remains a barrier in business and marketing. Even though technological devices can quickly and easily connect, humans from different parts of the world often can’t.

Translation agencies step in, making presentations, contracts, outsourcing instructions, and advertisements comprehensible to all intended recipients. Some agencies also offer “localization” expertise. For instance, if a company is marketing in Quebec, the advertisements need to be in Québécois French, not European French. Risk-averse companies may be reluctant to invest in these translations. Consequently, these ventures haven’t achieved full market penetration.

Global markets are waiting, but AI-powered language translation isn’t ready yet, despite recent advancements in natural language processing and sentiment analysis. AI still has difficulties processing requests in one language, without the additional complications of translation. In November 2016, Google added a neural network to its translation tool. However, some of its translations are still socially and grammatically odd. I spoke to technologists and a language professor to find out why.

“To Google’s credit, they made a pretty massive improvement that appeared almost overnight. You know, I don’t use it as much. I will say this. Language is hard,” said Michael Housman, chief data science officer at RapportBoost.AI and faculty member of Singularity University.

He explained that the ideal scenario for machine learning and artificial intelligence is something with fixed rules and a clear-cut measure of success or failure. He named chess as an obvious example, and noted machines were able to beat the best human Go player. This happened faster than anyone anticipated because of the game’s very clear rules and limited set of moves.

Housman elaborated, “Language is almost the opposite of that. There aren’t as clearly-cut and defined rules. The conversation can go in an infinite number of different directions. And then of course, you need labeled data. You need to tell the machine to do it right or wrong.”

Housman noted that it’s inherently difficult to assign these informative labels. “Two translators won’t even agree on whether it was translated properly or not,” he said. “Language is kind of the wild west, in terms of data.”

Google’s technology is now able to consider the entirety of a sentence, as opposed to merely translating individual words. Still, the glitches linger. I asked Dr. Jorge Majfud, Associate Professor of Spanish, Latin American Literature, and International Studies at Jacksonville University, to explain why consistently accurate language translation has thus far eluded AI.

He replied, “The problem is that considering the ‘entire’ sentence is still not enough. The same way the meaning of a word depends on the rest of the sentence (more in English than in Spanish), the meaning of a sentence depends on the rest of the paragraph and the rest of the text, as the meaning of a text depends on a larger context called culture, speaker intentions, etc.”

He noted that sarcasm and irony only make sense within this widened context. Similarly, idioms can be problematic for automated translations.

“Google translation is a good tool if you use it as a tool, that is, not to substitute human learning or understanding,” he said, before offering examples of mistranslations that could occur.

“Months ago, I went to buy a drill at Home Depot and I read a sign under a machine: ‘Saw machine.’ Right below it, the Spanish translation: ‘La máquina vió,’ which means, ‘The machine did see it.’ Saw, not as a noun but as a verb in the preterit form,” he explained.

Dr. Majfud warned, “We should be aware of the fragility of their ‘interpretation.’ Because to translate is basically to interpret, not just an idea but a feeling. Human feelings and ideas that only humans can understand—and sometimes not even we, humans, understand other humans.”

He noted that cultures, gender, and even age can pose barriers to this understanding and also contended that an over-reliance on technology is leading to our cultural and political decline. Dr. Majfud mentioned that Argentinean writer Julio Cortázar used to refer to dictionaries as “cemeteries.” He suggested that automatic translators could be called “zombies.”

Erik Cambria is an academic AI researcher and assistant professor at Nanyang Technological University in Singapore. He mostly focuses on natural language processing, which is at the core of AI-powered language translation. Like Dr. Majfud, he sees the complexity and associated risks. “There are so many things that we unconsciously do when we read a piece of text,” he told me. Reading comprehension requires multiple interrelated tasks, which haven’t been accounted for in past attempts to automate translation.

Cambria continued, “The biggest issue with machine translation today is that we tend to go from the syntactic form of a sentence in the input language to the syntactic form of that sentence in the target language. That’s not what we humans do. We first decode the meaning of the sentence in the input language and then we encode that meaning into the target language.”

Additionally, there are cultural risks involved with these translations. Dr. Ramesh Srinivasan, Director of UCLA’s Digital Cultures Lab, said that new technological tools sometimes reflect underlying biases.

“There tend to be two parameters that shape how we design ‘intelligent systems.’ One is the values and you might say biases of those that create the systems. And the second is the world if you will that they learn from,” he told me. “If you build AI systems that reflect the biases of their creators and of the world more largely, you get some, occasionally, spectacular failures.”

Dr. Srinivasan said translation tools should be transparent about their capabilities and limitations. He said, “You know, the idea that a single system can take languages that I believe are very diverse semantically and syntactically from one another and claim to unite them or universalize them, or essentially make them sort of a singular entity, it’s a misnomer, right?”

Mary Cochran, co-founder of Launching Labs Marketing, sees the commercial upside. She mentioned that listings in online marketplaces such as Amazon could potentially be auto-translated and optimized for buyers in other countries.

She said, “I believe that we’re just at the tip of the iceberg, so to speak, with what AI can do with marketing. And with better translation, and more globalization around the world, AI can’t help but lead to exploding markets.”

Image Credit: igor kisselev / Shutterstock.com


#432190 In the Future, There Will Be No Limit to ...

New planets found in distant corners of the galaxy. Climate models that may improve our understanding of sea level rise. The emergence of new antimalarial drugs. These scientific advances and discoveries have been in the news in recent months.

While representing wildly divergent disciplines, from astronomy to biotechnology, they all have one thing in common: Artificial intelligence played a key role in their scientific discovery.

One of the more recent and famous examples came out of NASA at the end of 2017. The US space agency had announced an eighth planet discovered in the Kepler-90 system. Scientists had trained a neural network—a computer with a “brain” modeled on the human mind—to re-examine data from Kepler, a space-borne telescope with a four-year mission to seek out new life and new civilizations. Or, more precisely, to find habitable planets where life might just exist.

The researchers trained the artificial neural network on a set of 15,000 previously vetted signals until it could identify true planets and false positives 96 percent of the time. It then went to work on weaker signals from nearly 700 star systems with known planets.
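In spirit, the pipeline looks something like the sketch below: generate labeled light curves, extract a telltale transit feature, and classify new signals against it. This toy version uses a hand-picked threshold rather than a trained neural network, and every number in it is invented, not drawn from the Kepler data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for Kepler light curves: 200 brightness samples,
# with a shallow periodic dip injected for the "planet" cases.
def light_curve(has_planet):
    flux = 1.0 + rng.normal(0, 0.001, 200)
    if has_planet:
        flux[::20] -= 0.005          # transit: a tiny periodic drop in brightness
    return flux

# Feature: how far the dimmest samples sit below the star's median brightness.
def transit_depth(flux):
    return np.median(flux) - np.sort(flux)[:10].mean()

labels = rng.integers(0, 2, 500)     # simulated vetted signals with known answers
depths = np.array([transit_depth(light_curve(bool(l))) for l in labels])

threshold = 0.003                    # assumed decision boundary
preds = (depths > threshold).astype(int)
print((preds == labels).mean())      # fraction of signals classified correctly
```

A neural network replaces the hand-picked feature and threshold with ones learned from the 15,000 vetted examples, which is what lets it dig true planets out of much weaker signals.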

The machine detected Kepler 90i—a hot, rocky planet that orbits its sun about every two Earth weeks—through a nearly imperceptible change in brightness captured when the planet passes in front of its star. It also found a sixth Earth-sized planet in the Kepler-80 system.

AI Handles Big Data
The application of AI to science is being driven by three great advances in technology, according to Ross King from the Manchester Institute of Biotechnology at the University of Manchester, leader of a team that developed an artificially intelligent “scientist” called Eve.

Those three advances include much faster computers, big datasets, and improved AI methods, King said. “These advances increasingly give AI superhuman reasoning abilities,” he told Singularity Hub by email.

AI systems can remember vast numbers of facts without error and extract information effortlessly from millions of scientific papers, not to mention exhibit flawless logical reasoning and near-optimal probabilistic reasoning, King says.

AI systems also beat humans when it comes to dealing with huge, diverse amounts of data.

That’s partly what attracted a team of glaciologists to turn to machine learning to untangle the factors involved in how heat from Earth’s interior might influence the ice sheet that blankets Greenland.

Algorithms juggled 22 geologic variables—such as bedrock topography, crustal thickness, magnetic anomalies, rock types, and proximity to features like trenches, ridges, young rifts, and volcanoes—to predict geothermal heat flux under the ice sheet throughout Greenland.

The machine learning model, for example, predicts elevated heat flux upstream of Jakobshavn Glacier, the fastest-moving glacier in the world.

“The major advantage is that we can incorporate so many different types of data,” explains Leigh Stearns, associate professor of geology at Kansas University, whose research takes her to the polar regions to understand how and why Earth’s great ice sheets are changing, questions directly related to future sea level rise.

“All of the other models just rely on one parameter to determine heat flux, but the [machine learning] approach incorporates all of them,” Stearns told Singularity Hub in an email. “Interestingly, we found that there is not just one parameter…that determines the heat flux, but a combination of many factors.”
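Stearns’ point about single-parameter versus multi-parameter models is easy to see in a toy regression. The variable names and coefficients below are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: heat flux depends on a *combination* of geologic
# variables, so no single one explains it well on its own.
n = 1000
topography = rng.normal(size=n)
crust_thickness = rng.normal(size=n)
magnetic_anomaly = rng.normal(size=n)
heat_flux = (0.5 * topography - 0.8 * crust_thickness
             + 0.3 * magnetic_anomaly + rng.normal(0, 0.1, n))

def r_squared(X, y):
    X1 = np.column_stack([X, np.ones(len(y))])        # add an intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)     # ordinary least squares
    resid = y - X1 @ coef
    return 1 - resid.var() / y.var()

# A single-parameter model explains only part of the variance...
print(round(r_squared(topography[:, None], heat_flux), 2))
# ...while combining all the variables recovers nearly all of it.
print(round(r_squared(np.column_stack(
    [topography, crust_thickness, magnetic_anomaly]), heat_flux), 2))
```

Machine learning methods extend this same idea to 22 variables with nonlinear interactions, which is where hand-built one-parameter models give out.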

The research was published last month in Geophysical Research Letters.

Stearns says her team hopes to apply high-powered machine learning to characterize glacier behavior over both short and long-term timescales, thanks to the large amounts of data that she and others have collected over the last 20 years.

Emergence of Robot Scientists
While Stearns sees machine learning as another tool to augment her research, King believes artificial intelligence can play a much bigger role in scientific discoveries in the future.

“I am interested in developing AI systems that autonomously do science—robot scientists,” he said. Such systems, King explained, would automatically originate hypotheses to explain observations, devise experiments to test those hypotheses, physically run the experiments using laboratory robotics, and even interpret the results. The conclusions would then influence the next cycle of hypotheses and experiments.
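The hypothesize-experiment-interpret cycle King describes can be sketched as a simple closed loop. The “lab” below is a simulation stub standing in for laboratory robotics, and the compound names and numbers are made up:

```python
import random

random.seed(0)

# Hidden ground truth for the simulated assay (a real robot scientist
# would of course not know this in advance).
TRUE_INHIBITOR = "compound_17"

def run_experiment(compound):
    # Simulated assay: microbial growth is low only for the true inhibitor.
    return 0.1 if compound == TRUE_INHIBITOR else random.uniform(0.8, 1.0)

candidates = [f"compound_{i}" for i in range(25)]
results = {}

while candidates:
    hypothesis = candidates.pop(0)           # 1. originate a hypothesis
    growth = run_experiment(hypothesis)      # 2. physically run the experiment
    results[hypothesis] = growth             # 3. interpret the result...
    if growth < 0.5:                         # ...and stop once it's confirmed
        print(f"{hypothesis} inhibits growth "
              f"({len(results)} experiments)")
        break
```

A real system like Eve adds the interesting part: using the accumulated results to choose which hypothesis to test next, rather than marching through candidates in order.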

His AI scientist Eve recently helped researchers discover that triclosan, an ingredient commonly found in toothpaste, could be used as an antimalarial drug against certain strains that have developed a resistance to other common drug therapies. The research was published in the journal Scientific Reports.

Automation using artificial intelligence for drug discovery has become a growing area of research, as the machines can work orders of magnitude faster than any human. AI is also being applied in related areas, such as synthetic biology for the rapid design and manufacture of microorganisms for industrial uses.

King argues that machines are better suited to unravel the complexities of biological systems, as even the most “simple” organisms are host to thousands of genes, proteins, and small molecules that interact in complicated ways.

“Robot scientists and semi-automated AI tools are essential for the future of biology, as there are simply not enough human biologists to do the necessary work,” he said.

Creating Shockwaves in Science
The use of machine learning, neural networks, and other AI methods can often get better results in a fraction of the time it would normally take to crunch data.

For instance, scientists at the National Center for Supercomputing Applications, located at the University of Illinois at Urbana-Champaign, have a deep learning system for the rapid detection and characterization of gravitational waves. Gravitational waves are disturbances in spacetime, emanating from big, high-energy cosmic events, such as the massive explosion of a star known as a supernova. The “Holy Grail” of this type of research is to detect gravitational waves from the Big Bang.

Dubbed Deep Filtering, the method allows real-time processing of data from LIGO, a gravitational wave observatory comprising two enormous laser interferometers located thousands of miles apart in Washington state and Louisiana. The research was published in Physics Letters B.

In a more down-to-earth example, scientists published a paper last month in Science Advances on the development of a neural network called ConvNetQuake to detect and locate minor earthquakes from ground motion measurements called seismograms.

ConvNetQuake uncovered 17 times more earthquakes than traditional methods. Scientists say the new method is particularly useful in monitoring small-scale seismic activity, which has become more frequent, possibly due to fracking activities that involve injecting wastewater deep underground.
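The core operation in a convolutional detector like ConvNetQuake is sliding a filter along the seismogram and flagging where it responds strongly. The hand-built matched filter below (not the trained network itself) shows the idea on synthetic data; the waveform and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy quake waveform: a decaying oscillation 40 samples long.
template = (np.sin(np.linspace(0, 4 * np.pi, 40))
            * np.exp(-np.linspace(0, 3, 40)))

# Synthetic seismogram: noise, with one small event injected at sample 1200.
seismogram = rng.normal(0, 0.2, 2000)
seismogram[1200:1240] += template

# Slide the template over the trace; the correlation spikes at the event.
scores = np.correlate(seismogram, template, mode="valid")
detection = int(np.argmax(scores))
print(detection)   # ≈ 1200, the injected event's onset
```

A convolutional network stacks many such filters and learns their shapes from labeled seismograms, which is what lets it pick out quakes too faint for a fixed template.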

King says he believes that in the long term there will be no limit to what AI can accomplish in science. He and his team, including Eve, are currently working on developing cancer therapies under a grant from DARPA.

“Robot scientists are getting smarter and smarter; human scientists are not,” he says. “Indeed, there is arguably a case that human scientists are less good. I don’t see any scientist alive today of the stature of a Newton or Einstein—despite the vast number of living scientists. The Physics Nobel [laureate] Frank Wilczek is on record as saying (10 years ago) that in 100 years’ time the best physicist will be a machine. I agree.”

Image Credit: Romaset / Shutterstock.com


#431851 Bend it like Kengoro and Kenshiro

These Japanese humanoids can replicate flexible human-like movement during physical workouts like push-ups, crunches, stretches and other whole-body exercises, to help researchers better understand how humans move during athletic sports, aid in the development of artificial limbs and whole bodies, …


#432036 The Power to Upgrade Our Own Biology Is ...

Upgrading our biology may sound like science fiction, but attempts to improve humanity actually date back thousands of years. Every day, we enhance ourselves through seemingly mundane activities such as exercising, meditating, or consuming performance-enhancing drugs, such as caffeine or Adderall. However, the tools with which we upgrade our biology are improving at an accelerating rate and becoming increasingly invasive.

In recent decades, we have developed a wide array of powerful methods, such as genetic engineering and brain-machine interfaces, that are redefining our humanity. In the short run, such enhancement technologies have medical applications and may be used to treat many diseases and disabilities. Additionally, in the coming decades, they could allow us to boost our physical abilities or even digitize human consciousness.

What’s New?
Many futurists argue that our devices, such as our smartphones, are already an extension of our cortex and in many ways an abstract form of enhancement. According to philosophers Andy Clark and David Chalmers’ theory of extended mind, we use technology to expand the boundaries of the human mind beyond our skulls.

One can argue that having access to a smartphone enhances one’s cognitive capacities and abilities, and is an indirect form of enhancement in its own right, a kind of abstract brain-machine interface. Beyond that, wearable devices and computers are already on the market, and people such as athletes use them to track and improve their performance.

However, these interfaces are becoming less abstract.

Not long ago, Elon Musk announced a new company, Neuralink, with the goal of merging the human mind with AI. The past few years have seen remarkable developments in both the hardware and software of brain-machine interfaces. Experts are designing more intricate electrodes while programming better algorithms to interpret neural signals. Scientists have already succeeded in enabling paralyzed patients to type with their minds, and are even allowing brains to communicate with one another purely through brainwaves.

Ethical Challenges of Enhancement
There are many social and ethical implications of such advancements.

One of the most fundamental issues with cognitive and physical enhancement techniques is that they contradict the very definition of merit and success that society has relied on for millennia. Many forms of performance-enhancing drugs have been considered “cheating” for the longest time.

But perhaps we ought to revisit some of our fundamental assumptions as a society.

For example, we like to credit hard work and talent in a fair manner, where “fair” generally implies that an individual has acted in a way that merits their rewards. If you are talented and successful, it is considered to be because you chose to work hard and take advantage of the opportunities available to you. But by these standards, how much of our accomplishments can we truly be credited for?

For instance, the genetic lottery can have an enormous impact on an individual’s predisposition and personality, which can in turn affect factors such as motivation, reasoning skills, and other mental abilities. Many people are born with a natural ability or a physique that gives them an advantage in a particular area or predisposes them to learn faster. But is it justified to reward someone for excellence if their genes had a pivotal role in their path to success?

Beyond that, there are already many ways in which we take “shortcuts” to better mental performance. Seemingly mundane activities like drinking coffee, meditating, exercising, or sleeping well can boost one’s performance in any given area and are tolerated by society. Even the use of language can have positive physical and psychological effects on the human brain, which can be liberating to the individual and immensely beneficial to society at large. And let’s not forget that some of us are born with far more access to literacy and education than others.

Given all these reasons, one could argue that cognitive abilities and talents are currently derived more from uncontrollable factors and luck than we like to admit. If anything, technologies like brain-machine interfaces can enhance individual autonomy and allow one a choice of how capable they become.

As Karim Jebari points out (pdf), if a certain characteristic or trait is required to perform a particular role and an individual lacks this trait, would it be wrong to implement the trait through brain-machine interfaces or genetic engineering? How is this different from any conventional form of learning or acquiring a skill? If anything, this would be removing limitations on individuals that result from factors outside their control, such as biological predisposition (or even traits induced from traumatic experiences) to act or perform in a certain way.

Another major ethical concern is equality. As with any other emerging technology, there are valid concerns that cognitive enhancement tech will benefit only the wealthy, thus exacerbating current inequalities. This is where public policy and regulations can play a pivotal role in the impact of technology on society.

Enhancement technologies can either contribute to inequality or allow us to solve it. Educating and empowering the under-privileged can happen at a much more rapid rate, helping the overall rate of human progress accelerate. The “normal range” for human capacity and intelligence, however it is defined, could shift dramatically towards more positive trends.

Many have also raised concerns over the negative applications of government-led biological enhancement, including eugenics-like movements and super-soldiers. Naturally, there are also issues of safety, security, and well-being, especially within the early stages of experimentation with enhancement techniques.

Brain-machine interfaces, for instance, could have implications for autonomy. The interface involves using information extracted from the brain to stimulate or modify systems in order to accomplish a goal. This part of the process can be enhanced by implementing an artificial intelligence system onto the interface, which opens the possibility of a third party manipulating an individual’s personality, emotions, and desires.

A Tool For Transcendence
It’s important to discuss these risks, not so that we begin to fear and avoid such technologies, but so that we continue to advance in a way that minimizes harm and allows us to optimize the benefits.

Stephen Hawking notes that “with genetic engineering, we will be able to increase the complexity of our DNA, and improve the human race.” Indeed, the potential advantages of modifying biology are revolutionary. Doctors would gain access to a powerful tool to tackle disease, allowing us to live longer and healthier lives. We might be able to extend our lifespan and tackle aging, perhaps a critical step to becoming a space-faring species. We may begin to modify the brain’s building blocks to become more intelligent and capable of solving grand challenges.

In their book Evolving Ourselves, Juan Enriquez and Steve Gullans describe a world where evolution is no longer driven by natural processes. Instead, it is driven by human choices, through what they call unnatural selection and non-random mutation. Human enhancement is bringing us closer to such a world—it could allow us to take control of our evolution and truly shape the future of our species.

Image Credit: GrAl / Shutterstock.com
