Tag Archives: hacking
#432249 New Malicious AI Report Outlines Biggest ...
Everyone’s talking about deep fakes: audio-visual imitations of people, generated by increasingly powerful neural networks, that will soon be indistinguishable from the real thing. Politicians are regularly laid low by scandals that arise from audio-visual recordings. Now watch the synthetic footage of Barack Obama assembled from his speeches, or listen to Lyrebird’s voice impersonations. Today, or in the very near future, you could easily create a forgery indistinguishable from the real thing. What would that do to politics?
Once the internet is flooded with plausible-seeming tapes and recordings of this sort, how are we going to decide what’s real and what isn’t? Democracy, and our ability to counteract threats, is already threatened by a lack of agreement on the facts. Once you can’t believe the evidence of your senses anymore, we’re in serious trouble. Ultimately, you can dream up all kinds of utterly terrifying possibilities for these deep fakes, from fake news to blackmail.
How to solve the problem? Some have suggested that social media platforms like Facebook and Twitter should run software that probes every video to determine whether it’s a deep fake and labels the fakes. But this would be computationally intensive. Plus, imagine a case where we have such a system and a fake is “verified as real” by news media algorithms that clever hackers have fooled.
The other alternative is even more dystopian: you can prove something isn’t true simply by always having an alibi. Lawfare describes a “solution” where those concerned about deep fakes have all of their movements and interactions recorded. So to avoid being blackmailed or having your reputation ruined, you just consent to some company engaging in 24/7 surveillance of everything you say or do and having total power over that information. What could possibly go wrong?
The point is, just as you don’t need human-level general AI or humanoid robotics to create systems that disrupt the world of work, you don’t need a general intelligence to threaten security and wreak havoc on society. AI researcher Andrew Ng says that worrying about the risks from superintelligent AI is like “worrying about overpopulation on Mars.” Superintelligence may be a distant prospect, but there are clearly risks that arise even from the simple algorithms we have today.
The looming issue of deep fakes is just one of the threats considered by the new malicious AI report, which has co-authors from the Future of Humanity Institute and the Centre for the Study of Existential Risk (among other organizations). They limit their focus to the technologies of the next five years.
Some of the concerns the report explores are enhancements to familiar threats.
Automated hacking can get better and smarter, with algorithms that adapt to changing security protocols. “Phishing emails,” which scam people by impersonating someone they trust or an official organization, could be generated en masse and made more convincing by scraping data from social media. Standard phishing works by sending such a high volume of emails that even a very low success rate is profitable. Spear phishing aims at specific targets by impersonating trusted contacts such as family members, but it is labor intensive. If AI algorithms let every phishing scam become that sharp at mass scale, many more people are going to get scammed.
Then there are novel threats that come from our own increasing use of and dependence on artificial intelligence to make decisions.
These algorithms may be smart in some ways, but as any human knows, computers utterly lack common sense; they can be fooled. A particularly scary technique is the adversarial example. Machine learning algorithms are often used for image recognition, but if you know a little about how an algorithm is structured, you can construct precisely the right noise to add to an image and fool the machine. The two images can be almost completely indistinguishable to the human eye, yet the cleverly calculated noise leads the algorithm to classify an image of a panda as an image of a gibbon (in the OpenAI example). Research conducted by OpenAI demonstrates that you can fool algorithms even by printing out examples on stickers.
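For a toy linear model, the whole trick fits in a few lines. The gradient of a linear score with respect to the input is just the weight vector, so the classic “fast gradient sign” perturbation nudges each pixel slightly in the direction that flips the decision. The weights, “pixel” values, and labels below are invented for illustration; they stand in for a real network, not the actual model OpenAI attacked.

```python
# Sketch of an adversarial example against a toy linear classifier.
# For a linear model the gradient of the score w.r.t. the input is the
# weight vector itself, so the fast-gradient-sign step is eps * sign(w).

def classify(weights, x):
    """Linear score: positive means 'panda', negative means 'gibbon'."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return "panda" if score > 0 else "gibbon"

def adversarial(weights, x, eps):
    """Nudge each input dimension by eps against the gradient."""
    return [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

weights = [0.5, -0.3, 0.8, 0.2]   # invented model weights
image = [0.2, 0.1, 0.3, 0.4]      # stand-in for pixel values

print(classify(weights, image))                             # panda
print(classify(weights, adversarial(weights, image, 0.5)))  # gibbon
```

A real attack computes the same sign-of-gradient step through a deep network via backpropagation and uses an eps small enough that the change is invisible to people.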
Now imagine that instead of tricking a computer into thinking a panda is a gibbon, you fool it into thinking a stop sign isn’t there, or that the back of someone’s car is a nice open stretch of road. Because adversarial images are almost indistinguishable to humans, by the time anyone notices a road sign has been “hacked,” it could already be too late.
As OpenAI freely admits, working out whether we’d be able to tame a superintelligent AI is a hard problem. It looks all the more difficult when you realize some of our best algorithms can be fooled by stickers; even “modern simple algorithms can behave in ways we do not intend.”
There are ways to defend against these attacks.
Adversarial training generates lots of adversarial examples and explicitly trains the algorithm not to be fooled by them, but it’s costly in time and computation, and it puts you in an arms race with hackers. Many strategies for defending against adversarial examples haven’t proved adaptive enough; correcting vulnerabilities one at a time is too slow. Moreover, this demonstrates a point that can be lost in the AI hype: algorithms can be fooled in ways we didn’t anticipate. If we don’t learn about these vulnerabilities until the algorithms are everywhere, serious disruption can occur. And no matter how careful you are, some vulnerabilities are likely to remain to be exploited, even if it takes years to find them.
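As a concrete sketch, here is adversarial training on a toy perceptron. The data, eps, and learning rate are invented for illustration; the point is only the shape of the loop: at every step, the clean training example is replaced by its worst-case perturbation against the current model, so the model learns to classify the perturbed points too.

```python
# Minimal sketch of adversarial training: before each update, replace the
# clean example with its worst-case perturbation against the *current*
# model. (Toy perceptron and made-up 2D data; eps bounds the attacker.)

def perturb(w, x, y, eps):
    """Shift x by eps per coordinate in the direction that hurts label y."""
    return [xi - y * eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

def train_robust(data, eps=0.2, lr=0.1, epochs=50):
    w = [0.0] * len(data[0][0])
    for _ in range(epochs):
        for x, y in data:                      # y is +1 or -1
            x_adv = perturb(w, x, y, eps)      # attack the current model
            score = sum(wi * xi for wi, xi in zip(w, x_adv))
            if y * score <= 0:                 # perturbed point misclassified
                w = [wi + lr * y * xi for wi, xi in zip(w, x_adv)]
    return w

data = [([1.0, 0.2], 1), ([0.9, 0.1], 1), ([0.1, 1.0], -1), ([0.2, 0.9], -1)]
w = train_robust(data)

# The trained model classifies the clean points correctly.
clean_ok = all(y * sum(wi * xi for wi, xi in zip(w, x)) > 0 for x, y in data)
print(clean_ok)   # True
```

The cost noted above shows up even here: every update requires an extra attack computation, and the defense only covers perturbations of the kind (and size) it was trained against.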
Just look at the Meltdown and Spectre vulnerabilities, which lurked unnoticed in processors for more than 20 years and could enable hackers to steal personal information. Ultimately, the more blind faith we put into algorithms and computers, without understanding the opaque inner mechanics of how they work, the more vulnerable we will be to these forms of attack. And, as China dreams of using AI to predict crimes and enhance the police force, the potential for unjust arrests can only increase.
This is before you get into the truly nightmarish territory of “killer robots”—not the Terminator, but instead autonomous or consumer drones which could potentially be weaponized by bad actors and used to conduct attacks remotely. Some reports have indicated that terrorist organizations are already trying to do this.
As with any form of technology, new powers for humanity come with new risks. And, as with any form of technology, closing Pandora’s box will prove very difficult.
Somewhere between the excessively hyped prospects of AI that will do everything for us and AI that will destroy the world lies reality: a complex, ever-changing set of risks and rewards. The writers of the malicious AI report note that one of their key motivations is ensuring that the benefits of new technology can be delivered to people as quickly, but as safely, as possible. In the rush to exploit the potential for algorithms and create 21st-century infrastructure, we must ensure we’re not building in new dangers.
Image Credit: lolloj / Shutterstock.com
#431427 Why the Best Healthcare Hacks Are the ...
Technology has the potential to solve some of our most intractable healthcare problems. In fact, it’s already doing so, with inventions bringing us closer to a medical Tricorder, progress toward 3D-printed organs, and AIs that can do point-of-care diagnosis.
No doubt these applications of cutting-edge tech will continue to push the needle on progress in medicine, diagnosis, and treatment. But what if some of the healthcare hacks we need most aren’t high-tech at all?
According to Dr. Darshak Sanghavi, this is exactly the case. In a talk at Singularity University’s Exponential Medicine last week, Sanghavi told the audience, “We often think in extremely complex ways, but I think a lot of the improvements in health at scale can be done in an analog way.”
Sanghavi is the chief medical officer and senior vice president of translation at OptumLabs, and was previously director of preventive and population health at the Center for Medicare and Medicaid Innovation, where he oversaw the development of large pilot programs aimed at improving healthcare costs and quality.
“How can we improve health at scale, not for only a small number of people, but for entire populations?” Sanghavi asked. With programs that benefit a small group of people, he explained, what tends to happen is that the average health of a population improves, but the disparities across the group worsen.
“My mantra became, ‘The denominator is everybody,’” he said. He shared details of some low-tech but crucial fixes he believes could vastly benefit the US healthcare system.
1. Regulatory Hacking
Healthcare regulations ultimately drive many aspects of patient care, for better or worse. For worse, because the mind-boggling complexity of regulations (exhibit A: the Affordable Care Act is reportedly about 20,000 pages long) can make it hard for people to get the care they need at a cost they can afford. For better because, as Sanghavi explained, tweaking those regulations in the right way can produce across-the-board improvements in a given population’s health.
An adjustment to Medicare hospitalization rules makes for a relevant example. The code was updated to state that if people who left the hospital were re-admitted within 30 days, that hospital had to pay a penalty. The result was hospitals taking more care to ensure patients were released not only in good health, but also with a solid understanding of what they had to do to take care of themselves going forward. “Here, arguably the writing of a few lines of regulatory code resulted in a remarkable decrease in 30-day re-admissions, and the savings of several billion dollars,” Sanghavi said.
2. Long-Term Focus
It’s easy to focus on healthcare hacks that have immediate, visible results—but what about fixes whose benefits take years to manifest? How can we motivate hospitals, regulators, and doctors to take action when they know they won’t see changes anytime soon?
“I call this the reality TV problem,” Sanghavi said. “Reality shows don’t really care about who’s the most talented recording artist—they care about getting the most viewers. That is exactly how we think about health care.”
Sanghavi’s team wanted to address this problem for heart attacks. They found they could reliably estimate someone’s 10-year risk of having a heart attack from a simple risk profile. Rather than monitoring patients’ cholesterol, blood pressure, weight, and other factors individually, the team took the average 10-year risk across entire provider panels, then made providers responsible for lowering that average risk across their populations.
“Every percentage point you lower that risk, by hook or by crook: you get some people to stop smoking, you get some people on cholesterol medication. It’s patient-centered decision-making, and the provider then makes money. This is the world’s first predictive analytic model that’s actually being paid for at scale,” he said.
3. Aligned Incentives
If hospitals are held accountable for the health of the communities they’re based in, those hospitals need to have the right incentives to follow through. “Hospitals have to spend money on community benefit, but linking that benefit to a meaningful population health metric can catalyze significant improvements,” Sanghavi said.
Darshak Sanghavi speaking at Singularity University’s 2017 Exponential Medicine Summit in San Diego, CA.
He used smoking cessation as an example. His team designed a program where hospitals were given a score (determined by the Centers for Disease Control and Prevention) based on the smoking rate in the counties where they’re located, then given monetary incentives to improve their score. Improving their score, in turn, resulted in better health for their communities, which meant fewer patients to treat for smoking-related health problems.
4. Social Determinants of Health
Social determinants of health include factors like housing, income, family, and food security. The answer to getting people to pay attention to these factors at scale, and creating aligned incentives, Sanghavi said, is “Very simple. We just have to measure it to start with, and measure it universally.”
His team was behind a $157 million pilot program called Accountable Health Communities that went live this year. The program requires that all Medicare and Medicaid beneficiaries be screened for various social determinants of health. With all that data being collected, analysts can pinpoint local trends, then target funds to address the underlying problem, whether it’s job training, drug use, or nutritional education. “You’re then free to invest the dollars where they’re needed…this is how we can improve health at scale, with very simple changes in the incentive structures that are created,” he said.
5. ‘Securitizing’ Public Health
Sanghavi’s final point tied back to his discussion of aligning incentives. As misguided as it may seem, the reality is that financial incentives can make a huge difference in healthcare outcomes, from both a patient and a provider perspective.
Sanghavi’s team did an experiment in which they created outcome benchmarks for three major health problems that exist across geographically diverse areas: smoking, adolescent pregnancy, and binge drinking. The team proposed measuring the baseline of these issues then creating what they called a social impact bond. If communities were able to lower their frequency of these conditions by a given percent within a stated period of time, they’d get paid for it.
“What that did was essentially say, ‘you have a buyer for this outcome if you can achieve it,’” Sanghavi said. “And you can try to get there in any way you like.” The program is currently in CMS clearance.
AI and Robots Not Required
Using robots to perform surgery and artificial intelligence to diagnose disease will undoubtedly benefit doctors and patients around the US and the world. But Sanghavi’s talk made it clear that our healthcare system needs much more than this, and that improving population health on a large scale is really a low-tech project—one involving more regulatory and financial innovation than technological innovation.
“The things that get measured are the things that get changed,” he said. “If we choose the right outcomes to predict long-term benefit, and we pay for those outcomes, that’s the way to make progress.”
Image Credit: Wonderful Nature / Shutterstock.com
#431238 AI Is Easy to Fool—Why That Needs to ...
Con artistry is one of the world’s oldest and most innovative professions, and it may soon have a new target. Research suggests artificial intelligence may be uniquely susceptible to tricksters, and as its influence in the modern world grows, attacks against it are likely to become more common.
The root of the problem lies in the fact that artificial intelligence algorithms learn about the world in very different ways than people do, and so slight tweaks to the data fed into these algorithms can throw them off completely while remaining imperceptible to humans.
Much of the research into this area has been conducted on image recognition systems, in particular those relying on deep learning neural networks. These systems are trained by showing them thousands of examples of images of a particular object until they can extract common features that allow them to accurately spot the object in new images.
But the features they extract are not necessarily the same high-level features a human would be looking for, like the word STOP on a sign or a tail on a dog. These systems analyze images at the individual pixel level to detect patterns shared between examples. These patterns can be obscure combinations of pixel values, in small pockets or spread across the image, that would be impossible to discern for a human, but highly accurate at predicting a particular object.
What this means is that by identifying these patterns and overlaying them over a different image, an attacker can trick the object recognition algorithm into seeing something that isn’t there, without these alterations being obvious to a human. This kind of manipulation is known as an “adversarial attack.”
Early attempts to trick image recognition systems this way required access to the algorithm’s inner workings to decipher these patterns. But in 2016 researchers demonstrated a “black box” attack that enabled them to trick such a system without knowing its inner workings.
By feeding the system doctored images and seeing how it classified them, they were able to work out what it was focusing on and therefore generate images they knew would fool it. Importantly, the doctored images were not obviously different to human eyes.
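The probing loop is easy to sketch. Here the “model” is a hypothetical linear scorer whose weights the attacker never reads; the attacker only queries it, bumps one input dimension at a time to see which way the score moves, and then pushes every dimension the other way. Real black-box attacks are far more query-efficient, but the principle is the same.

```python
# Black-box probing sketch: the attacker only calls score(), never sees
# the weights, and recovers a fooling direction by finite differences.

def make_black_box():
    hidden_w = [0.6, -0.4, 0.7]          # the attacker never sees these
    def score(x):                         # >0 means "classified as a cat"
        return sum(w * xi for w, xi in zip(hidden_w, x))
    return score

def probe_attack(score, x, eps=0.4, delta=1e-3):
    adv = list(x)
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += delta
        # If bumping this dimension raises the score, push it the other way.
        direction = -1 if score(bumped) > score(x) else 1
        adv[i] += eps * direction
    return adv

score = make_black_box()
image = [0.5, 0.2, 0.4]                        # stand-in for pixel values
print(score(image) > 0)                        # True: starts as a "cat"
print(score(probe_attack(score, image)) > 0)   # False: fooled
```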
These approaches were tested by feeding doctored image data directly into the algorithm, but more recently, similar approaches have been applied in the real world. Last year it was shown that printouts of doctored images that were then photographed on a smartphone successfully tricked an image classification system.
Another group showed that wearing specially designed, psychedelically-colored spectacles could trick a facial recognition system into thinking people were celebrities. In August scientists showed that adding stickers to stop signs in particular configurations could cause a neural net designed to spot them to misclassify the signs.
These last two examples highlight some of the potential nefarious applications for this technology. Getting a self-driving car to miss a stop sign could cause an accident, either for insurance fraud or to do someone harm. If facial recognition becomes increasingly popular for biometric security applications, being able to pose as someone else could be very useful to a con artist.
Unsurprisingly, there are already efforts to counteract the threat of adversarial attacks. In particular, it has been shown that deep neural networks can be trained to detect adversarial images. One study from the Bosch Center for AI demonstrated such a detector, an adversarial attack that fools the detector, and a training regime for the detector that nullifies the attack, hinting at the kind of arms race we are likely to see in the future.
While image recognition systems provide an easy-to-visualize demonstration, they’re not the only machine learning systems at risk. The techniques used to perturb pixel data can be applied to other kinds of data too.
Chinese researchers showed that adding specific words to a sentence or misspelling a word can completely throw off machine learning systems designed to analyze what a passage of text is about. Another group demonstrated that garbled sounds played over speakers could make a smartphone running the Google Now voice command system visit a particular web address, which could be used to download malware.
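The text version of the attack is the easiest to see. Below is a deliberately crude keyword “sentiment” model (the word list and example texts are invented for illustration, not taken from the research above): because it only knows exact vocabulary, a one-character misspelling that any reader glosses over makes the offending word invisible to it.

```python
# Toy illustration of the text attack: a bag-of-words model counts known
# negative words, so a single misspelling a human barely notices removes
# the word from the model's vocabulary entirely.

NEGATIVE = {"terrible", "awful", "scam"}

def flag_negative(text):
    words = text.lower().split()
    return sum(w.strip(".,!") in NEGATIVE for w in words) >= 1

original = "This product is terrible!"
evasive  = "This product is terrib1e!"   # '1' for 'l': same to a reader

print(flag_negative(original))   # True
print(flag_negative(evasive))    # False: the classifier no longer sees it
```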
The garbled-audio trick points toward one of the more worrying and probable near-term applications for this approach: bypassing cybersecurity defenses. The industry is increasingly using machine learning and data analytics to identify malware and detect intrusions, but these systems are also highly susceptible to trickery.
At this summer’s DEF CON hacking convention, a security firm demonstrated they could bypass anti-malware AI using a similar approach to the earlier black box attack on the image classifier, but super-powered with an AI of their own.
Their system fed malicious code to the antivirus software and then noted the score it was given. It then used genetic algorithms to iteratively tweak the code until it was able to bypass the defenses while maintaining its function.
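The shape of that search loop can be sketched with toy stand-ins (an invented five-feature “detector” and a placeholder functionality check; no real malware or antivirus code is involved). Mutate a candidate, query the detector for its score, and keep only mutations that lower the score while the sample still “works”:

```python
import random

random.seed(0)   # deterministic toy run

def detector_score(features):
    """Toy detector: weighs how 'suspicious' each feature looks."""
    suspicious = [0.9, 0.8, 0.7, 0.1, 0.1]
    return sum(w * f for w, f in zip(suspicious, features))

def still_works(features):
    """Placeholder functionality check: feature 0 must stay above 0.5."""
    return features[0] >= 0.5

def evolve(features, steps=300):
    best = list(features)
    for _ in range(steps):
        cand = list(best)
        i = random.randrange(len(cand))            # mutate one feature
        cand[i] = max(0.0, min(1.0, cand[i] + random.uniform(-0.2, 0.2)))
        # Keep the mutation only if it evades better AND still "works."
        if still_works(cand) and detector_score(cand) < detector_score(best):
            best = cand
    return best

sample = [1.0, 1.0, 1.0, 1.0, 1.0]
evolved = evolve(sample)
print(detector_score(evolved) < detector_score(sample))  # True: score driven down
print(still_works(evolved))                              # True: still functional
```

A real genetic algorithm would also keep a population and recombine candidates; this single-candidate hill climb is the minimal version of the same mutate-score-select pressure.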
All the approaches noted so far are focused on tricking pre-trained machine learning systems, but another approach of major concern to the cybersecurity industry is that of “data poisoning.” This is the idea that introducing false data into a machine learning system’s training set will cause it to start misclassifying things.
This could be particularly challenging for things like anti-malware systems that are constantly being updated to take into account new viruses. A related approach bombards systems with data designed to generate false positives so the defenders recalibrate their systems in a way that then allows the attackers to sneak in.
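Poisoning is just as easy to sketch. The nearest-centroid “detector” and the 2D feature points below are invented stand-ins: slipping mislabeled points into the benign training data drags the benign centroid toward malicious territory until a malicious sample slips through.

```python
# Toy data-poisoning sketch: a nearest-centroid classifier is retrained
# on data containing mislabeled points, shifting the benign centroid
# until a malicious sample is classified as benign.

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(benign, malicious, x):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    cb, cm = centroid(benign), centroid(malicious)
    return "benign" if dist2(x, cb) < dist2(x, cm) else "malicious"

benign    = [[0.1, 0.1], [0.2, 0.0], [0.0, 0.2]]
malicious = [[0.9, 0.9], [0.8, 1.0], [1.0, 0.8]]
sample    = [0.7, 0.7]                      # clearly malicious-looking

print(classify(benign, malicious, sample))      # malicious

# Poisoning: attacker slips malicious-looking points in as "benign."
poisoned = benign + [[0.9, 0.9]] * 6
print(classify(poisoned, malicious, sample))    # benign
```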
How likely it is that these approaches will be used in the wild will depend on the potential reward and the sophistication of the attackers. Most of the techniques described above require high levels of domain expertise, but it’s becoming ever easier to access training materials and tools for machine learning.
Simpler versions of machine learning have been at the heart of email spam filters for years, and spammers have developed a host of innovative workarounds to circumvent them. As machine learning and AI increasingly embed themselves in our lives, the rewards for learning how to trick them will likely outweigh the costs.
Image Credit: Nejron Photo / Shutterstock.com
#431142 Will Privacy Survive the Future?
Technological progress has radically transformed our concept of privacy. How we share information and display our identities has changed as we’ve migrated to the digital world.
As the Guardian states, “We now carry with us everywhere devices that give us access to all the world’s information, but they can also offer almost all the world vast quantities of information about us.” We are all leaving digital footprints as we navigate through the internet. While sometimes this information can be harmless, it’s often valuable to various stakeholders, including governments, corporations, marketers, and criminals.
The ethical debate around privacy is complex. The reality is that our definition and standards for privacy have evolved over time, and will continue to do so in the next few decades.
Implications of Emerging Technologies
Protecting privacy will only become more challenging as we experience the emergence of technologies such as virtual reality, the Internet of Things, brain-machine interfaces, and much more.
Virtual reality headsets are already gathering information about users’ locations and physical movements. In the future, all of our emotional experiences, reactions, and interactions in the virtual world could be captured and analyzed. As virtual reality becomes more immersive and indistinguishable from physical reality, technology companies will be able to gather an unprecedented amount of data.
It doesn’t end there. The Internet of Things will be able to gather live data from our homes, cities and institutions. Drones may be able to spy on us as we live our everyday lives. As the amount of genetic data gathered increases, the privacy of our genes, too, may be compromised.
It gets even more concerning when we look farther into the future. As companies like Neuralink attempt to merge the human brain with machines, we are left with powerful implications for privacy. Brain-machine interfaces by nature operate by extracting information from the brain and manipulating it in order to accomplish goals. There are many parties that can benefit and take advantage of the information from the interface.
Marketing companies, for instance, would take an interest in better understanding how consumers think, and consequently in influencing those thoughts. Employers could use the information to find new ways to improve productivity or even to monitor their employees. There will notably be risks of “brain hacking,” which we must take extreme precautions against. However, it is important to note that lesser versions of these risks already exist in the form of phone hacking, identity fraud, and the like.
A New Much-Needed Definition of Privacy
In many ways we are already cyborgs interfacing with technology. According to theories like the extended mind hypothesis, our technological devices are an extension of our identities. We use our phones to store memories, retrieve information, and communicate. We use powerful tools like the Hubble Telescope to extend our sense of sight. In parallel, one can argue that the digital world has become an extension of the physical world.
These technological tools are a part of who we are. This has led to many ethical and societal implications. Our Facebook profiles can be processed to infer secondary information about us, such as sexual orientation, political and religious views, race, substance use, intelligence, and personality. Some argue that many of our devices may be mapping our every move. Your browsing history could be spied on and even sold in the open market.
While the argument to protect privacy and individuals’ information is valid to a certain extent, we may also have to accept the possibility that privacy will become obsolete in the future. We have inherently become more open as a society in the digital world, voluntarily sharing our identities, interests, views, and personalities.
There also seems to be a contradiction between the positive trend toward mass transparency and the need to protect privacy. Many advocate for massive decentralization and openness of information through mechanisms like blockchain.
The question we are left with is, at what point does the tradeoff between transparency and privacy become detrimental? We want to live in a world of fewer secrets, but also don’t want to live in a world where our every move is followed (not to mention our every feeling, thought and interaction). So, how do we find a balance?
Traditionally, privacy is used synonymously with secrecy. Many are led to believe that if you keep your personal information secret, then you’ve accomplished privacy. Danny Weitzner, director of the MIT Internet Policy Research Initiative, rejects this notion and argues that this old definition of privacy is dead.
From Weitzner’s perspective, protecting privacy in the digital age means creating rules that require governments and businesses to be transparent about how they use our information. In other words, we can’t bring the business of data to an end, but we can do a better job of controlling it. If these stakeholders spy on our personal information, then we should have the right to spy on how they spy on us.
The Role of Policy and Discourse
Almost always, policy has been too slow to adapt to the societal and ethical implications of technological progress. And sometimes the wrong laws can do more harm than good. For instance, in March, the US House of Representatives voted to allow internet service providers to sell your web browsing history on the open market.
More often than not, the bureaucratic nature of governance can’t keep up with exponential growth. New technologies are emerging every day and transforming society. Can we confidently claim that our world leaders, politicians, and local representatives are having these conversations and debates? Are they putting a focus on the ethical and societal implications of emerging technologies? Probably not.
We also can’t underestimate the role of public awareness and digital activism. There needs to be an emphasis on educating and engaging the general public about the complexities of these issues and the potential solutions available. The current solution may not be robust or clear, but having these discussions will get us there.
Stock Media provided by blasbike / Pond5