AI Is Easy to Fool—Why That Needs to ...
Con artistry is one of the world’s oldest and most innovative professions, and it may soon have a new target. Research suggests artificial intelligence may be uniquely susceptible to tricksters, and as its influence in the modern world grows, attacks against it are likely to become more common.
The root of the problem lies in the fact that artificial intelligence algorithms learn about the world in very different ways than people do, and so slight tweaks to the data fed into these algorithms can throw them off completely while remaining imperceptible to humans.
Much of the research into this area has been conducted on image recognition systems, in particular those relying on deep learning neural networks. These systems are trained by showing them thousands of examples of images of a particular object until they can extract common features that allow them to accurately spot the object in new images.
But the features they extract are not necessarily the high-level features a human would look for, like the word STOP on a sign or a tail on a dog. These systems analyze images at the individual pixel level to detect patterns shared between examples. These patterns can be obscure combinations of pixel values, in small pockets or spread across the image, that would be impossible for a human to discern, yet are highly predictive of a particular object.
What this means is that by identifying these patterns and overlaying them over a different image, an attacker can trick the object recognition algorithm into seeing something that isn’t there, without these alterations being obvious to a human. This kind of manipulation is known as an “adversarial attack.”
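The mechanics can be sketched with a toy linear classifier (all weights and pixel values below are hypothetical, not taken from the research described). Because the score sums contributions from many pixels, nudging every pixel by just a few percent in the worst direction shifts the total by a large amount, even though no single pixel changes noticeably:

```python
import random

random.seed(0)

# Toy linear "stop sign" scorer: weights and pixels are made up.
# Shifting each of 1,000 pixels by only 5% against the sign of its
# weight moves the total score by eps * sum(|w|), easily enough to
# cross a decision threshold. (Pixel bounds ignored for simplicity.)

n = 1000
weights = [random.uniform(-1, 1) for _ in range(n)]  # "learned" weights
image = [random.uniform(0, 1) for _ in range(n)]     # pixel intensities

def score(pixels):
    return sum(w * p for w, p in zip(weights, pixels))

eps = 0.05  # 5% change per pixel, hard for a human to notice
# Nudge every pixel against the sign of its weight (the sign trick
# behind gradient-based adversarial perturbations).
adversarial = [p - eps * (1 if w > 0 else -1)
               for w, p in zip(weights, image)]

drop = score(image) - score(adversarial)  # equals eps * sum of |weights|
```

The key point is the mismatch of scales: each pixel moves by at most `eps`, but the score moves by `eps` times the sum of all weight magnitudes.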
Early attempts to trick image recognition systems this way required access to the algorithm’s inner workings to decipher these patterns. But in 2016 researchers demonstrated a “black box” attack that enabled them to trick such a system without knowing its inner workings.
By feeding the system doctored images and seeing how it classified them, they were able to work out what it was focusing on and therefore generate images they knew would fool it. Importantly, the doctored images were not obviously different to human eyes.
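That query-and-adjust loop can be sketched with a hypothetical black-box scorer standing in for the real system; the attacker never sees the weights, only the returned scores, and keeps any small tweak that moves the score the way they want:

```python
import random

random.seed(1)

# Hypothetical black box standing in for a real classifier: the
# attacker can query scores but never sees the weights inside.
_SECRET_W = [random.uniform(-1, 1) for _ in range(64)]

def black_box_score(pixels):  # higher = "more confident it's the object"
    return sum(w * p for w, p in zip(_SECRET_W, pixels))

image = [0.5] * 64  # the clean input
eps = 0.05          # keep every pixel within 5% of the original

adv = image[:]
for _ in range(2000):  # query budget
    i = random.randrange(len(adv))
    candidate = adv[:]
    candidate[i] += random.choice([-0.01, 0.01])
    # stay imperceptibly close to the original pixel value
    candidate[i] = min(image[i] + eps, max(image[i] - eps, candidate[i]))
    # keep the tweak only if the black box's score moved the right way
    if black_box_score(candidate) < black_box_score(adv):
        adv = candidate
```

No knowledge of the model's internals is needed; the score alone leaks enough information to steer the search.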
These approaches were tested by feeding doctored image data directly into the algorithm, but more recently, similar approaches have been applied in the real world. Last year it was shown that printouts of doctored images that were then photographed on a smartphone successfully tricked an image classification system.
Another group showed that wearing specially designed, psychedelically colored spectacles could trick a facial recognition system into thinking people were celebrities. In August, scientists showed that adding stickers to stop signs in particular configurations could cause a neural net designed to spot them to misclassify the signs.
These last two examples highlight some of the potential nefarious applications for this technology. Getting a self-driving car to miss a stop sign could cause an accident, either for insurance fraud or to do someone harm. If facial recognition becomes increasingly popular for biometric security applications, being able to pose as someone else could be very useful to a con artist.
Unsurprisingly, there are already efforts to counteract the threat of adversarial attacks. In particular, it has been shown that deep neural networks can be trained to detect adversarial images. One study from the Bosch Center for AI demonstrated such a detector, an adversarial attack that fools the detector, and a training regime for the detector that nullifies the attack, hinting at the kind of arms race we are likely to see in the future.
While image recognition systems provide an easy-to-visualize demonstration, they’re not the only machine learning systems at risk. The techniques used to perturb pixel data can be applied to other kinds of data too.
Chinese researchers showed that adding specific words to a sentence or misspelling a word can completely throw off machine learning systems designed to analyze what a passage of text is about. Another group demonstrated that garbled sounds played over speakers could make a smartphone running the Google Now voice command system visit a particular web address, which could be used to download malware.
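A toy keyword filter (entirely hypothetical, far cruder than the systems in the research above) shows why misspellings work: words a human still reads correctly no longer match anything in the classifier's vocabulary.

```python
# Hypothetical keyword-based topic classifier: flags text as financial
# if it contains at least two known finance words.
FINANCE_WORDS = {"bank", "loan", "interest", "credit"}

def looks_financial(text):
    tokens = text.lower().split()
    return sum(t in FINANCE_WORDS for t in tokens) >= 2

clean = "the bank raised the interest rate on my loan"
evasive = "the b4nk raised the inter3st rate on my l0an"  # same to a reader

print(looks_financial(clean))    # True
print(looks_financial(evasive))  # False: the misspellings slip past the filter
```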
This last example points toward one of the more worrying and probable near-term applications for this approach: bypassing cybersecurity defenses. The industry is increasingly using machine learning and data analytics to identify malware and detect intrusions, but these systems are also highly susceptible to trickery.
At this summer’s DEF CON hacking convention, a security firm demonstrated they could bypass anti-malware AI using a similar approach to the earlier black box attack on the image classifier, but super-powered with an AI of their own.
Their system fed malicious code to the antivirus software and then noted the score it was given. It then used genetic algorithms to iteratively tweak the code until it was able to bypass the defenses while maintaining its function.
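That loop can be sketched with a toy detector and a 10-feature sample (all names, weights, and thresholds below are hypothetical). The genetic search mutates only "cosmetic" features, so the malicious payload keeps working while the detection score falls:

```python
import random

random.seed(2)

# Toy stand-in for the demo described above: a "detector" scores
# feature vectors, and a genetic search evolves variants that score
# below the blocking threshold without touching the payload features.

PAYLOAD = {0, 1, 2}                         # features the malware needs
SIGNATURE = [2, 2, 2, 1, 1, 1, 1, 1, 1, 1]  # detector weights (made up)
THRESHOLD = 8                               # score >= 8 means "blocked"

def detect_score(features):
    return sum(w for w, f in zip(SIGNATURE, features) if f)

def mutate(features):
    child = features[:]
    i = random.randrange(len(child))
    if i not in PAYLOAD:        # never break functionality
        child[i] = 1 - child[i]
    return child

population = [[1] * 10 for _ in range(20)]  # start from the flagged sample
for _ in range(50):                         # generations
    population.sort(key=detect_score)       # fittest = lowest detection score
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]

best = min(population, key=detect_score)    # payload intact, score reduced
```

The selection pressure does all the work: variants that happen to shed detectable features survive, and the payload features are simply never allowed to mutate.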
All the approaches noted so far are focused on tricking pre-trained machine learning systems, but another approach of major concern to the cybersecurity industry is that of “data poisoning.” This is the idea that introducing false data into a machine learning system’s training set will cause it to start misclassifying things.
This could be particularly challenging for things like anti-malware systems that are constantly being updated to take into account new viruses. A related approach bombards systems with data designed to generate false positives so the defenders recalibrate their systems in a way that then allows the attackers to sneak in.
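A minimal sketch of poisoning, assuming a simple nearest-centroid detector and made-up one-dimensional feature values: injecting high-valued points mislabeled as benign drags the benign centroid toward the malicious region, until real malware lands on the wrong side of the boundary.

```python
# Hypothetical nearest-centroid "malware detector" on a 1-D feature.

def centroid(xs):
    return sum(xs) / len(xs)

def classify(x, benign, malicious):
    if abs(x - centroid(malicious)) < abs(x - centroid(benign)):
        return "malware"
    return "benign"

benign_train = [1.0, 2.0, 1.5, 2.5]      # centroid 1.75
malicious_train = [8.0, 9.0, 8.5, 9.5]   # centroid 8.75

sample = 7.0  # a real piece of malware
print(classify(sample, benign_train, malicious_train))  # "malware"

# Attacker feeds in high-valued points falsely labeled benign,
# dragging the benign centroid up to 5.75.
poisoned = benign_train + [9.0, 9.5, 10.0, 10.5]
print(classify(sample, poisoned, malicious_train))      # "benign"
```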
How likely it is that these approaches will be used in the wild will depend on the potential reward and the sophistication of the attackers. Most of the techniques described above require high levels of domain expertise, but it’s becoming ever easier to access training materials and tools for machine learning.
Simpler versions of machine learning have been at the heart of email spam filters for years, and spammers have developed a host of innovative workarounds to circumvent them. As machine learning and AI increasingly embed themselves in our lives, the rewards for learning how to trick them will likely outweigh the costs.
Image Credit: Nejron Photo / Shutterstock.com
What It Will Take for Quantum Computers ...
Quantum computers could give the machine learning algorithms at the heart of modern artificial intelligence a dramatic speed up, but how far off are we? An international group of researchers has outlined the barriers that still need to be overcome.
This year has seen a surge of interest in quantum computing, driven in part by Google’s announcement that it will demonstrate “quantum supremacy” by the end of 2017. That means solving a problem beyond the capabilities of normal computers, which the company predicts will take 49 qubits—the quantum computing equivalent of bits.
As impressive as such a feat would be, the demonstration is likely to be on an esoteric problem that stacks the odds heavily in the quantum processor’s favor, and getting quantum computers to carry out practically useful calculations will take a lot more work.
But these devices hold great promise for solving problems in fields as diverse as cryptography and weather forecasting. One application people are particularly excited about is whether they could be used to supercharge the machine learning algorithms already transforming the modern world.
The potential is summarized in a recent review paper in the journal Nature written by a group of experts from the emerging field of quantum machine learning.
“Classical machine learning methods such as deep neural networks frequently have the feature that they can both recognize statistical patterns in data and produce data that possess the same statistical patterns: they recognize the patterns that they produce,” they write.
“This observation suggests the following hope. If small quantum information processors can produce statistical patterns that are computationally difficult for a classical computer to produce, then perhaps they can also recognize patterns that are equally difficult to recognize classically.”
Because of the way quantum computers work—taking advantage of strange quantum mechanical effects like entanglement and superposition—algorithms running on them should in principle be able to solve problems much faster than the best known classical algorithms, a phenomenon known as quantum speedup.
Designing these algorithms is tricky work, but the authors of the review note that there has been significant progress in recent years. They highlight multiple quantum algorithms exhibiting quantum speedup that could act as subroutines, or building blocks, for quantum machine learning programs.
We still don’t have the hardware to implement these algorithms, but according to the researchers the challenges are technical ones, and clear paths to overcoming them exist. More challenging, they say, are four fundamental conceptual problems that could limit the applicability of quantum machine learning.
The first two are the input and output problems. Quantum computers, unsurprisingly, deal with quantum data, but the majority of the problems humans want to solve relate to the classical world. Translating significant amounts of classical data into the quantum systems can take so much time it can cancel out the benefits of the faster processing speeds, and the same is true of reading out the solution at the end.
The input problem could be mitigated to some extent by the development of quantum random access memory (qRAM)—the equivalent to RAM in a conventional computer used to provide the machine with quick access to its working memory. A qRAM can be configured to store classical data but allow the quantum computers to access all that information simultaneously as a superposition, which is required for a variety of quantum algorithms. But the authors note this is still a considerable engineering challenge and may not be sustainable for big data problems.
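The input problem is easy to see in the standard amplitude-encoding scheme, sketched below as a purely classical simulation (the data values are hypothetical). N qubits can hold 2^N amplitudes, but preparing that state still requires touching every classical value, so the loading step scales with the size of the data, not the number of qubits:

```python
import math

# Classical simulation of amplitude encoding: pack a vector of length
# 2**n into the amplitudes of n qubits by normalizing it to unit length.

def amplitude_encode(data):
    norm = math.sqrt(sum(x * x for x in data))
    return [x / norm for x in data]  # unit-length amplitude vector

data = [3.0, 1.0, 2.0, 1.0, 0.0, 2.0, 1.0, 2.0]  # 2**3 classical values
amps = amplitude_encode(data)

n_qubits = int(math.log2(len(data)))   # 3 qubits suffice for 8 amplitudes
total_prob = sum(a * a for a in amps)  # squared amplitudes sum to 1
```

The exponential compression (8 values into 3 qubits) is exactly what makes quantum machine learning attractive, and the linear-time loading step is exactly what the input problem warns about.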
Closely related to the input/output problem is the costing problem. At present, the authors say very little is known about how many gates—or operations—a quantum machine learning algorithm will require to solve a given problem when operated on real-world devices. It’s expected that on highly complex problems they will offer considerable improvements over classical computers, but it’s not clear how big problems have to be before this becomes apparent.
Finally, whether or when these advantages kick in may be hard to prove, something the authors call the benchmarking problem. Claiming that a quantum algorithm outperforms any classical machine learning approach requires extensive testing against those other techniques, which may not be feasible.
They suggest that this could be sidestepped by lowering the standards quantum machine learning algorithms are currently held to. This makes sense, as it doesn’t really matter whether an algorithm is intrinsically faster than all possible classical ones, as long as it’s faster than all the existing ones.
Another way of avoiding some of these problems is to apply these techniques directly to quantum data, the actual states generated by quantum systems and processes. The authors say this is probably the most promising near-term application for quantum machine learning and has the added benefit that any insights can be fed back into the design of better hardware.
“This would enable a virtuous cycle of innovation similar to that which occurred in classical computing, wherein each generation of processors is then leveraged to design the next-generation processors,” they conclude.
Image Credit: archy13 / Shutterstock.com
Why Education Is the Hardest Sector of ...
We’ve all heard the warning cries: automation will disrupt entire industries and put millions of people out of jobs. In fact, up to 45 percent of existing jobs can be automated using current technology.
However, this may not necessarily apply to the education sector. After a detailed analysis of more than 2,000 work activities across more than 800 occupations, a report by McKinsey & Co. states that of all the sectors examined, “…the technical feasibility of automation is lowest in education.”
There is no doubt that technological trends will have a powerful impact on global education, both by improving the overall learning experience and by increasing global access to education. Massive open online courses (MOOCs), chatbot tutors, and AI-powered lesson plans are just a few examples of the digital transformation in global education. But will robots and artificial intelligence ever fully replace teachers?
The Most Difficult Sector to Automate
While various tasks revolving around education—like administrative tasks or facilities maintenance—are open to automation, teaching itself is not.
Effective education involves more than just transfer of information from a teacher to a student. Good teaching requires complex social interactions and adaptation to the individual student’s learning needs. An effective teacher is not just responsive to each student’s strengths and weaknesses, but is also empathetic towards the student’s state of mind. It’s about maximizing human potential.
Furthermore, students don’t just rely on effective teachers to teach them the course material, but also as a source of life guidance and career mentorship. Deep and meaningful human interaction is crucial and is something that is very difficult, if not impossible, to automate.
Automating teaching is an example of a task that would require artificial general intelligence (as opposed to narrow or specific intelligence). In other words, it would require an AI that understands natural human language, responds empathetically to emotions, and can plan, strategize, and make impactful decisions under unpredictable circumstances.
This would be the kind of machine that can do anything a human can do, and it doesn’t exist—at least, not yet.
We’re Getting There
Let’s not forget how quickly AI is evolving. Just because it’s difficult to fully automate teaching, it doesn’t mean the world’s leading AI experts aren’t trying.
Meet Jill Watson, the teaching assistant from Georgia Institute of Technology. Watson isn’t your average TA. She’s an IBM-powered artificial intelligence that is being implemented in universities around the world. Watson is able to answer students’ questions with 97 percent certainty.
Technologies like this also have applications in grading and providing feedback. Some AI algorithms are being trained and refined to perform automatic essay scoring. One project has achieved a 0.945 correlation with human graders.
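For readers unfamiliar with the metric, the 0.945 figure is a Pearson correlation between the machine's scores and human graders' scores. A short sketch of how it's computed, using hypothetical grades for six essays:

```python
import math

# Pearson correlation between two score lists: 1.0 means the automated
# grader tracks the humans perfectly, 0.0 means no linear relationship.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

human = [4, 5, 2, 3, 5, 1]    # hypothetical human grades
machine = [4, 5, 3, 3, 4, 1]  # hypothetical automated scores

r = pearson(human, machine)   # close to 1: the grader tracks the humans
```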
All of this could have a remarkable impact on online education as we know it and dramatically increase online student retention rates.
Any student with a smartphone can access a wealth of information and free courses from universities around the world. MOOCs have allowed valuable courses to become available to millions of students. But at the moment, not all participants can receive customized feedback for their work. Currently, this is limited by manpower, but in the future that may not be the case.
What chatbots like Jill Watson allow is the opportunity for hundreds of thousands, if not millions, of students to have their work reviewed and all their questions answered at a minimal cost.
AI algorithms also have a significant role to play in personalization of education. Every student is unique and has a different set of strengths and weaknesses. Data analysis can be used to improve individual student results, assess each student’s strengths and weaknesses, and create mass-customized programs. Algorithms can analyze student data and consequently make flexible programs that adapt to the learner based on real-time feedback. According to the McKinsey Global Institute, all of this data in education could unlock between $900 billion and $1.2 trillion in global economic value.
Beyond Automated Teaching
It’s important to recognize that technological automation alone won’t fix the many issues in our global education system today. Dominated by outdated curricula, standardized tests, and an emphasis on short-term knowledge, many experts are calling for a transformation of how we teach.
It is not enough to simply automate the process. We can have a completely digital learning experience that continues to focus on outdated skills and fails to prepare students for the future. In other words, we must not only be innovative with our automation capabilities, but also with educational content, strategy, and policies.
Are we equipping students with the most important survival skills? Are we inspiring young minds to create a better future? Are we meeting the unique learning needs of each and every student? There’s no point automating and digitizing a system that is already flawed. We need to ensure the system that is being digitized is itself being transformed for the better.
Stock Media provided by davincidig / Pond5