What It Will Take for Quantum Computers ...
Quantum computers could give the machine learning algorithms at the heart of modern artificial intelligence a dramatic speed up, but how far off are we? An international group of researchers has outlined the barriers that still need to be overcome.
This year has seen a surge of interest in quantum computing, driven in part by Google’s announcement that it will demonstrate “quantum supremacy” by the end of 2017. That means solving a problem beyond the capabilities of normal computers, which the company predicts will take 49 qubits—the quantum computing equivalent of bits.
As impressive as such a feat would be, the demonstration is likely to be on an esoteric problem that stacks the odds heavily in the quantum processor’s favor, and getting quantum computers to carry out practically useful calculations will take a lot more work.
But these devices hold great promise for solving problems in fields as diverse as cryptography and weather forecasting. One prospect people are particularly excited about is using them to supercharge the machine learning algorithms already transforming the modern world.
The potential is summarized in a recent review paper in the journal Nature written by a group of experts from the emerging field of quantum machine learning.
“Classical machine learning methods such as deep neural networks frequently have the feature that they can both recognize statistical patterns in data and produce data that possess the same statistical patterns: they recognize the patterns that they produce,” they write.
“This observation suggests the following hope. If small quantum information processors can produce statistical patterns that are computationally difficult for a classical computer to produce, then perhaps they can also recognize patterns that are equally difficult to recognize classically.”
Because of the way quantum computers work—taking advantage of strange quantum mechanical effects like entanglement and superposition—algorithms running on them should in principle be able to solve certain problems much faster than the best known classical algorithms, a phenomenon known as quantum speedup.
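To make that concrete, consider Grover's search algorithm, one of the canonical examples of quantum speedup: finding a marked item among N unsorted items takes on the order of N queries classically, but only on the order of the square root of N on a quantum computer. A back-of-the-envelope sketch in Python (an illustration, not an example from the review):

```python
import math

def classical_queries(n_items):
    # Unstructured search on a classical machine: check items
    # one by one, up to N queries in the worst case.
    return n_items

def grover_queries(n_items):
    # Grover's algorithm finds a marked item in roughly
    # (pi/4) * sqrt(N) oracle queries, a quadratic speedup.
    return math.ceil((math.pi / 4) * math.sqrt(n_items))

for n in (10**3, 10**6, 10**9):
    print(f"N={n:>13,}  classical ~{classical_queries(n):>13,}  "
          f"quantum ~{grover_queries(n):>6,}")
```

At a billion items, that is the difference between roughly a billion checks and about 25,000.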
Designing these algorithms is tricky work, but the authors of the review note that there has been significant progress in recent years. They highlight multiple quantum algorithms exhibiting quantum speedup that could act as subroutines, or building blocks, for quantum machine learning programs.
We still don’t have the hardware to implement these algorithms, but according to the researchers these are engineering challenges, and clear paths to overcoming them exist. More challenging, they say, are four fundamental conceptual problems that could limit the applicability of quantum machine learning.
The first two are the input and output problems. Quantum computers, unsurprisingly, deal with quantum data, but the majority of the problems humans want to solve relate to the classical world. Translating significant amounts of classical data into quantum systems can take so much time that it cancels out the benefit of faster processing, and the same is true of reading out the solution at the end.
The input problem could be mitigated to some extent by the development of quantum random access memory (qRAM), the quantum equivalent of the RAM a conventional computer uses for quick access to its working memory. A qRAM can be configured to store classical data but let a quantum computer access all of that information simultaneously as a superposition, which a variety of quantum algorithms require. But the authors note this is still a considerable engineering challenge and may not scale to big data problems.
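To see why input and output are such sticking points, here is a minimal numpy sketch of amplitude encoding, one common proposal for loading classical data into a quantum state. This illustrates the general idea only, not the review's qRAM design:

```python
import numpy as np

# Amplitude encoding: 2**n classical values become the amplitudes
# of an n-qubit quantum state (illustrative only).
n_qubits = 3
data = np.random.rand(2 ** n_qubits)      # 8 classical numbers

# The input problem: actually preparing this state on hardware is
# the expensive step that qRAM is meant to address.
state = data / np.linalg.norm(data)       # unit-norm amplitude vector

# The output problem: measurement only yields samples drawn from
# |amplitude|^2, so recovering the values takes many repeated runs.
probs = state ** 2
shots = 1000
samples = np.random.choice(2 ** n_qubits, size=shots, p=probs)
estimate = np.sqrt(np.bincount(samples, minlength=2 ** n_qubits) / shots)
print("true amplitudes:     ", np.round(state, 3))
print("estimated from shots:", np.round(estimate, 3))
```

Packing 2^n numbers into n qubits is wonderfully compact, but as the sketch suggests, getting the data in and out again is where the time goes.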
Closely related to the input/output problem is the costing problem. At present, the authors say very little is known about how many gates—or operations—a quantum machine learning algorithm will require to solve a given problem when operated on real-world devices. It’s expected that on highly complex problems they will offer considerable improvements over classical computers, but it’s not clear how big problems have to be before this becomes apparent.
Finally, whether or when these advantages kick in may be hard to prove, something the authors call the benchmarking problem. Claiming that a quantum algorithm outperforms all classical machine learning approaches requires extensive benchmarking against those techniques, which may not be feasible.
They suggest that this could be sidestepped by lowering the standards quantum machine learning algorithms are currently held to. This makes sense, as it doesn’t really matter whether an algorithm is intrinsically faster than all possible classical ones, as long as it’s faster than all the existing ones.
Another way of avoiding some of these problems is to apply these techniques directly to quantum data, the actual states generated by quantum systems and processes. The authors say this is probably the most promising near-term application for quantum machine learning and has the added benefit that any insights can be fed back into the design of better hardware.
“This would enable a virtuous cycle of innovation similar to that which occurred in classical computing, wherein each generation of processors is then leveraged to design the next-generation processors,” they conclude.
Image Credit: archy13 / Shutterstock.com
These 7 Forces Are Changing the World at ...
It was the Greek philosopher Heraclitus who first said, “The only thing that is constant is change.”
He was onto something. But even he would likely be left speechless at the scale and pace of change the world has experienced in the past 100 years—not to mention the past 10.
Since 1917, the global population has grown from 1.9 billion people to 7.5 billion. Life expectancy has more than doubled in many developing countries and risen significantly in developed ones. In 1917 only eight percent of homes had phones, in the form of landline telephones; today more than seven in 10 Americans own a smartphone, a supercomputer that fits in their pocket.
And things aren’t going to slow down anytime soon. In a talk at Singularity University’s Global Summit this week in San Francisco, SU cofounder and chairman Peter Diamandis told the audience, “Tomorrow’s speed of change will make today look like we’re crawling.” He then shared his point of view about some of the most important factors driving this accelerating change.
Peter Diamandis at Singularity University’s Global Summit in San Francisco.
Computation
In 1965, Gordon Moore (cofounder of Intel) predicted computer chips would double in power and halve in cost every 18 to 24 months. What became known as Moore’s Law turned out to be accurate, and today affordable computer chips contain a billion or more transistors spaced just nanometers apart.
That means computers can do exponentially more calculations per second than they could thirty, twenty, or ten years ago—and at a dramatically lower cost. This in turn means we can generate a lot more information, and use computers for all kinds of applications they wouldn’t have been able to handle in the past (like diagnosing rare forms of cancer, for example).
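The arithmetic behind Moore's Law is easy to check for yourself. A quick sketch, using the Intel 4004's 2,300 transistors (1971) as a starting point and assuming a two-year doubling period:

```python
# Back-of-the-envelope Moore's Law: start from the Intel 4004
# (2,300 transistors in 1971) and double every two years.
transistors = 2_300
for year in range(1971, 2018, 2):
    print(f"{year}: ~{transistors:,} transistors")
    transistors *= 2
```

Twenty-three doublings later, the projection lands at roughly 19 billion transistors, in the same ballpark as the largest chips shipping today.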
Convergence
Increased computing power is the basis for a myriad of technological advances, which themselves are converging in ways we couldn’t have imagined a couple decades ago. As new technologies advance, the interactions between various subsets of those technologies create new opportunities that accelerate the pace of change much more than any single technology can on its own.
A breakthrough in biotechnology, for example, might spring from a crucial development in artificial intelligence. An advance in solar energy could come about by applying concepts from nanotechnology.
Interface Moments
Technology is becoming more accessible even to the most non-techy among us. The internet was once the domain of scientists and coders, but these days anyone can make their own web page, and browsers make those pages easily searchable. Now, new interfaces are opening up areas like robotics and 3D printing in the same way.
As Diamandis put it, “You don’t need to know how to code to 3D print an attachment for your phone. We’re going from mind to materialization, from intentionality to implication.”
Artificial intelligence is what Diamandis calls “the ultimate interface moment,” enabling everyone who can speak their mind to connect and leverage exponential technologies.
Connectivity
Today there are about three billion people around the world connected to the internet—that’s up from 1.8 billion in 2010. But projections show that by 2025 there will be eight billion people connected. This is thanks to a race between tech billionaires to wrap the Earth in internet; Elon Musk’s SpaceX has plans to launch a network of 4,425 satellites to get the job done, while Google’s Project Loon is using giant polyethylene balloons for the task.
These projects will enable five billion new minds to come online, and those minds will have access to exponential technologies via interface moments.
Sensors
Diamandis predicts that after we establish a 5G network with speeds of 10–100 Gbps, a proliferation of sensors will follow, to the point that there’ll be around 100,000 sensors per city block. These sensors will be equipped with the most advanced AI, and the combination of these two will yield an incredible amount of knowledge.
“By 2030 we’re heading towards 100 trillion sensors,” Diamandis said. “We’re heading towards a world in which we’re going to be able to know anything we want, anywhere we want, anytime we want.” He added that tens of thousands of drones will hover over every major city.
Intelligence
“If you think there’s an arms race going on for AI, there’s also one for HI—human intelligence,” Diamandis said. He explained that if a genius had been born in a remote village 100 years ago, he or she would likely never have gained access to the resources needed to put those gifts to widely productive use. But that’s about to change.
Private companies as well as military programs are working on brain-machine interfaces, with the ultimate aim of uploading the human mind. The focus in the future will be on increasing the intelligence of individuals, as well as of companies and even countries.
Wealth Concentration
A final crucial factor driving mass acceleration is the increase in wealth concentration. “We’re living in a time when there’s more wealth in the hands of private individuals, and they’re willing to take bigger risks than ever before,” Diamandis said. Billionaires like Mark Zuckerberg, Jeff Bezos, Elon Musk, and Bill Gates are putting millions of dollars towards philanthropic causes that will benefit not only themselves, but humanity at large.
What It All Means
One of the biggest implications of the rate at which the world is changing, Diamandis said, is that the cost of everything is trending towards zero. We are heading towards abundance, and the evidence lies in the reduction of extreme poverty we’ve already seen and will continue to see at an even more rapid rate.
Listening to Diamandis, it’s hard not to find his optimism contagious.
“The world is becoming better at an extraordinary rate,” he said, pointing out the rises in literacy, democracy, vaccinations, and life expectancy, and the concurrent decreases in child mortality, birth rate, and poverty.
“We’re alive during a pivotal time in human history,” he concluded. “There is nothing we don’t have access to.”
Stock Media provided by seanpavonephoto / Pond5
Artificial Intelligence Predicts Death ...
Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.
Welsh poet Dylan Thomas’ famous lines are a passionate plea to fight against the inevitability of death. While the sentiment is poetic, the reality is far more prosaic. We are all going to die someday at a time and place that will likely remain a mystery to us until the very end.
Or maybe not.
Researchers are now applying artificial intelligence, particularly machine learning and computer vision, to predict when someone may die. The ultimate goal is not to play the role of Grim Reaper, like in the macabre sci-fi Machine of Death universe, but to treat or even prevent chronic diseases and other illnesses.
The latest research into this application of AI to precision medicine used an off-the-shelf machine-learning platform to analyze the chest CT scans of 48 patients. The computer was able to predict which patients would die within five years with 69 percent accuracy. That’s about as good as any human doctor.
The results were published in the Nature journal Scientific Reports by a team led by the University of Adelaide.
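As a rough illustration of the “off-the-shelf” approach the study describes, training a standard classifier on image-derived features and cross-validating it on a small cohort, here is a sketch. The features and labels below are synthetic placeholders, not the study’s data or its actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: 48 "patients," each summarized by a
# feature vector derived from a chest CT (placeholder values).
rng = np.random.default_rng(0)
X = rng.normal(size=(48, 16))        # 16 image-derived features
y = rng.integers(0, 2, size=48)      # 1 = died within five years

# An off-the-shelf classifier evaluated with cross-validation,
# mirroring the general shape of such a study (not its pipeline).
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

With real image features in place of the random ones, a few lines like these are enough to get a first mortality predictor off the ground, which is exactly the appeal of off-the-shelf tools.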
In an email interview with Singularity Hub, lead author Dr. Luke Oakden-Rayner, a radiologist and PhD student, says that one of the obvious benefits of using AI in precision medicine is to identify health risks earlier and potentially intervene.
Less obvious, he adds, is the promise of speeding up longevity research.
“Currently, most research into chronic disease and longevity requires long periods of follow-up to detect any difference between patients with and without treatment, because the diseases progress so slowly,” he explains. “If we can quantify the changes earlier, not only can we identify disease while we can intervene more effectively, but we might also be able to detect treatment response much sooner.”
That could lead to faster and cheaper treatments, he adds. “If we could cut a year or two off the time it takes to take a treatment from lab to patient, that could speed up progress in this area substantially.”
AI has a heart
In January, researchers at Imperial College London published results that suggested AI could predict heart failure and death better than a human doctor. The research, published in the journal Radiology, involved creating virtual 3D hearts of about 250 patients that could simulate cardiac function. AI algorithms then went to work to learn what features would serve as the best predictors. The system relied on MRIs, blood tests, and other data for its analyses.
In the end, the machine was faster and better at assessing risk for patients with pulmonary hypertension: about 73 percent accuracy versus 60 percent for human doctors.
The researchers say the technology could be applied to predict outcomes of other heart conditions in the future. “We would like to develop the technology so it can be used in many heart conditions to complement how doctors interpret the results of medical tests,” says study co-author Dr. Tim Dawes in a press release. “The goal is to see if better predictions can guide treatment to help people to live longer.”
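As a sketch of the underlying idea, letting a supervised learner rank which features of a heart-motion model best predict outcome, here is an illustration with synthetic data. The motion features and survival labels are invented placeholders, not the Imperial team’s data or method:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in: each "patient" is a flattened cardiac-motion
# feature vector (placeholder values, not real patient data).
rng = np.random.default_rng(1)
n_patients, n_motion_features = 250, 100
X = rng.normal(size=(n_patients, n_motion_features))
y = rng.integers(0, 2, size=n_patients)   # 1 = survived one year

# Fit a learner, then rank the motion features by how strongly
# they drive its predictions of survival.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)
top = np.argsort(model.feature_importances_)[::-1][:5]
print("most predictive motion features:", top)
```

On real data, a ranking like this is what tells researchers which aspects of cardiac function carry the prognostic signal.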
AI getting smarter
These sorts of applications of AI to precision medicine are only going to get better as the machines continue to learn, just like any medical school student.
Oakden-Rayner says his team is still building its ideal dataset as it moves forward with its research, but it has already improved predictive accuracy to 75 to 80 percent by including information such as age and sex.
“I think there is an upper limit on how accurate we can be, because there is always going to be an element of randomness,” he says, replying to how well AI will be able to pinpoint individual human mortality. “But we can be much more precise than we are now, taking more of each individual’s risks and strengths into account. A model combining all of those factors will hopefully account for more than 80 percent of the risk of near-term mortality.”
Others are even more optimistic about how quickly AI will transform this aspect of the medical field.
“Predicting remaining life span for people is actually one of the easiest applications of machine learning,” Dr. Ziad Obermeyer tells STAT News. “It requires a unique set of data where we have electronic records linked to information about when people died. But once we have that for enough people, you can come up with a very accurate predictor of someone’s likelihood of being alive one month out, for instance, or one year out.”
Obermeyer co-authored a paper last year with Dr. Ezekiel Emanuel in the New England Journal of Medicine called “Predicting the Future—Big Data, Machine Learning, and Clinical Medicine.”
AI still has much to learn
Experts like Obermeyer and Oakden-Rayner agree that advances will come swiftly, but there is still much work to be done.
For one thing, there’s plenty of data out there to mine, but it’s still a bit of a mess. For example, the images needed to train machines still need to be processed to make them useful. “Many groups around the world are now spending millions of dollars on this task, because this appears to be the major bottleneck for successful medical AI,” Oakden-Rayner says.
In the interview with STAT News, Obermeyer says data is fragmented across the health system, so linking information and creating comprehensive datasets will take time and money. He also notes that while there is much excitement about the use of AI in precision medicine, there’s been little activity in testing the algorithms in a clinical setting.
“It’s all very well and good to say you’ve got an algorithm that’s good at predicting. Now let’s actually port them over to the real world in a safe and responsible and ethical way and see what happens,” he says in STAT News.
AI is no accident
Preventing a fatal disease is one thing. But preventing fatal accidents with AI?
That’s what US and Indian researchers set out to do when they looked into the disturbing number of deaths that occur while people are taking selfies. The team identified 127 people who died while posing for a self-taken photo over a two-year period.
Based on a combination of text, images, and location, the machine learned to identify whether a selfie was potentially dangerous. Running more than 3,000 annotated selfies collected on Twitter through the software resulted in 73 percent accuracy.
“The combination of image-based and location-based features resulted in the best accuracy,” they reported.
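The fusion strategy they describe, concatenating per-modality features into a single vector before classification, can be sketched as follows. Every feature dimension and label here is an invented placeholder, not the researchers’ pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Multimodal fusion sketch: one feature block per modality,
# concatenated into a single vector per selfie (placeholders only).
rng = np.random.default_rng(2)
n_selfies = 3000
text_feats = rng.normal(size=(n_selfies, 50))       # e.g., caption terms
image_feats = rng.normal(size=(n_selfies, 128))     # e.g., visual descriptors
location_feats = rng.normal(size=(n_selfies, 10))   # e.g., height, water nearby
X = np.hstack([text_feats, image_feats, location_feats])
y = rng.integers(0, 2, size=n_selfies)              # 1 = dangerous selfie

clf = LogisticRegression(max_iter=1000)
print("accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

Dropping one of the three blocks out of the concatenation is also how you would test the researchers’ finding that image and location features together worked best.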
What’s next? A sort of selfie early warning system. “One of the directions that we are working on is to have the camera give the user information about [whether or not a particular location is] dangerous, with some score attached to it,” says Ponnurangam Kumaraguru, a professor at Indraprastha Institute of Information Technology in Delhi, in a story by Digital Trends.
AI and the future
This discussion raises the question: Do we really want to know when we’re going to die?
According to at least one paper published in Psychological Review earlier this year, the answer is a resounding “no.” Nearly nine out of 10 people surveyed in Germany and Spain about whether they would want to know about their future, including their death, said they would prefer to remain ignorant.
Obermeyer sees it differently, at least when it comes to people living with life-threatening illness.
“[O]ne thing that those patients really, really want and aren’t getting from doctors is objective predictions about how long they have to live,” he tells Marketplace public radio. “Doctors are very reluctant to answer those kinds of questions, partly because, you know, you don’t want to be wrong about something so important. But also partly because there’s a sense that patients don’t want to know. And in fact, that turns out not to be true when you actually ask the patients.”
Stock Media provided by photocosma / Pond5