#433828 Using Big Data to Give Patients Control ...
Big data, personalized medicine, artificial intelligence. String these three buzzphrases together, and what do you have?
A system that may revolutionize the future of healthcare, by bringing sophisticated health data directly to patients for them to ponder, digest, and act upon—and potentially stop diseases in their tracks.
At Singularity University’s Exponential Medicine conference in San Diego this week, Dr. Ran Balicer, director of the Clalit Research Institute in Israel, painted a futuristic picture of how big data can merge with personalized healthcare into an app-based system in which the patient is in control.
Dr. Ran Balicer at Exponential Medicine
Picture this: instead of you going to a physician with your ailments, your doctor calls you with some bad news: “Within six hours, you’re going to have a heart attack. So why don’t you come into the clinic and we can fix that.” Crisis averted.
Following the treatment, you’re at home monitoring your biomarkers, lab test results, and other health information through an app with a clean, beautiful user interface. Within the app, you can toggle lifestyle habits—smoking, drinking, insufficient sleep—up or down and watch how each one changes your risk of future cardiovascular disease.
There’s more: you can also set a health goal within the app—for example, quitting smoking—which automatically informs your physician. The app will then suggest pharmaceuticals to help you ditch the nicotine and automatically send the prescription to your local drug store. You’ll also immediately see a list of nearby support groups that can help you reach your health goal.
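The risk-toggling interaction described above can be sketched with a toy logistic model. Everything here is hypothetical: the habit names, weights, and baseline log-odds are invented for illustration and have nothing to do with Clalit's actual system.

```python
import math

# Hypothetical weights: how strongly each habit shifts the log-odds of
# future cardiovascular disease. A real system would learn these from
# millions of health records.
WEIGHTS = {"smoking": 0.9, "drinking": 0.4, "poor_sleep": 0.3}
BASELINE = -3.0  # invented baseline log-odds for an average patient

def risk(habits):
    """Map habit levels (each in [0, 1]) to a 0-1 risk score."""
    z = BASELINE + sum(WEIGHTS[name] * level for name, level in habits.items())
    return 1 / (1 + math.exp(-z))

# Toggling smoking from its current level down to zero lowers the
# displayed risk, which is what the app's slider would visualize.
before = risk({"smoking": 1.0, "drinking": 0.5, "poor_sleep": 0.5})
after = risk({"smoking": 0.0, "drinking": 0.5, "poor_sleep": 0.5})
```

The point is not the numbers but the interaction loop: each slider change re-runs the model and redraws the risk estimate.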
With this hefty dose of AI, you’re in charge of your health—in fact, probably more so than under current healthcare systems.
Sound fantastical? In fact, this type of preemptive care is already being provided in some countries, including Israel, at a massive scale, said Balicer. By mining datasets with deep learning and other powerful AI tools, we can predict the future—and put it into the hands of patients.
The Israeli Advantage
In order to apply big data approaches to medicine, you first need a giant database.
Israel is ahead of the game in this regard. With decades of electronic health records aggregated within a central warehouse, Israel offers a wealth of health-related data on the scale of millions of people and billions of data points. The data is incredibly multiplex, covering lab tests, drugs, hospital admissions, medical procedures, and more.
One of Balicer’s early successes was an algorithm that predicts diabetes, which allowed the team to notify physicians to target their care. Clalit has also been busy digging into data that predicts winter pneumonia, osteoporosis, and a long list of other preventable diseases.
So far, Balicer’s predictive health system has only been tested on a pilot group of patients, but he is expecting to roll out the platform to all patients in the database in the next few months.
Truly Personalized Medicine
To Balicer, whatever a machine can do better, it should be welcome to do. AI diagnosticians have already enjoyed plenty of successes, but they mostly collaborate with physicians, and only once the patient is already ill.
A particularly powerful use of AI in medicine is to bring insights and trends directly to the patient, such that they can take control over their own health and medical care.
For example, take the problem of tailored drug dosing. Current drug doses are based on average results conducted during clinical trials—the dosing is not tailored for any specific patient’s genetic and health makeup. But what if a doctor had already seen millions of other patients similar to your case, and could generate dosing recommendations more relevant to you based on that particular group of patients?
Such personalized recommendations are beyond the ability of any single human doctor. But with the help of AI, which can quickly process massive datasets to find similarities, doctors may soon be able to prescribe individually-tailored medications.
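One plausible way to make "millions of similar patients" concrete is a nearest-neighbor lookup. The records, features, and doses below are invented for illustration; a production system would use far richer features and models.

```python
# Invented past cases: (age, weight_kg, kidney_function) -> effective dose (mg).
RECORDS = [
    ((35, 70, 0.90), 50),
    ((62, 80, 0.60), 30),
    ((60, 78, 0.65), 35),
    ((40, 65, 0.85), 45),
]

def suggest_dose(patient, k=2):
    """Average the doses that worked for the k most similar past patients."""
    def sq_dist(features):
        return sum((a - b) ** 2 for a, b in zip(features, patient))
    nearest = sorted(RECORDS, key=lambda rec: sq_dist(rec[0]))[:k]
    return sum(dose for _, dose in nearest) / k

# An older patient with reduced kidney function is matched to the two
# similar older patients, not to the population average.
dose = suggest_dose((61, 79, 0.62))
```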
Tailored treatment doesn’t stop there. Another issue with pharmaceuticals and treatment regimes is that they often come with side effects: potentially health-threatening reactions that may, or may not, happen to you based on your biometrics.
Back in 2017, the New England Journal of Medicine launched the SPRINT Data Analysis Challenge, which urged physicians and data analysts to identify novel clinical findings using shared clinical trial data.
Working with Dr. Noa Dagan at the Clalit Research Institute, Balicer and team developed an algorithm that recommends whether or not a patient should receive a particularly intensive treatment regime for hypertension.
Rather than simply looking at one outcome—normalized blood pressure—the algorithm takes into account an individual’s specific characteristics, laying out the treatment’s predicted benefits and harms for a particular patient.
“We built thousands of models for each patient to comprehensively understand the impact of the treatment for the individual; for example, a reduced risk for stroke and cardiovascular-related deaths could be accompanied by an increase in serious renal failure,” said Balicer. “This approach allows a truly personalized balance—allowing patients and their physicians to ultimately decide if the risks of the treatment are worth the benefits.”
This is already personalized medicine at its finest. But Balicer didn’t stop there.
We are not the sum of our biologics and medical stats, he said. A truly personalized approach needs to take into account a patient’s needs and goals, and the sacrifices and tradeoffs they’re willing to make, rather than having the physician make decisions for them.
Balicer’s preventative system adds this layer of complexity by giving weights to different outcomes based on patients’ input of their own health goals. Rather than blindly following big data, the system holistically integrates the patient’s opinion to make recommendations.
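The outcome-weighting idea can be sketched as a simple utility calculation. The predicted risk changes and patient weights below are invented numbers, not output from Balicer's models; the sketch only shows how identical predictions can yield different recommendations for patients with different priorities.

```python
# Hypothetical predicted changes in outcome risk under an intensive
# treatment (negative = risk reduced, positive = risk increased).
PREDICTED_EFFECTS = {"stroke": -0.04, "cv_death": -0.02, "renal_failure": 0.03}

def net_benefit(effects, patient_weights):
    """Weight each predicted risk change by how much this patient
    cares about that outcome; a positive result favors treating."""
    return -sum(effects[o] * patient_weights[o] for o in effects)

# Same predictions, different personal priorities.
fears_stroke = {"stroke": 3.0, "cv_death": 2.0, "renal_failure": 1.0}
fears_kidneys = {"stroke": 1.0, "cv_death": 1.0, "renal_failure": 4.0}

recommend_a = net_benefit(PREDICTED_EFFECTS, fears_stroke) > 0
recommend_b = net_benefit(PREDICTED_EFFECTS, fears_kidneys) > 0
```

The treatment that looks worthwhile for the patient who most fears a stroke looks like a bad trade for the patient who most fears renal failure.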
Balicer’s system is just one example of how AI can truly transform personalized health care. The next big challenge is to work with physicians to further optimize these systems, in a way that doctors can easily integrate them into their workflow and embrace the technology.
“Health systems will not be replaced by algorithms, rest assured,” concluded Balicer, “but health systems that don’t use algorithms will be replaced by those that do.”
Image Credit: Magic mine / Shutterstock.com
#433807 The How, Why, and Whether of Custom ...
A digital afterlife may soon be within reach, but it might not be for your benefit.
The reams of data we’re creating could soon make it possible to create digital avatars that live on after we die, aimed at comforting our loved ones or sharing our experience with future generations.
That may seem like a disappointing downgrade from the vision promised by the more optimistic futurists, where we upload our consciousness to the cloud and live forever in machines. But it might be a realistic possibility in the not-too-distant future—and the first steps have already been taken.
After her friend died in a car crash, Eugenia Kuyda, co-founder of Russian AI startup Luka, trained a neural network-powered chatbot on their shared message history to mimic him. Journalist and amateur coder James Vlahos took a more involved approach, carrying out extensive interviews with his terminally ill father so that, after his father’s death, he could create a digital clone of him.
For those of us without the time or expertise to build our own artificial intelligence-powered avatar, startup Eternime is offering to take your social media posts and interactions as well as basic personal information to build a copy of you that could then interact with relatives once you’re gone. The service is so far only running a private beta with a handful of people, but with 40,000 on its waiting list, it’s clear there’s a market.
Comforting—Or Creepy?
The whole idea may seem eerily similar to the Black Mirror episode Be Right Back, in which a woman pays a company to create a digital copy of her deceased husband and eventually a realistic robot replica. And given the show’s focus on the emotional turmoil she goes through, people might question whether the idea is a sensible one.
But it’s hard to say at this stage whether being able to interact with an approximation of a deceased loved one would be a help or a hindrance in the grieving process. The fear is that it could make it harder for people to “let go” or “move on,” but others think it could play a useful therapeutic role, reminding people that just because someone is dead it doesn’t mean they’re gone, and providing a novel way for them to express and come to terms with their feelings.
While at present most envisage these digital resurrections as a way to memorialize loved ones, there are also more ambitious plans to use the technology as a way to preserve expertise and experience. A project at MIT called Augmented Eternity is investigating whether we could use AI to trawl through someone’s digital footprints and extract both their knowledge and elements of their personality.
Project leader Hossein Rahnama says he’s already working with a CEO who wants to leave behind a digital avatar that future executives could consult with after he’s gone. And you wouldn’t necessarily have to wait until you’re dead—experts could create virtual clones of themselves that could dispense advice on demand to far more people. These clones could soon be more than simple chatbots, too. Hollywood has already started spending millions of dollars to create 3D scans of its most bankable stars so that they can keep acting beyond the grave.
It’s easy to see the appeal of the idea; imagine if we could bring back Stephen Hawking or Tim Cook to share their wisdom with us. And what if we could create a digital brain trust combining the experience and wisdom of all the world’s greatest thinkers, accessible on demand?
But there are still huge hurdles ahead before we could create truly accurate representations of people by simply trawling through their digital remains. The first problem is data. Most people’s digital footprints only started reaching significant proportions in the last decade or so, and cover a relatively small period of their lives. It could take many years before there’s enough data to create more than just a superficial imitation of someone.
And that’s assuming that the data we produce is truly representative of who we are. Carefully crafted Instagram profiles and cautiously worded work emails hardly capture the messy realities of most people’s lives.
Perhaps if the idea is simply to create a bank of someone’s knowledge and expertise, accurately capturing the essence of their character would be less important. But these clones would also be static. Real people continually learn and change, but a digital avatar is a snapshot of someone’s character and opinions at the point they died. An inability to adapt as the world around them changes could put a shelf life on the usefulness of these replicas.
Who’s Calling the (Digital) Shots?
It won’t stop people trying, though, and that raises a potentially more important question: Who gets to make the calls about our digital afterlife? The subjects, their families, or the companies that hold their data?
In most countries, the law is currently pretty hazy on this topic. Companies like Google and Facebook have processes to let you choose who should take control of your accounts in the event of your death. But if you’ve forgotten to do that, the fate of your virtual remains comes down to a tangle of federal law, local law, and tech company terms of service.
This lack of regulation could create incentives and opportunities for unscrupulous behavior. The voice of a deceased loved one could be a highly persuasive tool for exploitation, and digital replicas of respected experts could be powerful means of pushing a hidden agenda.
That means there’s a pressing need for clear and unambiguous rules. Researchers at Oxford University recently suggested ethical guidelines that would treat our digital remains the same way museums and archaeologists are required to treat mortal remains—with dignity and in the interest of society.
Whether those kinds of guidelines are ever enshrined in law remains to be seen, but ultimately they may decide whether the digital afterlife turns out to be heaven or hell.
Image Credit: frankie’s / Shutterstock.com
#433785 DeepMind’s Eerie Reimagination of the ...
If a recent project from Google’s DeepMind were a recipe, you would take a pair of AI systems, images of animals, and a whole lot of computing power. Mix it all together, and you’d get a series of imagined animals dreamed up by one of the AIs. A look through the research paper about the project—or this open Google Folder of images it produced—will likely lead you to agree that the results are a mix of impressive and downright eerie.
But the eerie factor doesn’t mean the project shouldn’t be considered a success and a step forward for future uses of AI.
From GAN To BigGAN
The team behind the project consists of Andrew Brock, a PhD student at the Edinburgh Centre for Robotics and an intern at DeepMind, together with DeepMind researchers Jeff Donahue and Karen Simonyan.
They used a so-called Generative Adversarial Network (GAN) to generate the images. In a GAN, two AI systems compete in a game-like manner. One AI produces images of an object or creature. The human equivalent would be drawing pictures of, for example, a dog—without necessarily knowing exactly what a dog looks like. Those images are then shown to the second AI, which has already been fed real images of dogs. The second AI tells the first one how far off its efforts were, and the first uses this feedback to improve its images. The two go back and forth in an iterative process, and the goal is for the first AI to become so good at creating images of dogs that the second can’t tell the difference between its creations and actual pictures of dogs.
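The back-and-forth described above can be sketched as a tiny one-dimensional GAN: the "real data" are samples from a normal distribution, the generator is a linear map of noise, and the discriminator is a single logistic unit. All sizes and learning rates are arbitrary choices for illustration, nothing like BigGAN's actual architecture.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Real data: samples from N(4, 1). Generator: g(z) = a*z + b.
# Discriminator: d(x) = sigmoid(w*x + c), its guess that x is real.
a, b = 1.0, 0.0   # generator parameters (starts producing N(0, 1))
w, c = 0.0, 0.0   # discriminator parameters
lr = 0.02

for _ in range(5000):
    x_real = random.gauss(4, 1)
    z = random.gauss(0, 1)
    x_fake = a * z + b

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step (non-saturating loss): nudge the fake sample in
    # the direction that makes the discriminator call it real.
    d_fake = sigmoid(w * x_fake + c)
    grad_x = (1 - d_fake) * w
    a += lr * grad_x * z
    b += lr * grad_x

# After training, generated samples a*z + b cluster near the real mean of 4.
```

The same loop, scaled up to deep convolutional networks and millions of photos, is the mechanism behind BigGAN's animals.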
The team was able to draw on Google’s vast vaults of computational power to create images of a quality and lifelikeness beyond almost anything seen before. In part, this was achieved by feeding the GAN more images per training batch than is usually the case. According to IFLScience, the standard is to feed about 64 images per batch into the GAN; the research team fed about 2,000 images per batch into the system, leading to it being nicknamed BigGAN.
Their results showed that feeding the system with more images and using masses of raw computer power markedly increased the GAN’s precision and ability to create life-like renditions of the subjects it was trained to reproduce.
“The main thing these models need is not algorithmic improvements, but computational ones. […] When you increase model capacity and you increase the number of images you show at every step, you get this twofold combined effect,” Andrew Brock told Fast Company.
The Power Drain
The team used 512 of Google’s AI-focused Tensor Processing Units (TPUs) to generate 512×512-pixel images. Each experiment took between 24 and 48 hours to run.
That kind of computing power needs a lot of electricity. As artist and Innovator-In-Residence at the Library of Congress Jer Thorp tongue-in-cheek put it on Twitter: “The good news is that AI can now give you a more believable image of a plate of spaghetti. The bad news is that it used roughly enough energy to power Cleveland for the afternoon.”
Thorp added that a back-of-the-envelope calculation suggested it would take about 27,000 square feet of solar panels to power the computations that produced the images.
BigGAN’s images have been hailed by researchers, with Oriol Vinyals, research scientist at DeepMind, rhetorically asking if these were the ‘Best GAN samples yet?’
However, they are still not perfect. The number of legs on a given creature is one example of where the BigGAN seemed to struggle. The system was good at recognizing that something like a spider has a lot of legs, but seemed unable to settle on how many ‘a lot’ was supposed to be. The same applied to dogs, especially if the images were supposed to show said dogs in motion.
Those eerie images are contrasted by other renditions that show such lifelike qualities that a human mind has a hard time identifying them as fake. Spaniels with lolling tongues, ocean scenery, and butterflies were all rendered with what looks like perfection. The same goes for an image of a hamburger that was good enough to make me stop writing because I suddenly needed lunch.
The Future Use Cases
GAN networks were first introduced in 2014, and given their relative youth, researchers and companies are still busy trying out possible use cases.
One possible use is image correction—making pixelated images clearer. Not only could this sharpen your future holiday snaps, it could also be applied in industries such as space exploration. A team from the University of Michigan and the Max Planck Institute has developed a method for GANs to create images from text descriptions. At Berkeley, a research group has used GANs to create an interface that lets users change the shape, size, and design of objects, including a handbag.
For anyone who has seen a film like Wag the Dog or read 1984, the possibilities are also starkly alarming. GANs could, in other words, make fake news look more real than ever before.
For now, it seems that while not all GANs require the computational and electrical power of BigGAN, there is still some way to go before these potential use cases are realized. However, if there’s one lesson from Moore’s Law and exponential technology, it is that today’s technical roadblock quickly becomes tomorrow’s minor issue.
Image Credit: Ondrej Prosicky/Shutterstock
#433776 Why We Should Stop Conflating Human and ...
It’s common to hear phrases like ‘machine learning’ and ‘artificial intelligence’ and believe that somehow, someone has managed to replicate a human mind inside a computer. This, of course, is untrue—but part of the reason this idea is so pervasive is because the metaphor of human learning and intelligence has been quite useful in explaining machine learning and artificial intelligence.
Indeed, some AI researchers maintain a close link with the neuroscience community, and inspiration runs in both directions. But the metaphor can be a hindrance to people trying to explain machine learning to those less familiar with it. One of the biggest risks of conflating human and machine intelligence is that we start to hand over too much agency to machines. For those of us working with software, it’s essential that we remember the agency is human—it’s humans who build these systems, after all.
It’s worth unpacking the key differences between machine and human intelligence. While there are certainly similarities, it’s by looking at what makes them different that we can better grasp how artificial intelligence works, and how we can build and use it effectively.
Neural Networks
Central to the metaphor that links human and machine learning is the concept of a neural network. The biggest difference between a human brain and an artificial neural net is the sheer scale of the brain’s neural network. What’s crucial is that it’s not simply the number of neurons in the brain (which reach into the billions), but more precisely, the mind-boggling number of connections between them.
But the issue runs deeper than questions of scale. The human brain is qualitatively different from an artificial neural network for two other important reasons: the connections that power it are analogue, not digital, and the neurons themselves aren’t uniform (as they are in an artificial neural network).
This is why the brain is such a complex thing. Even the most complex artificial neural network, while often difficult to interpret and unpack, has an underlying architecture and principles guiding it (this is what we’re trying to do, so let’s construct the network like this…).
Intricate as they may be, neural networks in AIs are engineered with a specific outcome in mind. The human mind, however, doesn’t have the same degree of intentionality in its engineering. Yes, it should help us do all the things we need to do to stay alive, but it also allows us to think critically and creatively in a way that doesn’t need to be programmed.
The Beautiful Simplicity of AI
The fact that artificial intelligence systems are so much simpler than the human brain is, ironically, what enables AIs to deal with far greater computational complexity than we can.
Artificial neural networks can hold much more information and data than the human brain, largely due to the type of data that is stored and processed in a neural network. It is discrete and specific, like an entry in an Excel spreadsheet.
In the human brain, data doesn’t have this same discrete quality. So while an artificial neural network can process very specific data at an incredible scale, it isn’t able to process information in the rich and multidimensional manner a human brain can. This is the key difference between an engineered system and the human mind.
Despite years of research, the human mind still remains somewhat opaque. This is partly because the analog synaptic connections between neurons are far harder to inspect and map than the digital connections within an artificial neural network.
Speed and Scale
Consider what this means in practice. The relative simplicity of an AI allows it to do a very complex task very well, and very quickly. A human brain simply can’t process data at the scale and speed required to, say, translate speech to text or churn through a huge set of oncology reports.
Essential to the way AI works in both these contexts is that it breaks data and information down into tiny constituent parts. For example, it could break sounds down into phonetic text, which could then be translated into full sentences, or break images into pieces to understand the rules of how a huge set of them is composed.
Humans often do a similar thing, and this is the point at which machine learning is most like human learning; like algorithms, humans break data or information into smaller chunks in order to process it.
But there’s a reason for this similarity. This breakdown process is engineered into every neural network by a human engineer. What’s more, the way this process is designed will be down to the problem at hand. How an artificial intelligence system breaks down a data set is its own way of ‘understanding’ it.
Even while running a highly complex algorithm unsupervised, the parameters of how an AI learns—how it breaks data down in order to process it—are always set from the start.
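A toy version of this engineered breakdown: chopping an image into fixed-size patches, the way many vision systems do before processing. The patch size is exactly the kind of parameter a human designer fixes in advance; the 4×4 "image" of numbers here is just a stand-in for pixel values.

```python
def patches(image, size):
    """Split a 2-D grid into size-by-size tiles, in row-major order."""
    rows, cols = len(image), len(image[0])
    return [
        [row[c:c + size] for row in image[r:r + size]]
        for r in range(0, rows, size)
        for c in range(0, cols, size)
    ]

# A 4x4 stand-in image whose "pixels" are just the numbers 0..15.
image = [[r * 4 + c for c in range(4)] for r in range(4)]
tiles = patches(image, 2)  # four 2x2 tiles
```

Change `size` and the system "understands" the same image differently: the decomposition, and hence everything learned from it, is a human design choice.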
Human Intelligence: Defining Problems
Human intelligence doesn’t have this set of limitations, which is what makes us so much more effective at problem-solving. It’s the human ability to ‘create’ problems that makes us so good at solving them. There’s an element of contextual understanding and decision-making in the way humans approach problems.
AIs might be able to unpack problems or find new ways into them, but they can’t define the problem they’re trying to solve.
Algorithmic bias has come into focus in recent years, with an increasing number of scandals around biased AI systems. Of course, these biases originate with the humans making the algorithms, but that underlines the point: algorithmic biases can only be identified by human intelligence.
Human and Artificial Intelligence Should Complement Each Other
We must remember that artificial intelligence and machine learning aren’t simply things that ‘exist’ that we can no longer control. They are built, engineered, and designed by us. This mindset puts us in control of the future, and makes algorithms even more elegant and remarkable.
Image Credit: Liu zishan/Shutterstock