Big data, personalized medicine, artificial intelligence. String these three buzzphrases together, and what do you have?
A system that may revolutionize the future of healthcare, by bringing sophisticated health data directly to patients for them to ponder, digest, and act upon—and potentially stop diseases in their tracks.
At Singularity University’s Exponential Medicine conference in San Diego this week, Dr. Ran Balicer, director of the Clalit Research Institute in Israel, painted a futuristic picture of how big data can merge with personalized healthcare into an app-based system in which the patient is in control.
Dr. Ran Balicer at Exponential Medicine
Picture this: instead of you going to a physician with your ailments, your doctor calls you with some bad news: “Within six hours, you’re going to have a heart attack. So why don’t you come into the clinic and we can fix that.” Crisis averted.
Following the treatment, you’re at home monitoring your biomarkers, lab test results, and other health information through an app with a clean, beautiful user interface. Within the app, you can see how various lifestyle habits—smoking, drinking, insufficient sleep—change your risk of future cardiovascular disease by toggling their levels up or down.
There’s more: you can also set a health goal within the app—for example, stop smoking—which automatically informs your physician. The app will then suggest pharmaceuticals to help you ditch the nicotine and automatically send the prescription to your local drugstore. You’ll also immediately see a list of nearby support groups that can help you reach your health goal.
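To make the idea concrete, here is a purely illustrative sketch of what such a toggle might compute behind the scenes—a toy logistic risk model in Python, with weights and a baseline invented for demonstration rather than drawn from any clinical study:

```python
# Purely illustrative: a toy cardiovascular-risk "toggle" like the one the
# hypothetical app exposes. Weights and baseline are made up, not clinical.
import math

# Hypothetical log-odds weights per habit level (0 = none, higher = more).
WEIGHTS = {"smoking": 0.9, "drinking": 0.4, "insufficient_sleep": 0.3}
BASELINE_LOG_ODDS = -3.0  # assumed baseline for this toy patient

def cvd_risk(habits: dict) -> float:
    """Return a 0-1 risk estimate from the toggled habit levels."""
    log_odds = BASELINE_LOG_ODDS + sum(
        WEIGHTS[habit] * level for habit, level in habits.items()
    )
    return 1 / (1 + math.exp(-log_odds))

print(cvd_risk({"smoking": 2, "drinking": 1, "insufficient_sleep": 1}))  # habits on
print(cvd_risk({"smoking": 0, "drinking": 1, "insufficient_sleep": 1}))  # quit smoking
```

Sliding a habit to zero and watching the risk number drop is the whole pitch: the model stays behind the curtain, the tradeoff is what the patient sees.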
With this hefty dose of AI, you’re in charge of your health—in fact, probably more so than under current healthcare systems.
Sound fantastical? In fact, this type of preemptive care is already being provided in some countries, including Israel, at a massive scale, said Balicer. By mining datasets with deep learning and other powerful AI tools, we can predict the future—and put it into the hands of patients.
The Israeli Advantage
In order to apply big data approaches to medicine, you first need a giant database.
Israel is ahead of the game in this regard. With decades of electronic health records aggregated in a central warehouse, Israel offers a wealth of health-related data on the scale of millions of people and billions of data points. The data is also incredibly varied, covering lab tests, drugs, hospital admissions, medical procedures, and more.
One of Balicer’s early successes was an algorithm that predicts diabetes, which allowed the team to notify physicians to target their care. Clalit has also been busy digging into data that predicts winter pneumonia, osteoporosis, and a long list of other preventable diseases.
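Clalit hasn’t published its code, but the general shape of such a predictive system is easy to sketch. The snippet below trains a standard classifier on synthetic, EHR-style features—every feature, label, and threshold here is invented for illustration, not taken from the actual algorithm:

```python
# A minimal sketch of EHR-based risk prediction, in the spirit of (but not
# the same as) Clalit's diabetes algorithm. All data below is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical EHR features: age, BMI, fasting glucose, HbA1c, admissions.
X = np.column_stack([
    rng.normal(55, 12, n),    # age (years)
    rng.normal(28, 5, n),     # BMI
    rng.normal(100, 15, n),   # fasting glucose (mg/dL)
    rng.normal(5.6, 0.6, n),  # HbA1c (%)
    rng.poisson(1, n),        # hospital admissions
])
# Synthetic label loosely tied to glucose and HbA1c, for demonstration only.
y = ((X[:, 2] > 110) & (X[:, 3] > 5.9)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# Probability of future diabetes for each held-out patient; high scorers
# could be flagged to the treating physician for targeted care.
risk = model.predict_proba(X_te)[:, 1]
print(f"patients flagged for follow-up: {(risk > 0.5).sum()}")
```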
So far, Balicer’s predictive health system has only been tested on a pilot group of patients, but he is expecting to roll out the platform to all patients in the database in the next few months.
Truly Personalized Medicine
To Balicer, whatever a machine can do better than a human, it should be welcome to do. AI diagnosticians have already enjoyed plenty of successes—but they mostly collaborate with physicians, at a point when the patient is already ill.
A particularly powerful use of AI in medicine is to bring insights and trends directly to the patient, such that they can take control over their own health and medical care.
For example, take the problem of tailored drug dosing. Current drug doses are based on average results from clinical trials—the dosing is not tailored to any specific patient’s genetic and health makeup. But what if a doctor had already seen millions of other patients with cases similar to yours, and could generate dosing recommendations more relevant to you based on that particular group of patients?
Such personalized recommendations are beyond the ability of any single human doctor. But with the help of AI, which can quickly comb massive datasets for similar cases, doctors may soon be able to prescribe individually tailored medications.
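As a rough sketch of the idea—not any real system—a nearest-neighbor lookup over historical patient records captures the “millions of similar patients” intuition. All the data, feature names, and the dosing formula below are synthetic stand-ins:

```python
# Hedged sketch of "similar patient" dosing: find the k most similar past
# patients and summarize the doses that worked for them. Data is synthetic.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
# Hypothetical patient vectors: [age, weight_kg, kidney_function, genotype].
past_patients = rng.normal([60, 80, 90, 0.5], [10, 15, 20, 0.5], (10000, 4))
# Pretend dose (mg) that produced a good outcome for each past patient.
good_dose_mg = 5 + 0.05 * past_patients[:, 1] + rng.normal(0, 0.5, 10000)

nn = NearestNeighbors(n_neighbors=50).fit(past_patients)
new_patient = np.array([[58, 72, 95, 1.0]])
_, idx = nn.kneighbors(new_patient)

# Recommend the median dose that worked for the 50 most similar patients.
print(f"suggested dose: {np.median(good_dose_mg[idx[0]]):.1f} mg")
```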
Tailored treatment doesn’t stop there. Another issue with pharmaceuticals and treatment regimens is that they often come with side effects: potentially health-threatening reactions that may or may not happen to you, depending on your individual biology.
Back in 2017, the New England Journal of Medicine launched the SPRINT Data Analysis Challenge, which urged physicians and data analysts to identify novel clinical findings using shared clinical trial data.
Working with Dr. Noa Dagan at the Clalit Research Institute, Balicer and his team developed an algorithm that recommends whether or not a patient should receive a particularly intensive treatment regimen for hypertension.
Rather than simply looking at one outcome—normalized blood pressure—the algorithm takes into account an individual’s specific characteristics, laying out the treatment’s predicted benefits and harms for a particular patient.
“We built thousands of models for each patient to comprehensively understand the impact of the treatment for the individual; for example, a reduced risk for stroke and cardiovascular-related deaths could be accompanied by an increase in serious renal failure,” said Balicer. “This approach allows a truly personalized balance—allowing patients and their physicians to ultimately decide if the risks of the treatment are worth the benefits.”
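The published algorithm is far more sophisticated, but the benefit/harm layout Balicer describes can be sketched like this, with placeholder numbers standing in for trained per-outcome models:

```python
# Illustrative only: estimate per-outcome risk with and without the intensive
# regimen, then report the deltas. The risk function is a placeholder, not
# Clalit's model; every number here is invented.
OUTCOMES = ["stroke", "cardiovascular_death", "serious_renal_failure"]

def predicted_risk(patient: dict, outcome: str, intensive: bool) -> float:
    """Stand-in for a trained per-outcome, per-patient risk model."""
    base = {"stroke": 0.10, "cardiovascular_death": 0.08,
            "serious_renal_failure": 0.03}[outcome]
    effect = {"stroke": -0.03, "cardiovascular_death": -0.02,
              "serious_renal_failure": +0.02}[outcome]  # harm can rise
    return base + (effect if intensive else 0.0)

patient = {"age": 67, "systolic_bp": 152}  # hypothetical patient
for outcome in OUTCOMES:
    delta = (predicted_risk(patient, outcome, True)
             - predicted_risk(patient, outcome, False))
    print(f"{outcome}: {delta:+.2%} absolute risk under intensive treatment")
```

Note how the renal-failure delta is positive while the others are negative: the benefit and the harm sit side by side, exactly the tradeoff the quote describes.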
This is already personalized medicine at its finest. But Balicer didn’t stop there.
We are not the sum of our biology and medical stats, he said. A truly personalized approach needs to take into account a patient’s needs and goals, and the sacrifices and tradeoffs they’re willing to make, rather than having the physician make decisions for them.
Balicer’s preventative system adds this layer of complexity by giving weights to different outcomes based on patients’ input of their own health goals. Rather than blindly following big data, the system holistically integrates the patient’s opinion to make recommendations.
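The weighting step itself is simple to illustrate. In this sketch, the risk deltas come from models like the one above, while the weights are the patient’s own stated priorities—both sets of numbers are invented:

```python
# Sketch of the final weighting step: combine per-outcome risk deltas with
# patient-supplied weights into one personalized score. The weights encode
# this patient's priorities, not clinical constants.
deltas = {"stroke": -0.03, "cardiovascular_death": -0.02,
          "serious_renal_failure": +0.02}  # from the models sketched above
patient_weights = {"stroke": 1.0, "cardiovascular_death": 1.0,
                   "serious_renal_failure": 2.5}  # fears renal failure most

score = sum(patient_weights[o] * d for o, d in deltas.items())
print("lean toward intensive regimen" if score < 0 else "lean toward standard care")
```

A patient who weighs renal failure heavily can tip the same clinical evidence toward a different recommendation than a neighbor with identical lab results.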
Balicer’s system is just one example of how AI can truly transform personalized healthcare. The next big challenge is to work with physicians to further optimize these systems, so that doctors can easily integrate them into their workflows and embrace the technology.
“Health systems will not be replaced by algorithms, rest assured,” concluded Balicer, “but health systems that don’t use algorithms will be replaced by those that do.”
Image Credit: Magic mine / Shutterstock.com
If a recent project from Google’s DeepMind were a recipe, you would take a pair of AI systems, images of animals, and a whole lot of computing power. Mix it all together, and you’d get a series of imagined animals dreamed up by one of the AIs. A look through the research paper about the project—or this open Google Folder of images it produced—will likely lead you to agree that the results are a mix of impressive and downright eerie.
But the eerie factor doesn’t mean the project shouldn’t be considered a success and a step forward for future uses of AI.
From GAN To BigGAN
The team behind the project consists of Andrew Brock, a PhD student at the Edinburgh Centre for Robotics and an intern at DeepMind, and DeepMind researchers Jeff Donahue and Karen Simonyan.
They used a so-called Generative Adversarial Network (GAN) to generate the images. In a GAN, two AI systems compete in a game-like manner. One AI produces images of an object or creature. The human equivalent would be drawing pictures of, for example, a dog—without necessarily knowing exactly what a dog looks like. Those images are then shown to the second AI, which has already been fed images of dogs. The second AI then tells the first one how far off its efforts were. The first one uses this information to improve its images. The two go back and forth in an iterative process, and the goal is for the first AI to become so good at creating images of dogs that the second can’t tell the difference between its creations and actual pictures of dogs.
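For the curious, that whole back-and-forth fits in a few dozen lines. The sketch below is a minimal PyTorch GAN trained on toy vectors rather than images—nothing like BigGAN’s architecture or scale, but the same adversarial loop:

```python
# A minimal GAN training loop in PyTorch, to make the generator/discriminator
# back-and-forth concrete. Toy-sized and trained on synthetic vectors; BigGAN
# operates on full images with vastly larger networks.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=32):
    # Stand-in for a batch of real training images.
    return torch.randn(n, data_dim) * 0.5 + 1.0

for step in range(1000):
    # 1) Discriminator: learn to score real data high, fakes low.
    real = real_batch()
    fake = G(torch.randn(32, latent_dim)).detach()
    d_loss = (loss_fn(D(real), torch.ones(32, 1))
              + loss_fn(D(fake), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator: produce fakes the discriminator scores as real.
    fake = G(torch.randn(32, latent_dim))
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```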
The team was able to draw on Google’s vast vaults of computational power to create images of a quality and lifelikeness beyond almost anything seen before. In part, this was achieved by feeding the GAN far more images at a time than is usual. According to IFLScience, the standard is to feed a GAN about 64 images per training step; the research team fed the system batches of about 2,000 images, leading to it being nicknamed BigGAN.
Their results showed that feeding the system more images at each step and using masses of raw computing power markedly increased the GAN’s precision and its ability to create lifelike renditions of the subjects it was trained to reproduce.
“The main thing these models need is not algorithmic improvements, but computational ones. […] When you increase model capacity and you increase the number of images you show at every step, you get this twofold combined effect,” Andrew Brock told Fast Company.
The Power Drain
The team used 512 of Google’s AI-focused Tensor Processing Units (TPUs) to generate 512-pixel images. Each experiment took between 24 and 48 hours to run.
That kind of computing power needs a lot of electricity. As artist and Library of Congress Innovator-in-Residence Jer Thorp put it, tongue in cheek, on Twitter: “The good news is that AI can now give you a more believable image of a plate of spaghetti. The bad news is that it used roughly enough energy to power Cleveland for the afternoon.”
Thorp added that a back-of-the-envelope calculation suggested the computations behind the images would require about 27,000 square feet of solar panels to supply adequate power.
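Thorp’s exact assumptions weren’t published, and the placeholder figures below—TPU wattage and solar yield especially—could easily be off by severalfold, which is why this kind of calculation stays on the back of the envelope:

```python
# Reproducing the *spirit* of the back-of-the-envelope math with loudly
# assumed numbers; the result shifts by multiples if the assumptions change.
TPUS = 512
HOURS = 48                  # upper end of the reported 24-48 hour runs
WATTS_PER_TPU = 200         # assumption; real per-chip figures aren't public
PANEL_WATTS_PER_SQFT = 15   # rough rule of thumb for solar panel output

energy_kwh = TPUS * WATTS_PER_TPU * HOURS / 1000
panel_sqft = TPUS * WATTS_PER_TPU / PANEL_WATTS_PER_SQFT
print(f"~{energy_kwh:,.0f} kWh per experiment; "
      f"~{panel_sqft:,.0f} sq ft of panels to match the draw")
```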
BigGAN’s images have been hailed by researchers, with Oriol Vinyals, research scientist at DeepMind, rhetorically asking if these were the ‘Best GAN samples yet?’
However, they are still not perfect. The number of legs on a given creature is one example of where the BigGAN seemed to struggle. The system was good at recognizing that something like a spider has a lot of legs, but seemed unable to settle on how many ‘a lot’ was supposed to be. The same applied to dogs, especially if the images were supposed to show said dogs in motion.
Those eerie images are contrasted by other renditions that show such lifelike qualities that a human mind has a hard time identifying them as fake. Spaniels with lolling tongues, ocean scenery, and butterflies were all rendered with what looks like perfection. The same goes for an image of a hamburger that was good enough to make me stop writing because I suddenly needed lunch.
The Future Use Cases
GANs were first introduced in 2014, and given their relative youth, researchers and companies are still busy exploring possible use cases.
One possible use is image correction—making pixelated images clearer. Not only could this help your future holiday snaps, but it could also be applied in industries such as space exploration. A team from the University of Michigan and the Max Planck Institute has developed a method for GANs to create images from text descriptions. At Berkeley, a research group has used GANs to create an interface that lets users change the shape, size, and design of objects, including a handbag.
For anyone who has seen a film like Wag the Dog or read 1984, the possibilities are also starkly alarming. GANs could, in other words, make fake news look more real than ever before.
For now, it seems that while not all GANs require the computational and electrical power of BigGAN, there is still some way to go before these potential use cases become reality. However, if there’s one lesson from Moore’s Law and exponential technology, it’s that today’s technical roadblock quickly becomes tomorrow’s minor issue.
Image Credit: Ondrej Prosicky / Shutterstock