Tag Archives: future
#434648 The Pediatric AI That Outperformed ...
Training a doctor takes years of grueling work in universities and hospitals. Building a doctor may be as easy as teaching an AI how to read.
Artificial intelligence has taken another step towards becoming an integral part of 21st-century medicine. New research out of Guangzhou, China, published February 11th in Nature Medicine, has demonstrated a natural language processing AI that is capable of outperforming rookie pediatricians in diagnosing common childhood ailments.
The massive study examined the electronic health records (EHRs) of nearly 600,000 patients over an 18-month period at the Guangzhou Women and Children’s Medical Center and then compared AI-generated diagnoses against new assessments from physicians with a range of experience.
The verdict? On average, the AI was noticeably more accurate than junior physicians and nearly as reliable as the more senior ones. These results are the latest demonstration that artificial intelligence is on the cusp of becoming a healthcare staple on a global scale.
Less Like a Computer, More Like a Person
To outshine human doctors, the AI first had to become more human. Like IBM’s Watson, the pediatric AI leverages natural language processing, in essence “reading” written notes from EHRs much as a human doctor would review those same records. But the similarities to human doctors don’t end there. The AI is a machine learning classifier (MLC), capable of sorting the information it learns from the EHRs into diagnostic categories and using those categories to improve its accuracy.
Like traditionally trained pediatricians, the AI broke cases down into major organ groups and infection areas (upper/lower respiratory, gastrointestinal, etc.) before breaking them down even further into subcategories. It could then develop associations between various symptoms and organ groups and use those associations to improve its diagnoses. This hierarchical approach mimics the deductive reasoning human doctors employ.
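To make the idea concrete, here is a minimal sketch of a two-level “organ group first, diagnosis second” classifier, assuming Python and scikit-learn. The notes, labels, and model choices below are illustrative placeholders, not the study’s actual pipeline or its 55 diagnosis codes.

```python
# Minimal sketch of a hierarchical diagnostic classifier, assuming scikit-learn.
# All notes, organ-group labels, and diagnoses below are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical free-text EHR notes with an organ-group label and a final diagnosis.
notes = [
    ("cough and wheezing for three days", "respiratory", "asthma"),
    ("fever, runny nose, sore throat", "respiratory", "upper respiratory infection"),
    ("vomiting and watery diarrhea", "gastrointestinal", "acute gastroenteritis"),
    ("abdominal pain after meals", "gastrointestinal", "functional abdominal pain"),
]
texts = [t for t, _, _ in notes]
systems = [s for _, s, _ in notes]
diagnoses = [d for _, _, d in notes]

# Level 1: route each note to a major organ group.
system_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
system_clf.fit(texts, systems)

# Level 2: a separate diagnosis classifier per organ group, trained only on its notes.
diagnosis_clfs = {}
for system in set(systems):
    idx = [i for i, s in enumerate(systems) if s == system]
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit([texts[i] for i in idx], [diagnoses[i] for i in idx])
    diagnosis_clfs[system] = clf

def diagnose(note):
    """Predict an organ group first, then a diagnosis within that group."""
    system = system_clf.predict([note])[0]
    return diagnosis_clfs[system].predict([note])[0]

print(diagnose("two days of cough and mild wheeze"))
```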
Another key strength of the AI developed for this study was the enormous size of the dataset collected to teach it: 1,362,559 outpatient visits from 567,498 patients yielded some 101.6 million data points for the MLC to devour on its quest for pediatric dominance. This allowed the AI the depth of learning needed to distinguish and accurately select from the 55 different diagnosis codes across the various organ groups and subcategories.
When comparing against the human doctors, the study used 11,926 records from an unrelated group of children, giving both the MLC and the 20 humans it was compared against an even playing field. The results were clear: while cohorts of senior pediatricians performed better than the AI, junior pediatricians (those with 3-15 years of experience) were outclassed.
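As a rough illustration of that head-to-head setup (not the study’s actual scoring code), the comparison amounts to letting the model and each physician cohort label the same held-out records and then comparing accuracy per group:

```python
# Hedged sketch of the head-to-head evaluation: the model and physician cohorts
# label the same held-out records. All diagnoses and raters here are made up.
from statistics import mean

def accuracy(predicted, actual):
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

truth       = ["asthma", "gastroenteritis", "sinusitis", "asthma", "croup"]
model_preds = ["asthma", "gastroenteritis", "sinusitis", "bronchitis", "croup"]
physician_preds = {
    "junior": [["asthma", "sinusitis", "sinusitis", "asthma", "croup"]],
    "senior": [["asthma", "gastroenteritis", "sinusitis", "asthma", "croup"]],
}

print("model accuracy:", accuracy(model_preds, truth))
for cohort, raters in physician_preds.items():
    print(cohort, "accuracy:", mean(accuracy(r, truth) for r in raters))
```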
Helping, Not Replacing
While the research used a competitive analysis to measure the success of the AI, the results should be seen as anything but hostile to human doctors. The near future of artificial intelligence in medicine will see these machine learning programs augment, not replace, human physicians. The authors of the study specifically call out augmentation as the key short-term application of their work. Triaging incoming patients via intake forms, performing massive metastudies using EHRs, providing rapid ‘second opinions’—the applications for an AI doctor that is better-but-not-the-best are as varied as the healthcare industry itself.
That’s only considering how artificial intelligence could make a positive impact immediately upon implementation. It’s easy to see how long-term use of a diagnostic assistant could reshape the way modern medical institutions approach their work.
Look at how the MLC results fit snugly between the junior and senior physician groups. Essentially, it took nearly 15 years before a physician could consistently out-diagnose the machine. That’s a decade and a half wherein an AI diagnostic assistant would be an invaluable partner—both as a training tool and a safety measure. Likewise, on the other side of the experience curve you have physicians whose performance could be continuously leveraged to improve the AI’s effectiveness. This is a clear opportunity for a symbiotic relationship, with humans and machines each assisting the other as they mature.
Closer to Us, But Still Dependent on Us
No matter the ultimate application, the AI doctors of the future are drawing nearer to us step by step. This latest research is a demonstration that artificial intelligence can mimic the results of human deductive reasoning even in some of the most complex and important decision-making processes. True, the MLC required input from humans to function; both the initial data points and the cases used to evaluate the AI depended on EHRs written by physicians. While every effort was made to design a test schema that removed any indication of the eventual diagnosis, some “data leakage” is bound to occur.
In other words, when AIs use human-created data, they inherit human insight to some degree. Yet the progress made in machine imaging, chatbots, sensors, and other fields all suggest that this dependence on human input is more about where we are right now than where we could be in the near future.
Data, and More Data
That near future may also have some clear winners and losers. For now, those winners seem to be the institutions that can capture and apply the largest sets of data. With a rapidly digitized society gathering incredible amounts of data, China has a clear advantage. Combined with its relatively relaxed approach to privacy, it is likely to continue as one of the driving forces behind machine learning and its applications. So too will Google/Alphabet, with its massive medical studies. Data is the uranium in this AI arms race, and everyone seems to be scrambling to collect more.
In a global community that seems increasingly aware of the potential problems arising from this need for and reliance on data, it’s nice to know there’ll be an upside as well. The technology behind AI medical assistants is looking more and more mature—even if we are still struggling to find exactly where, when, and how that technology should first become universal.
Yet wherever we see the next push to make AI a standard tool in a real-world medical setting, I have little doubt it will greatly improve the lives of human patients. Today Doctor AI is performing as well as a human colleague with more than 10 years of experience. By next year or so, it may take twice as long for humans to be competitive. And in a decade, the combined medical knowledge of all human history may be a tool as common as a stethoscope in your doctor’s hands.
Image Credit: Nadia Snopek / Shutterstock.com
#434623 The Great Myth of the AI Skills Gap
One of the most contentious debates in technology is around the question of automation and jobs. At issue is whether advances in automation, specifically with regards to artificial intelligence and robotics, will spell trouble for today’s workers. This debate is played out in the media daily, and passions run deep on both sides of the issue. In the past, however, automation has created jobs and increased real wages.
A widespread concern with the current scenario is that the workers most likely to be displaced by technology lack the skills needed to do the new jobs that same technology will create.
Let’s look at this concern in detail. Those who fear automation will hurt workers start by pointing out that there is a wide range of jobs, from low-pay, low-skill positions to high-pay, high-skill ones.
They then point out that technology primarily creates high-paying jobs, like geneticists.
Meanwhile, technology destroys low-wage, low-skill jobs like those in fast food restaurants.
Then, those who are worried about this dynamic often pose the question, “Do you really think a fast-food worker is going to become a geneticist?”
They worry that we are about to face a huge amount of systemic permanent unemployment, as the unskilled displaced workers are ill-equipped to do the jobs of tomorrow.
It is important to note that both sides of the debate are in agreement at this point. Unquestionably, technology destroys low-skilled, low-paying jobs while creating high-skilled, high-paying ones.
So, is that the end of the story? As a society are we destined to bifurcate into two groups, those who have training and earn high salaries in the new jobs, and those with less training who see their jobs vanishing to machines? Is this latter group forever locked out of economic plenty because they lack training?
No.
The question, “Can a fast food worker become a geneticist?” is where the error comes in. Fast food workers don’t become geneticists. What happens is that a college biology professor becomes a geneticist. Then a high-school biology teacher gets the college job. Then the substitute teacher gets hired on full-time to fill the high school teaching job. All the way down.
The question is not whether those in the lowest-skilled jobs can do the high-skilled work. Instead the question is, “Can everyone do a job just a little harder than the job they have today?” If so, and I believe very deeply that this is the case, then every time technology creates a new job “at the top,” everyone gets a promotion.
This isn’t just an academic theory—it’s 200 years of economic history in the west. For 200 years, with the exception of the Great Depression, unemployment in the US has been between 2 percent and 13 percent. Always. Europe’s range is a bit wider, but not much.
If I took 200 years of unemployment rates and graphed them, and asked you to find where the assembly line took over manufacturing, or where steam power rapidly replaced animal power, or the lightning-fast adoption of electricity by industry, you wouldn’t be able to find those spots. They aren’t even blips in the unemployment record.
You don’t even have to look back as far as the assembly line to see this happening. It has happened non-stop for 200 years. Every fifty years, we lose about half of all jobs, and this has been pretty steady since 1800.
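For scale, here is the back-of-the-envelope arithmetic behind that claim: losing half of all jobs over fifty years implies a net turnover of only about 1.4 percent of jobs per year.

```python
# If half of all jobs disappear over 50 years, the implied average annual churn
# rate r satisfies (1 - r) ** 50 == 0.5.
r = 1 - 0.5 ** (1 / 50)
print(f"implied net job turnover: about {r:.1%} per year")  # roughly 1.4% per year
```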
How is it that for 200 years we have lost half of all jobs every half century, but never has this process caused unemployment? Not only has it not caused unemployment, but during that time, we have had full employment against the backdrop of rising wages.
How can wages rise while half of all jobs are constantly being destroyed? Simple. Because new technology always increases worker productivity. It creates new jobs, like web designer and programmer, while destroying low-wage backbreaking work. When this happens, everyone along the way gets a better job.
Our current situation isn’t any different from the past. The nature of technology has always been to create high-skilled jobs and increase worker productivity. This is good news for everyone.
People often ask me what their children should study to make sure they have a job in the future. I usually say it doesn’t really matter. If I knew everything I know now and went back to the mid 1980s, what could I have taken in high school to make me better prepared for today? There is only one class, and it wasn’t computer science. It was typing. Who would have guessed?
The great skill is to be able to learn new things, and luckily, we all have that. In fact, that is our singular ability as a species. What I do in my day-to-day job consists largely of skills I have learned as the years have passed. In my experience, if you ask people at all job levels, “Would you like a little more challenging job to make a little more money?” almost everyone says yes.
That’s all it has taken for us to collectively get here today, and that’s all we need going forward.
Image Credit: Lightspring / Shutterstock.com
#434599 This AI Can Tell Your Age by Analyzing ...
The plethora of bacteria and other tiny organisms that live in your gut, often referred to as the gut microbiome, don’t just help you digest food and fight disease. As detailed in a new study, they also provide a very accurate biological clock that shows your physical age—a fact that may open up wide-ranging possibilities for health and longevity studies.
Combining Machine Learning and Your Gut
The link between the gut biome and age is described by longevity researcher Alex Zhavoronkov and a team of his colleagues at Insilico Medicine, an artificial intelligence startup focused on drug discovery, biomarker development, and aging research.
Relatively little is known about how our gut biomes transition from one stage to another as we age, or about the links between our age and the state of our gut biomes. In their paper, which is awaiting peer review but can be found on the preprint server bioRxiv, the team describes how they examined 3,663 curated samples of gut bacteria from 1,165 healthy people, aged 20 to 90, from countries in Europe, Asia, and North America. Roughly a third of the samples came from people aged 20 to 39, a third from those aged 40 to 59, and a third from those aged 60 to 90.
A deep learning algorithm was then trained on data on 1,673 different microbial species from 90 percent of the samples. The AI was then tasked with predicting the ages of the remaining 10 percent of participants solely from data on their gut bacteria.
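As a rough sketch of that setup, assuming scikit-learn rather than the deep learning framework the team actually used, the task reduces to training a regressor on microbial-abundance profiles for 90 percent of the samples and predicting chronological age for the held-out 10 percent. The random data below merely stands in for the study’s curated abundance table.

```python
# Hedged sketch of the 90/10 age-prediction setup; random data stands in for the
# study's real 1,673-species abundance profiles, and the model is illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_samples, n_species = 500, 1673
abundances = rng.random((n_samples, n_species))  # relative abundance per species
ages = rng.uniform(20, 90, size=n_samples)       # chronological age in years

X_train, X_test, y_train, y_test = train_test_split(
    abundances, ages, test_size=0.10, random_state=0
)

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)

predicted_ages = model.predict(X_test)
print("mean absolute error (years):", mean_absolute_error(y_test, predicted_ages))
```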
The Accurate Bacterial Clock
The results, described as the first method to predict a human’s chronological age via gut microbiota analysis, showed that the system was able to predict age to within four years based on the gut bacteria data. Furthermore, the results seem to indicate that 39 of the microbial species analyzed are particularly important for accurately predicting age.
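One way to surface which species carry the most predictive weight is permutation importance, which measures how much the prediction error grows when a single species’ abundances are shuffled. The snippet below continues the sketch above, reusing its model, X_test, and y_test, and is an illustrative stand-in for whatever ranking method the authors actually used.

```python
# Continuation of the sketch above (requires model, X_test, y_test from it).
# Permutation importance here is illustrative, not the authors' actual method.
import numpy as np
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)
top_species = np.argsort(result.importances_mean)[::-1][:39]
print("indices of the 39 most informative (synthetic) species:", top_species)
```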
The study also showed that our gut microbiomes change over time. While some microbes’ numbers dwindle as we age, others seem to become more abundant. Age is not the only factor that influences the prevalence of different types of bacteria in a person’s digestive system; diet, sleep, and physical activity are all thought to be contributing factors.
Science Magazine quotes Zhavoronkov as stating that the study could lay the foundation for a “microbiome aging clock” that could serve as a baseline in future research on how a person’s gut ages and how medicine, diet, and alcohol consumption affect longevity.
Living Longer, Better
Studies of our microbiome’s influence on longevity add another dimension to our understanding of how and why we age. Other avenues of study include looking at the length of telomeres, the tips of chromosomes that are believed to play an important role in the aging process, and our DNA.
The same can be said of the role microbiomes play in relation to illnesses and conditions including allergies, diabetes, some types of cancer, and psychological states such as depression. Scientists at Harvard are even developing genetically engineered ‘telephone’ bacteria that would be able to gather precise information about the state of the gut microbiome.
A positive side effect of many of the studies is that alongside dedicated microbiome data collection efforts, they add new data—the food of AI. While we are already gaining a better understanding of the gut biome, it is not a large leap of logic to predict that AI will feast on the new data and assist us in getting an even keener understanding of what is going on in our gut and what it means for our health.
Image Credit: GiroScience / Shutterstock.com