Tag Archives: discovery

#435056 How Researchers Used AI to Better ...

A few years back, DeepMind’s Demis Hassabis famously prophesied that AI and neuroscience would positively feed into each other in a “virtuous circle.” If realized, this would fundamentally expand our insight into intelligence, both machine and human.

We’ve already seen some proofs of concept, at least in the brain-to-AI direction. For example, memory replay, a biological mechanism that fortifies our memories during sleep, also boosted AI learning when abstractly appropriated into deep learning models. Reinforcement learning, loosely based on our motivation circuits, is now behind some of AI’s most powerful tools.
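To make the parallel concrete, here is a minimal sketch of the experience-replay buffer used in deep reinforcement learning agents such as DQN, the AI-side cousin of biological memory replay. The class below is an illustration of the idea, not any particular library’s implementation:

```python
# a minimal experience-replay buffer of the kind used in deep RL agents:
# the agent stores transitions and re-trains on random "replayed" samples,
# loosely analogous to the brain replaying memories during sleep
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest memories are evicted first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # random sampling breaks the correlation between consecutive experiences
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer()
for t in range(100):
    buf.push(t, 0, 1.0, t + 1, False)  # toy transitions
batch = buf.sample(32)                 # a training minibatch of old experience
```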

Hassabis is about to be proven right again.

Last week, two studies independently tapped into the power of artificial neural networks (ANNs) to solve a 70-year-old neuroscience mystery: how does our visual system perceive reality?

The first, published in Cell, used generative networks to evolve DeepDream-like images that hyper-activate complex visual neurons in monkeys. These machine artworks are pure nightmare fuel to the human eye, but together they revealed a fundamental “visual hieroglyph” that may form a basic rule for how we piece visual stimuli together into perception.

In the second study, a team used a deep ANN model—one thought to mimic biological vision—to synthesize new patterns tailored to control certain networks of visual neurons in the monkey brain. When the images were shown directly to monkeys, the machine-generated artworks reliably activated the predicted populations of neurons. Future improved ANN models could allow even better control, giving neuroscientists a powerful noninvasive tool to study the brain. The work was published in Science.

The individual results, though fascinating, aren’t necessarily the point. Rather, they illustrate how scientists are now striving to complete the virtuous circle: tapping AI to probe natural intelligence. Vision is only the beginning—the tools can potentially be expanded into other sensory domains. And the more we understand about natural brains, the better we can engineer artificial ones.

It’s a “great example of leveraging artificial intelligence to study organic intelligence,” commented Dr. Roman Sandler at Kernel.co on Twitter.

Why Vision?
ANNs and biological vision have quite the history.

In the late 1950s, the legendary neuroscientist duo David Hubel and Torsten Wiesel became some of the first to use mathematical equations to understand how neurons in the brain work together.

In a series of experiments—many using cats—the team carefully dissected the structure and function of the visual cortex. Using myriad images, they revealed that vision is processed in a hierarchy: neurons in “earlier” brain regions, those closer to the eyes, tend to activate when they “see” simple patterns such as lines. As we move deeper into the brain, from the early V1 to a nub located slightly behind our ears, the IT cortex, neurons increasingly respond to more complex or abstract patterns, including faces, animals, and objects. The discovery led some scientists to call certain IT neurons “Jennifer Aniston cells,” which fire in response to pictures of the actress regardless of lighting, angle, or haircut. That is, IT neurons somehow distill visual information into the “gist” of things.

That’s not trivial. How complex neural connections increasingly abstract what we see into what we think we see—what we perceive—is also a central question in machine vision: how can we teach machines to transform numbers encoding stimuli into dots, lines, and angles that eventually form “perceptions” and “gists”? The answer could transform self-driving cars, facial recognition, and other computer vision applications as they learn to generalize better.

Hubel and Wiesel’s Nobel-prize-winning studies heavily influenced the birth of ANNs and deep learning. Many early “feed-forward” ANN architectures were based on our visual system; even today, the idea of increasing layers of abstraction—for perception or reasoning—guides computer scientists in building AI that generalizes better. The early romance between vision and deep learning is perhaps the bond that kicked off our current AI revolution.

It only seems fair that AI would feed back into vision neuroscience.

Hieroglyphs and Controllers
In the Cell study, a team led by Dr. Margaret Livingstone at Harvard Medical School tapped into generative networks to unravel IT neurons’ complex visual alphabet.

Scientists have long known that neurons in earlier visual regions (V1) tend to fire in response to “grating patches” oriented in certain ways. Using a limited set of these patches like letters, V1 neurons can “express a visual sentence” and represent any image, said Dr. Arash Afraz at the National Institutes of Health, who was not involved in the study.

But how IT neurons operate remained a mystery. Here, the team used a combination of genetic algorithms and deep generative networks to “evolve” computer art for every studied neuron. In seven monkeys, the team implanted electrodes into various parts of the visual IT region so that they could monitor the activity of individual neurons.

The team showed each monkey an initial set of 40 images. They then kept the 10 images that triggered the highest neural activity and used them to breed 30 new images, forming the next 40-image generation. After 250 generations, the technique, dubbed XDREAM, had generated a slew of images that mashed up contorted face-like shapes with lines, gratings, and abstract forms.

This image shows the evolution of an optimum image for stimulating a visual neuron in a monkey. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“The evolved images look quite counter-intuitive,” explained Afraz. Some clearly show detailed structures that resemble natural images, while others show complex structures that can’t be characterized by our puny human brains.

This figure shows natural images (right) and images evolved by neurons in the inferotemporal cortex of a monkey (left). Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“What started to emerge during each experiment were pictures that were reminiscent of shapes in the world but were not actual objects in the world,” said study author Carlos Ponce. “We were seeing something that was more like the language cells use with each other.”

This image was evolved by a neuron in the inferotemporal cortex of a monkey using AI. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
Although IT neurons don’t seem to use a simple letter alphabet, they do rely on a vast array of characters, like hieroglyphs or Chinese characters, “each loaded with more information,” said Afraz.

The adaptive nature of XDREAM turns it into a powerful tool to probe the inner workings of our brains—particularly for revealing discrepancies between biology and models.
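In code, the core of such a closed-loop search is surprisingly compact. Below is a toy sketch of the evolutionary loop described above (40 candidates per generation, the top 10 kept, 30 offspring bred from them), where generate_image and firing_rate are hypothetical stand-ins for the deep generative network and the recorded neuron:

```python
# a schematic of an XDREAM-style loop: keep the images a neuron "likes" best
# and breed the next generation from them; everything here is a toy stand-in
import numpy as np

rng = np.random.default_rng(0)
codes = rng.normal(size=(40, 256))          # generation 0: 40 random latent codes

def generate_image(code):                   # stand-in for the generative network
    return code                             # the real system decodes code -> image

def firing_rate(image):                     # stand-in for the recorded neuron
    return -np.sum((image - 1.0) ** 2)      # toy "preference" for codes near 1

for generation in range(250):
    scores = np.array([firing_rate(generate_image(c)) for c in codes])
    parents = codes[np.argsort(scores)[-10:]]              # top 10 survive
    children = []
    for _ in range(30):                                    # 30 offspring per generation
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random(256) < 0.5                       # crossover of two parents
        child = np.where(mask, a, b) + rng.normal(scale=0.1, size=256)  # mutation
        children.append(child)
    codes = np.vstack([parents, children])                 # next generation: 40 again

print(scores.max())  # the "neural response" climbs over generations
```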

The Science study, led by Dr. James DiCarlo at MIT, takes a similar approach. Using ANNs to generate new patterns and images, the team was able to selectively predict and independently control neuron populations in a mid-level visual region called V4.

“So far, what has been done with these models is predicting what the neural responses would be to other stimuli that they have not seen before,” said study author Dr. Pouya Bashivan. “The main difference here is that we are going one step further and using the models to drive the neurons into desired states.”
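Conceptually, this is close to the activation-maximization trick familiar from deep learning: treat the image itself as the thing being optimized, and ascend the gradient of a chosen unit’s response. Here is a minimal sketch of that idea using a random stand-in convolutional network, not the brain-fitted ANN the MIT team actually used:

```python
# gradient-ascent "controller image" sketch: synthesize an input that drives
# one unit of a (stand-in) vision model; the model here is a random CNN
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
model.eval()

img = torch.zeros(1, 3, 64, 64, requires_grad=True)   # start from a blank image
opt = torch.optim.Adam([img], lr=0.05)
target_unit = 3                                        # the "neuron" to drive

for _ in range(200):
    act = model(img)[0, target_unit]                   # activation of the chosen unit
    loss = -act + 1e-3 * img.pow(2).sum()              # maximize activation, keep pixels bounded
    opt.zero_grad(); loss.backward(); opt.step()

print(model(img)[0, target_unit].item())               # activation after optimization
```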

The work suggests that our current ANN models for visual computation “implicitly capture a great deal of visual knowledge” that we can’t really describe, but that the brain uses to turn visual information into perception, the authors said. By testing AI-generated images on biological vision, the team showed that today’s ANNs generalize to real brains to a meaningful degree. The results could potentially help engineer even more accurate ANN models of biological vision, which in turn could feed back into machine vision.

“One thing is clear already: Improved ANN models … have led to control of a high-level neural population that was previously out of reach,” the authors said. “The results presented here have likely only scratched the surface of what is possible with such implemented characterizations of the brain’s neural networks.”

To Afraz, the power of AI here is to find cracks in human perception—both our computational models of sensory processing and our evolved biological software itself. AI can be used “as a perfect adversarial tool to discover design cracks” of IT, said Afraz, such as finding computer art that “fools” a neuron into thinking the object is something else.

“As artificial intelligence researchers develop models that work as well as the brain does—or even better—we will still need to understand which networks are more likely to behave safely and further human goals,” said Ponce. “More efficient AI can be grounded by knowledge of how the brain works.”

Image Credit: Sangoiri / Shutterstock.com

Posted in Human Robots

#434865 5 AI Breakthroughs We’ll Likely See in ...

Convergence is accelerating disruption… everywhere! Exponential technologies are colliding into each other, reinventing products, services, and industries.

AI algorithms such as Siri and Alexa can process your voice and output helpful responses; other AIs like Face++ can recognize faces; and still others create art from scribbles, or even diagnose medical conditions.

Let’s dive into AI and convergence.

Top 5 Predictions for AI Breakthroughs (2019-2024)
My friend Neil Jacobstein is my ‘go-to expert’ in AI, with over 25 years of technical consulting experience in the field. Currently the AI and Robotics chair at Singularity University, Jacobstein is also a Distinguished Visiting Scholar in Stanford’s MediaX Program, a Henry Crown Fellow, an Aspen Institute moderator, and serves on the National Academy of Sciences Earth and Life Studies Committee. Neil predicted five trends he expects to emerge over the next five years, by 2024.

AI gives rise to new non-human pattern recognition and intelligence results

AlphaGo Zero, a machine learning program trained to play the complex game of Go, defeated its champion-beating predecessor AlphaGo in 2017 by 100 games to zero. Instead of learning from human play, AlphaGo Zero trained by playing against itself—a method known as self-play reinforcement learning.

Building its own knowledge from scratch, AlphaGo Zero demonstrates a novel form of creativity, free of human bias. Even more groundbreaking, this type of AI pattern recognition allows machines to accumulate thousands of years of knowledge in a matter of hours.
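To see why self-play works, it helps to strip the idea down to a toy setting. The sketch below trains a single shared value table by having it play tic-tac-toe against itself, rewarding each side from the game’s final outcome. It is a bare-bones cousin of AlphaGo Zero’s scheme, with none of the deep networks or tree search of the real system:

```python
# toy self-play: one value table improves by playing both sides of tic-tac-toe
import random

Q = {}  # (board, move) -> learned value, shared by both "players"

def moves(board):
    return [i for i, c in enumerate(board) if c == " "]

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

for episode in range(20000):
    board, player, history = " " * 9, "X", []
    while moves(board) and not winner(board):
        ms = moves(board)
        if random.random() < 0.1:                          # occasionally explore
            m = random.choice(ms)
        else:                                              # otherwise exploit learned values
            m = max(ms, key=lambda m: Q.get((board, m), 0.0))
        history.append((board, m, player))
        board = board[:m] + player + board[m + 1:]
        player = "O" if player == "X" else "X"
    w = winner(board)
    for b, m, p in history:                                # reward both sides from one game
        r = 0.0 if w is None else (1.0 if w == p else -1.0)
        Q[(b, m)] = Q.get((b, m), 0.0) + 0.1 * (r - Q.get((b, m), 0.0))
```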

While these systems can’t answer the question “What is orange juice?” or compete with the intelligence of a fifth grader, they are growing more and more strategically complex, merging with other forms of narrow artificial intelligence. Within the next five years, who knows what successors of AlphaGo Zero will emerge, augmenting both your business functions and day-to-day life.

Doctors risk malpractice when not using machine learning for diagnosis and treatment planning

A group of Chinese and American researchers recently created an AI system that diagnoses common childhood illnesses, ranging from the flu to meningitis. Trained on electronic health records compiled from 1.3 million outpatient visits of almost 600,000 patients, the AI program produced diagnosis outcomes with unprecedented accuracy.

While the US health system does not tout the same level of accessible universal health data as some Chinese systems, we’ve made progress in implementing AI in medical diagnosis. Dr. Kang Zhang, chief of ophthalmic genetics at the University of California, San Diego, created his own system that detects signs of diabetic blindness, relying on both text and medical images.

With an eye to the future, Jacobstein has predicted that “we will soon see an inflection point where doctors will feel it’s a risk to not use machine learning and AI in their everyday practices because they don’t want to be called out for missing an important diagnostic signal.”

Quantum advantage will massively accelerate drug design and testing

Researchers estimate that there are 10^60 possible drug-like molecules—more than the number of atoms in our solar system. But today, chemists must make drug predictions based on properties influenced by molecular structure, then synthesize numerous variants to test their hypotheses.

Quantum computing could transform this time-consuming, highly costly process into an efficient, not to mention life-changing, drug discovery protocol.

“Quantum computing is going to have a major industrial impact… not by breaking encryption,” said Jacobstein, “but by making inroads into design through massive parallel processing that can exploit superposition and quantum interference and entanglement, and that can wildly outperform classical computing.”

AI accelerates security systems’ vulnerability and defense

With the incorporation of AI into almost every aspect of our lives, cyberattacks have grown increasingly threatening. “Deep attacks” can use AI-generated content to avoid both human and AI controls.

Previous examples include fake videos of former President Obama speaking fabricated sentences, and an adversarial AI fooling another algorithm into categorizing a stop sign as a 45 mph speed limit sign. Without the appropriate protections, AI systems can be manipulated to conduct any number of destructive objectives, whether ruining reputations or diverting autonomous vehicles.
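The stop-sign trick belongs to a well-studied family of adversarial examples. A minimal sketch of the fast gradient sign method shows how little it takes: nudge each pixel in the direction that increases the classifier’s loss. The model below is an untrained stand-in, not a real traffic-sign classifier:

```python
# fast gradient sign method (FGSM) on a toy classifier: a tiny, targeted
# perturbation that can flip the predicted class while looking unchanged to us
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
image = torch.rand(1, 3, 32, 32, requires_grad=True)             # stand-in "stop sign"
label = torch.tensor([0])                                        # true class index

loss = nn.functional.cross_entropy(model(image), label)
loss.backward()                                                  # gradient of loss w.r.t. pixels

epsilon = 0.03                                                   # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)  # nudge every pixel
print(model(adversarial).argmax(dim=1))                          # may no longer be class 0
```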

Jacobstein’s take: “We all have security systems on our buildings, in our homes, around the healthcare system, and in air traffic control, financial organizations, the military, and intelligence communities. But we all know that these systems have been hacked periodically and we’re going to see that accelerate. So, there are major business opportunities there and there are major opportunities for you to get ahead of that curve before it bites you.”

AI design systems drive breakthroughs in atomically precise manufacturing

Just as the modern computer transformed our relationship with bits and information, AI will redefine and revolutionize our relationship with molecules and materials. AI is currently being used to discover new materials for clean-tech innovations, such as solar panels, batteries, and devices that can now conduct artificial photosynthesis.

Today, it takes about 15 to 20 years to create a single new material, according to industry experts. But as AI design systems skyrocket in capacity, these will vastly accelerate the materials discovery process, allowing us to address pressing issues like climate change at record rates. Companies like Kebotix are already on their way to streamlining the creation of chemistries and materials at the click of a button.

Atomically precise manufacturing will enable us to produce the previously unimaginable.

Final Thoughts
Within just the past three years, countries across the globe have signed into existence national AI strategies and plans for ramping up innovation. Businesses and think tanks have leaped onto the scene, hiring AI engineers and tech consultants to leverage what computer scientist Andrew Ng has even called the new ‘electricity’ of the 21st century.

As AI plays an increasingly vital role in everyday life, how will your business leverage it to keep up and build forward?

In the wake of burgeoning markets, new ventures will quickly arise, each taking advantage of untapped data sources or unmet security needs.

And as your company aims to ride the wave of AI’s exponential growth, consider the following pointers to leverage AI and disrupt yourself before disruption reaches you:

Determine where and how you can begin collecting critical data to inform your AI algorithms
Identify time-intensive processes that can be automated and accelerated within your company
Discern which global challenges can be expedited by hyper-fast, all-knowing minds

Remember: good data is vital fuel. Well-defined problems are the best compass. And the time to start implementing AI is now.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Yurchanka Siarhei / Shutterstock.com

Posted in Human Robots

#434837 In Defense of Black Box AI

Deep learning is powering some amazing new capabilities, but we find it hard to scrutinize the workings of these algorithms. Lack of interpretability in AI is a common concern and many are trying to fix it, but is it really always necessary to know what’s going on inside these “black boxes”?

In a recent perspective piece for Science, Elizabeth Holm, a professor of materials science and engineering at Carnegie Mellon University, argued in defense of the black box algorithm. I caught up with her last week to find out more.

Edd Gent: What’s your experience with black box algorithms?

Elizabeth Holm: I got a dual PhD in materials science and engineering and scientific computing. I came to academia about six years ago and part of what I wanted to do in making this career change was to refresh and revitalize my computer science side.

I realized that computer science had changed completely. It used to be about algorithms and making codes run fast, but now it’s about data and artificial intelligence. There are the interpretable methods like random forest algorithms, where we can tell how the machine is making its decisions. And then there are the black box methods, like convolutional neural networks.

Once in a while we can find some information about their inner workings, but most of the time we have to accept their answers and kind of probe around the edges to figure out the space in which we can use them and how reliable and accurate they are.

EG: What made you feel like you had to mount a defense of these black box algorithms?

EH: When I started talking with my colleagues, I found that the black box nature of many of these algorithms was a real problem for them. I could understand that because we’re scientists, we always want to know why and how.

It got me thinking as a bit of a contrarian, “Are black boxes all bad? Must we reject them?” Surely not, because human thought processes are fairly black box. We often rely on human thought processes that the thinker can’t necessarily explain.

It’s looking like we’re going to be stuck with these methods for a while, because they’re really helpful. They do amazing things. And so there’s a very pragmatic realization that these are the best methods we’ve got to do some really important problems, and we’re not right now seeing alternatives that are interpretable. We’re going to have to use them, so we better figure out how.

EG: In what situations do you think we should be using black box algorithms?

EH: I came up with three rules. The simplest rule is: when the cost of a bad decision is small and the value of a good decision is high, it’s worth it. The example I gave in the paper is targeted advertising. If you send an ad no one wants, it doesn’t cost a lot. If you’re the receiver, it doesn’t cost much to get rid of it.

There are cases where the cost is high, and that’s when we choose the black box if it’s the best option to do the job. Things get a little trickier here because we have to ask “what are the costs of bad decisions, and do we really have them fully characterized?” We also have to be very careful knowing that our systems may have biases, they may have limitations in where you can apply them, they may be breakable.

But at the same time, there are certainly domains where we’re going to test these systems so extensively that we know their performance in virtually every situation. And if their performance is better than the other methods, we need to do it. Self-driving vehicles are a significant example—it’s almost certain they’re going to have to use black box methods, and that they’re going to end up being better drivers than humans.

The third rule is the more fun one for me as a scientist, and that’s the case where the black box really enlightens us as to a new way to look at something. We have trained a black box to recognize the fracture energy of breaking a piece of metal from a picture of the broken surface. It did a really good job, and humans can’t do this and we don’t know why.

What the computer seems to be seeing is noise. There’s a signal in that noise, and finding it is very difficult, but if we do we may find something significant to the fracture process, and that would be an awesome scientific discovery.

EG: Do you think there’s been too much emphasis on interpretability?

EH: I think the interpretability problem is a fundamental, fascinating computer science grand challenge and there are significant issues where we need to have an interpretable model. But how I would frame it is not that there’s too much emphasis on interpretability, but rather that there’s too much dismissiveness of uninterpretable models.

I think that some of the current social and political issues surrounding some very bad black box outcomes have convinced people that all machine learning and AI should be interpretable because that will somehow solve those problems.

Asking humans to explain their rationale has not eliminated bias, or stereotyping, or bad decision-making in humans. Relying too much on interpretability perhaps puts the responsibility in the wrong place for getting better results. I can make a better black box without knowing exactly in what way the first one was bad.

EG: Looking further into the future, do you think there will be situations where humans will have to rely on black box algorithms to solve problems we can’t get our heads around?

EH: I do think so, and it’s not as much of a stretch as we think it is. For example, humans don’t design the circuit maps of computer chips anymore. We haven’t for years. It’s not a black box algorithm that designs those chips, but we’ve long since given up trying to understand a particular computer chip’s design.

With the billions of circuits in every computer chip, the human mind can’t encompass it, either in scope or just the pure time that it would take to trace every circuit. There are going to be cases where we want a system so complex that only the patience that computers have and their ability to work in very high-dimensional spaces is going to be able to do it.

So we can continue to argue about interpretability, but we need to acknowledge that we’re going to need to use black boxes. And this is our opportunity to do our due diligence to understand how to use them responsibly, ethically, and with benefits rather than harm. And that’s going to be a social conversation as well as a scientific one.

*Responses have been edited for length and style

Image Credit: Chingraph / Shutterstock.com

Posted in Human Robots

#434637 AI Is Rapidly Augmenting Healthcare and ...

When it comes to the future of healthcare, perhaps the only technology more powerful than CRISPR is artificial intelligence.

Over the past five years, healthcare AI startups around the globe raised over $4.3 billion across 576 deals, topping all other industries in AI deal activity.

During this same period, the FDA has given 70 AI healthcare tools and devices ‘fast-tracked approval’ because of their ability to save both lives and money.

The pace of AI-augmented healthcare innovation is only accelerating.

In Part 3 of this blog series on longevity and vitality, I cover the different ways in which AI is augmenting our healthcare system, enabling us to live longer and healthier lives.

In this blog, I’ll expand on:

Machine learning and drug design
Artificial intelligence and big data in medicine
Healthcare, AI & China

Let’s dive in.

Machine Learning in Drug Design
What if AI systems, specifically neural networks, could predict the design of novel molecules (i.e. medicines) capable of targeting and curing any disease?

Imagine leveraging cutting-edge artificial intelligence to accomplish with 50 people what the pharmaceutical industry can barely do with an army of 5,000.

And what if these molecules, accurately engineered by AIs, always worked? Such a feat would revolutionize our $1.3 trillion global pharmaceutical industry, which currently holds a dismal record of 1 in 10 target drugs ever reaching human trials.

It’s no wonder that drug development is massively expensive and slow. It takes over 10 years to bring a new drug to market, with costs ranging from $2.5 billion to $12 billion.

This inefficient, slow-to-innovate, and risk-averse industry is a sitting duck for disruption in the years ahead.

One of the hottest startups in digital drug discovery today is Insilico Medicine. Leveraging AI in its end-to-end drug discovery pipeline, Insilico Medicine aims to extend healthy longevity through drug discovery and aging research.

Their comprehensive drug discovery engine uses millions of samples and multiple data types to discover signatures of disease, identify the most promising protein targets, and generate perfect molecules for these targets. These molecules either already exist or can be generated de novo with the desired set of parameters.

In late 2018, Insilico’s CEO Dr. Alex Zhavoronkov announced the groundbreaking result of generating novel molecules for a challenging protein target with an unprecedented hit rate in under 46 days. This included both synthesis of the molecules and experimental validation in a biological test system—an impressive feat made possible by converging exponential technologies.

Underpinning Insilico’s drug discovery pipeline is a novel machine learning technique called Generative Adversarial Networks (GANs), used in combination with deep reinforcement learning.
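At its core, the adversarial setup is simple: a generator learns to produce samples a discriminator can’t distinguish from real ones. The sketch below shows that training loop on toy 2-D data; it illustrates the GAN idea only and is not Insilico’s molecule-generation pipeline:

```python
# a minimal GAN on toy 2-D data: the generator learns to mimic a "real"
# distribution (a stand-in for known good molecules) by fooling the
# discriminator, which in turn learns to tell real from generated
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # stand-in "real" data: points on a shifted Gaussian
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(2000):
    # discriminator update: real -> 1, generated -> 0
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # generator update: try to make D label generated samples as real
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(5, 8)).detach())  # samples should cluster near [2, -1]
```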

Generating novel molecular structures for diseases both with and without known targets, Insilico is now pursuing drug discovery in aging, cancer, fibrosis, Parkinson’s disease, Alzheimer’s disease, ALS, diabetes, and many others. Once rolled out, the implications will be profound.

Dr. Zhavoronkov’s ultimate goal is to develop a fully-automated Health-as-a-Service (HaaS) and Longevity-as-a-Service (LaaS) engine.

Once plugged into the services of companies from Alibaba to Alphabet, such an engine would enable personalized solutions for online users, helping them prevent diseases and maintain optimal health.

Insilico, alongside other companies tackling AI-powered drug discovery, truly represents the application of the 6 D’s. What was once a prohibitively expensive and human-intensive process is now rapidly becoming digitized, dematerialized, demonetized and, perhaps most importantly, democratized.

Companies like Insilico can now do with a fraction of the cost and personnel what the pharmaceutical industry can barely accomplish with thousands of employees and a hefty bill to foot.

As I discussed in my blog on ‘The Next Hundred-Billion-Dollar Opportunity,’ Google’s DeepMind has now turned its neural networks to healthcare, entering the digitized drug discovery arena.

In 2017, DeepMind achieved a phenomenal feat by matching the fidelity of medical experts in correctly diagnosing over 50 eye disorders.

And just a year later, DeepMind announced a new deep learning tool called AlphaFold. By predicting the elusive ways in which various proteins fold on the basis of their amino acid sequences, AlphaFold may soon have a tremendous impact in aiding drug discovery and fighting some of today’s most intractable diseases.

Artificial Intelligence and Data Crunching
AI is especially powerful in analyzing massive quantities of data to uncover patterns and insights that can save lives. Take WAVE, for instance. Every year, over 400,000 patients die prematurely in US hospitals as a result of heart attack or respiratory failure.

Yet these patients don’t die without leaving plenty of clues. Given information overload, however, human physicians and nurses alone have no way of processing and analyzing all necessary data in time to save these patients’ lives.

Enter WAVE, an algorithm that can process enough data to offer a six-hour early warning of patient deterioration.

Just last year, the FDA approved WAVE, an AI-based patient surveillance system, to predict and thereby prevent sudden death.

Another highly valuable yet difficult-to-parse mountain of medical data comprises the 2.5 million medical papers published each year.

It has long been physically impossible for a human physician to read—let alone remember—all of the relevant published data.

To counter this compounding conundrum, Johnson & Johnson is teaching IBM Watson to read and understand scientific papers that detail clinical trial outcomes.

Enriching Watson’s data sources, Apple is also partnering with IBM to provide access to health data from mobile apps.

One such Watson system contains 40 million documents, ingesting an average of 27,000 new documents per day, and providing insights for thousands of users.

After only one year, Watson’s successful diagnosis rate for lung cancer reached 90 percent, compared to the 50 percent success rate of human doctors.

But what about the vast amount of unstructured medical patient data that populates today’s ancient medical system? This includes medical notes, prescriptions, audio interview transcripts, and pathology and radiology reports.

In late 2018, Amazon announced a new HIPAA-eligible machine learning service that digests and parses unstructured data into categories, such as patient diagnoses, treatments, dosages, symptoms and signs.
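The service described appears to be what shipped as Amazon Comprehend Medical, and calling it from Python’s boto3 looks roughly like the sketch below. The clinical note is invented, and field names should be checked against AWS’s current documentation:

```python
# a hedged sketch assuming the service is Amazon Comprehend Medical;
# requires AWS credentials and a region where the service is available
import boto3

client = boto3.client("comprehendmedical", region_name="us-east-1")
note = "Pt given 40mg lisinopril daily for hypertension; reports dizziness."
resp = client.detect_entities_v2(Text=note)

for entity in resp["Entities"]:
    # each entity carries a category (e.g. MEDICATION), its text span,
    # and a confidence score
    print(entity["Category"], entity["Text"], round(entity["Score"], 2))
```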

Taha Kass-Hout, Amazon’s senior leader in health care and artificial intelligence, told the Wall Street Journal that internal tests demonstrated the software performs as well as or better than other published efforts.

On the heels of this announcement, Amazon confirmed it was teaming up with the Fred Hutchinson Cancer Research Center to evaluate “millions of clinical notes to extract and index medical conditions.”

Having already driven extraordinary algorithmic success rates in other fields, data is the healthcare industry’s goldmine for future innovation.

Healthcare, AI & China
In 2017, the Chinese government published its ambitious national plan to become a global leader in AI research by 2030, with healthcare listed as one of four core research areas during the first wave of the plan.

Just a year earlier, China began centralizing healthcare data, tackling a major roadblock to developing longevity and healthcare technologies (particularly AI systems): scattered, dispersed, and unlabeled patient data.

Backed by the Chinese government, China’s largest tech companies—particularly Tencent—have now made strong entrances into healthcare.

Just recently, Tencent participated in a $154 million megaround for China-based healthcare AI unicorn iCarbonX.

Hoping to develop a complete digital representation of your biological self, iCarbonX has acquired numerous US personalized medicine startups.

Considering Tencent’s own Miying healthcare AI platform—aimed at assisting healthcare institutions in AI-driven cancer diagnostics—Tencent is quickly expanding into the drug discovery space, participating in two multimillion-dollar, US-based AI drug discovery deals just this year.

China’s biggest, second-order move into the healthtech space comes through Tencent’s WeChat. In the course of just a few years, 60 percent of the 38,000 medical institutions registered on WeChat have come to allow patients to book appointments digitally through Tencent’s mobile platform. At the same time, 2,000 Chinese hospitals accept WeChat payments.

Tencent has additionally partnered with the U.K.’s Babylon Health, a virtual healthcare assistant startup whose app now allows Chinese WeChat users to message their symptoms and receive immediate medical feedback.

Similarly, Alibaba’s healthtech focus started in 2016 when it released its cloud-based AI medical platform, ET Medical Brain, to augment healthcare processes through everything from diagnostics to intelligent scheduling.

Conclusion
As Nvidia CEO Jensen Huang has stated, “Software ate the world, but AI is going to eat software.” Extrapolating this statement to a more immediate implication, AI will first eat healthcare, resulting in dramatic acceleration of longevity research and an amplification of the human healthspan.

Next week, I’ll continue to explore this concept of AI systems in healthcare.

Particularly, I’ll expand on how we’re acquiring and using the data for these doctor-augmenting AI systems: from ubiquitous biosensors, to the mobile healthcare revolution, and finally, to the transformative power of the health nucleus.

As AI and other exponential technologies increase our healthspan by 30 to 40 years, how will you leverage these same exponential technologies to take on your moonshots and live out your massively transformative purpose?

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Zapp2Photo / Shutterstock.com

Posted in Human Robots

#434599 This AI Can Tell Your Age by Analyzing ...

The plethora of bacteria and other tiny organisms that live in your gut, often referred to as the gut microbiome, don’t just help you digest food and fight disease. As detailed in a new study, they also provide a very accurate biological clock that shows your physical age—a fact that may open up wide-ranging possibilities for health and longevity studies.

Combining Machine Learning and Your Gut
The link between the gut biome and age is described by longevity researcher Alex Zhavoronkov and a team of his colleagues at Insilico Medicine, an artificial intelligence startup focused on drug discovery, biomarker development, and aging research.

Relatively little is known about how our gut biomes transition from one stage to another as we age, or about links between our age and the state of our gut biomes. In their paper, which is awaiting peer review but can be found on the preprint server bioRxiv, the team describes how they examined 3,663 curated samples of gut bacteria from 1,165 healthy people, aged 20-90, from countries in Europe, Asia, and North America. Roughly a third of samples came from the 20-39 age group, a third from individuals between 40-59, and a third from people between 60-90 years old.

A deep learning algorithm was then trained on the abundances of 1,673 different microbial species in 90 percent of the samples. The AI was then tasked with predicting the ages of the remaining 10 percent of participants solely from their gut bacteria data.
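As a rough sketch of that setup (synthetic data, a generic off-the-shelf regressor, nothing like the authors’ actual architecture), the train-on-90-percent, test-on-10-percent workflow might look like this:

```python
# hedged sketch of the study's workflow: hold out 10 percent of samples and
# predict age from microbial abundances; the data here are synthetic
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((3663, 1673))  # abundance of 1,673 microbial species per sample
# toy signal: age correlates with the first 39 "species" (hypothetical)
age = 20 + 70 * X[:, :39].mean(axis=1) + rng.normal(scale=2, size=3663)

X_train, X_test, y_train, y_test = train_test_split(
    X, age, test_size=0.1, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=200).fit(X_train, y_train)
err = np.abs(model.predict(X_test) - y_test).mean()
print(f"mean absolute error: {err:.1f} years")  # the real study reports ~4 years
```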

The Accurate Bacterial Clock
The results, described as the first method to predict a human’s chronological age via gut microbiota analysis, showed that the system was able to predict age to within four years based on the gut bacteria data. Furthermore, the results seem to indicate that 39 of the microbial species analyzed are particularly important in relation to accurately predicting age.

The study also showed that our gut microbiomes change over time. While some microbes’ numbers dwindle as we age, others seem to become more abundant. Age is not the only factor that influences the prevalence of different types of bacteria in a person’s digestive system: diet, sleep, and physical activity are all thought to be contributing factors.

Science Mag quotes Zhavoronkov as stating that the study could lay the foundation for a “microbiome aging clock” that could serve as a baseline in future research on how a person’s gut ages and how medicine, diet, and alcohol consumption affect longevity.

Living Longer, Better
Studies of our microbiome’s influence on longevity add another dimension to our understanding of how and why we age. Other avenues of study include looking at the length of telomeres, the tips of chromosomes that are believed to play an important role in the aging process, and our DNA.

The same can be said of the role microbiomes play in relation to illnesses and conditions including allergies, diabetes, some types of cancer, and psychological states such as depression. Scientists at Harvard are even developing genetically engineered ‘telephone’ bacteria that would be able to gather precise information about the state of the gut microbiome.

A positive side effect of many of the studies is that alongside dedicated microbiome data collection efforts, they add new data—the food of AI. While we are already gaining a better understanding of the gut biome, it is not a large leap of logic to predict that AI will feast on the new data and assist us in getting an even keener understanding of what is going on in our gut and what it means for our health.

Image Credit: GiroScience / Shutterstock.com

Posted in Human Robots