How Researchers Used AI to Better ...
A few years back, DeepMind’s Demis Hassabis famously prophesied that AI and neuroscience would positively feed into each other in a “virtuous circle.” If realized, this would fundamentally expand our insight into intelligence, both machine and human.
We’ve already seen some proofs of concept, at least in the brain-to-AI direction. For example, memory replay, a biological mechanism that fortifies our memories during sleep, also boosted AI learning when abstractly appropriated into deep learning models. Reinforcement learning, loosely based on our motivation circuits, is now behind some of AI’s most powerful tools.
Hassabis is about to be proven right again.
Last week, two studies independently tapped into the power of artificial neural networks (ANNs) to solve a 70-year-old neuroscience mystery: how does our visual system perceive reality?
The first, published in Cell, used generative networks to evolve DeepDream-like images that hyper-activate complex visual neurons in monkeys. These machine artworks are pure nightmare fuel to the human eye; but together, they revealed a fundamental “visual hieroglyph” that may form a basic rule for how we piece together visual stimuli to process sight into perception.
In the second study, a team used a deep ANN model—one thought to mimic biological vision—to synthesize new patterns tailored to control certain networks of visual neurons in the monkey brain. When the images were shown directly to monkeys, they reliably activated the predicted populations of neurons. Future improved ANN models could allow even better control, giving neuroscientists a powerful noninvasive tool to study the brain. The work was published in Science.
The individual results, though fascinating, aren’t necessarily the point. Rather, they illustrate how scientists are now striving to complete the virtuous circle: tapping AI to probe natural intelligence. Vision is only the beginning—the tools can potentially be expanded into other sensory domains. And the more we understand about natural brains, the better we can engineer artificial ones.
It’s a “great example of leveraging artificial intelligence to study organic intelligence,” commented Dr. Roman Sandler at Kernel.co on Twitter.
Why Vision?
ANNs and biological vision have quite the history.
In the late 1950s, the legendary neuroscientist duo David Hubel and Torsten Wiesel became some of the first to use mathematical equations to understand how neurons in the brain work together.
In a series of experiments—many using cats—the duo carefully dissected the structure and function of the visual cortex. Using myriad images, they revealed that vision is processed in a hierarchy: neurons in “earlier” brain regions, those closer to the eyes, tend to activate when they “see” simple patterns such as lines. As we move deeper into the brain, from the early visual cortex (V1) to the inferotemporal (IT) cortex—a nub located slightly behind our ears—neurons increasingly respond to more complex or abstract patterns, including faces, animals, and objects. The discovery led some scientists to call certain IT neurons “Jennifer Aniston cells,” which fire in response to pictures of the actress regardless of lighting, angle, or haircut. That is, IT neurons somehow distill visual information into the “gist” of things.
That’s not trivial. How complex neural connections increasingly abstract what we see into what we think we see—what we perceive—is also a central question in machine vision: how can we teach machines to transform numbers encoding stimuli into dots, lines, and angles that eventually form “perceptions” and “gists”? The answer could transform self-driving cars, facial recognition, and other computer vision applications as they learn to better generalize.
Hubel and Wiesel’s Nobel Prize-winning studies heavily influenced the birth of ANNs and deep learning. Many early “feed-forward” ANN architectures were modeled on our visual system; even today, the idea of increasing layers of abstraction—for perception or reasoning—guides computer scientists as they build AI that can better generalize. The early romance between vision and deep learning is perhaps the bond that kicked off our current AI revolution.
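To make the analogy concrete, here is a minimal, purely illustrative PyTorch sketch—not a model from either study—of that feed-forward idea: successive layers with growing receptive fields, loosely mirroring the V1-to-IT progression described above. The layer sizes and names are arbitrary.

```python
# Illustrative only: a feed-forward convolutional network whose successive layers
# mirror the V1 -> V4 -> IT idea of increasingly abstract visual features.
import torch
import torch.nn as nn

class TinyVisualHierarchy(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Early layers: small receptive fields, edge/orientation-like filters (V1 analogy).
        self.early = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # Middle layers: combinations of edges into contours and textures (V4 analogy).
        self.middle = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # Late layers: large receptive fields, object-level features (IT analogy).
        self.late = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.early(x)
        x = self.middle(x)
        x = self.late(x)
        return self.classifier(x.flatten(1))

model = TinyVisualHierarchy()
print(model(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 10])
```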
It only seems fair that AI would feed back into vision neuroscience.
Hieroglyphs and Controllers
In the Cell study, a team led by Dr. Margaret Livingstone at Harvard Medical School tapped into generative networks to unravel IT neurons’ complex visual alphabet.
Scientists have long known that neurons in earlier visual regions (V1) tend to fire in response to “grating patches” oriented in certain ways. Using a limited set of these patches like letters, V1 neurons can “express a visual sentence” and represent any image, said Dr. Arash Afraz at the National Institutes of Health, who was not involved in the study.
But how IT neurons operate remained a mystery. Here, the team used a combination of genetic algorithms and deep generative networks to “evolve” computer art for every studied neuron. In seven monkeys, the team implanted electrodes into various parts of the visual IT region so that they could monitor the activity of a single neuron.
The team showed each monkey an initial set of 40 images, then picked the 10 images that triggered the highest neural activity and combined them with 30 new images to “evolve” the next generation. After 250 generations, the technique, dubbed XDREAM, generated a slew of images that mashed up contorted face-like shapes with lines, gratings, and abstract shapes.
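A rough sketch of that evolutionary loop looks something like the following. Everything here is a stand-in—the “generator” and the “neuron” are random projections, and the hyperparameters are assumptions—whereas XDREAM pairs a deep generative network with firing rates recorded in real time from the implanted electrode.

```python
# Illustrative sketch of an XDREAM-style loop: evolve generator latent codes so the
# decoded images maximize a "neuron response". The generator and the neuron here
# are stand-ins (random projections); the real system uses a deep generative
# network and spiking activity recorded from an implanted electrode.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, POP_SIZE, KEEP, GENERATIONS = 64, 40, 10, 250

G = rng.normal(size=(LATENT_DIM, 32 * 32))      # stand-in "generator": latent -> image
w_neuron = rng.normal(size=32 * 32)             # stand-in "neuron": preferred image pattern

def decode(codes):
    return np.tanh(codes @ G)                   # fake images in [-1, 1]

def neuron_response(images):
    return images @ w_neuron                    # fake firing rates (replace with recordings)

population = rng.normal(size=(POP_SIZE, LATENT_DIM))
for gen in range(GENERATIONS):
    fitness = neuron_response(decode(population))
    elite = population[np.argsort(fitness)[-KEEP:]]   # the 10 best images survive
    # Recombine elites and mutate to produce the other 30 offspring.
    parents = elite[rng.integers(0, KEEP, size=(POP_SIZE - KEEP, 2))]
    offspring = parents.mean(axis=1) + 0.1 * rng.normal(size=(POP_SIZE - KEEP, LATENT_DIM))
    population = np.vstack([elite, offspring])

best = decode(population[np.argmax(neuron_response(decode(population)))])
print("best synthetic 'image' shape:", best.shape)
```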
This image shows the evolution of an optimum image for stimulating a visual neuron in a monkey. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“The evolved images look quite counter-intuitive,” explained Afraz. Some clearly show detailed structures that resemble natural images, while others show complex structures that can’t be characterized by our puny human brains.
This figure shows natural images (right) and images evolved by neurons in the inferotemporal cortex of a monkey (left). Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“What started to emerge during each experiment were pictures that were reminiscent of shapes in the world but were not actual objects in the world,” said study author Carlos Ponce. “We were seeing something that was more like the language cells use with each other.”
This image was evolved by a neuron in the inferotemporal cortex of a monkey using AI. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
Although IT neurons don’t seem to use a simple letter-like alphabet, they do rely on a vast array of characters, akin to hieroglyphs or Chinese characters, “each loaded with more information,” said Afraz.
The adaptive nature of XDREAM turns it into a powerful tool to probe the inner workings of our brains—particularly for revealing discrepancies between biology and models.
The Science study, led by Dr. James DiCarlo at MIT, takes a similar approach. Using ANNs to generate new patterns and images, the team was able to selectively predict and independently control neuron populations in a high-level visual region called V4.
“So far, what has been done with these models is predicting what the neural responses would be to other stimuli that they have not seen before,” said study author Dr. Pouya Bashivan. “The main difference here is that we are going one step further and using the models to drive the neurons into desired states.”
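One common way to “drive” a modeled unit—illustrative of the general idea, not necessarily the team’s exact procedure—is gradient ascent on the image pixels through a differentiable network. The sketch below uses a small untrained CNN as a stand-in for a trained model of the visual stream.

```python
# Illustrative sketch of model-guided stimulus synthesis: gradient ascent on pixels
# to maximize one output unit of a differentiable "model of the visual system".
# A small untrained CNN stands in for the trained ANN used to predict V4 responses.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 8),                # pretend these 8 outputs are modeled neural sites
)
model.eval()

target_site = 3                      # the "neuron" we want to drive
image = torch.zeros(1, 3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    response = model(image)[0, target_site]
    (-response).backward()           # ascend the predicted response
    optimizer.step()
    with torch.no_grad():
        image.clamp_(-1.0, 1.0)      # keep pixels in a displayable range

print("predicted response after synthesis:", float(model(image)[0, target_site]))
```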
The work suggests that our current ANN models of visual computation “implicitly capture a great deal of visual knowledge” that we can’t readily describe, but that the brain uses to turn visual information into perception, the authors said. By testing AI-generated images on biological vision, the team concluded that today’s ANNs have achieved a degree of that understanding and generalization. The results could help engineer even more accurate ANN models of biological vision, which in turn could feed back into machine vision.
“One thing is clear already: Improved ANN models … have led to control of a high-level neural population that was previously out of reach,” the authors said. “The results presented here have likely only scratched the surface of what is possible with such implemented characterizations of the brain’s neural networks.”
To Afraz, the power of AI here is to find cracks in human perception—both in our computational models of sensory processing and in our evolved biological software itself. AI can be used “as a perfect adversarial tool to discover design cracks” of IT, said Afraz, such as finding computer art that “fools” a neuron into thinking an object is something else.
“As artificial intelligence researchers develop models that work as well as the brain does—or even better—we will still need to understand which networks are more likely to behave safely and further human goals,” said Ponce. “More efficient AI can be grounded by knowledge of how the brain works.”
Image Credit: Sangoiri / Shutterstock.com
Inflatable Robot Astronauts and How to ...
The typical cultural image of a robot—a steel, chrome, humanoid bucket of bolts—is often far from the reality of cutting-edge robotics research. There are difficulties, both social and technological, in realizing the robots of science fiction—let alone one that can actually help around the house. Often, the great expense of building a humanoid robot that performs dozens of tasks quite badly simply makes less sense than building a design optimized for one specific situation.
A team of scientists from Brigham Young University has received funding from NASA to investigate an inflatable robot called, improbably, King Louie. The robot was developed by Pneubotics, who have a long track record in the world of soft robotics.
In space, weight is at a premium. The world watched in awe and amusement when Commander Chris Hadfield sang “Space Oddity” from the International Space Station—but launching that guitar into space likely cost around $100,000. A good price for launching payload into outer space is on the order of $10,000 per pound ($22,000/kg).
For that price, it would cost a cool $1.7 million to launch Boston Dynamics’ famous ATLAS robot to the International Space Station, and its bulk would be inconvenient in the cramped living quarters available. By contrast, an inflatable robot like King Louie is substantially lighter and can simply be deflated and folded away when not in use. The robot can be manufactured from cheap, lightweight, and flexible materials, and minor damage is easy to repair.
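For the curious, the back-of-the-envelope arithmetic behind those numbers looks roughly like this; the masses are assumptions chosen to match the figures quoted above.

```python
# Back-of-the-envelope launch-cost arithmetic behind the figures above.
# The masses are assumptions (chosen to be consistent with the article's numbers).
COST_PER_KG = 22_000          # USD, the "good price" cited above (~$10,000/lb)
ATLAS_MASS_KG = 77            # implied by the article's $1.7M figure; actual mass varies by version
GUITAR_MASS_KG = 4.5          # rough mass of an acoustic guitar, an assumption

print(f"ATLAS to orbit:  ${ATLAS_MASS_KG * COST_PER_KG:,.0f}")   # about $1.7 million
print(f"Guitar to orbit: ${GUITAR_MASS_KG * COST_PER_KG:,.0f}")  # about $100,000
```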
Inflatable Robots Under Pressure
The concept of inflatable robots is not new: indeed, earlier prototypes of King Louie were exhibited back in 2013 at Google I/O’s After Hours, flailing away at each other in a boxing ring. Sparks might fly in fights between traditional robots, but the aim here was to demonstrate that the robots are passively safe: the soft, inflatable figures won’t accidentally smash delicate items when moving around.
Health and safety regulations are part of the reason robots don’t work alongside humans more often, but soft robots would be far safer to use in healthcare or around children (whose first instinct, according to BYU’s promotional video, is either to hug or punch King Louie). It’s also much harder to have nightmarish fantasies about robotic domination with these friendlier softbots: Terminator would’ve been a much shorter franchise if Skynet’s droids were inflatable.
Robotic exoskeletons are increasingly used for physical rehabilitation therapies, as well as for industrial purposes. As countries like Japan seek to care for their aging populations with robots and alleviate the burden on nurses, who suffer from some of the highest rates of back injuries of any profession, soft robots will become increasingly attractive for use in healthcare.
Precision and Proprioception
The main issue is one of control. Rigid, metallic robots may be more expensive and more dangerous, but the simple fact of their rigidity makes it easier to map out and control the precise motions of each of the robot’s limbs, digits, and actuators. Individual motors attached to these rigid robots can allow for a great many degrees of freedom—individual directions in which parts of the robot can move—and precision control.
For example, ATLAS has 28 degrees of freedom, while Shadow’s dexterous robot hand alone has 20. This is much harder to do with an inflatable robot, for precisely the same reasons that make it safer. Without hard and rigid bones, other methods of control must be used.
In the case of King Louie, the robot is made up of many expandable air chambers. An air compressor changes the pressure levels in these chambers, allowing them to expand and contract—a design that harks back to some of the earliest pneumatic automata. Pairs of chambers act antagonistically, like muscles, such that when one chamber “tenses,” another relaxes, giving King Louie, for example, four degrees of freedom in each of its arms.
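A toy model of one such antagonistic pair might look like the following; the linear mapping, gains, and pressure values are invented for illustration and are not King Louie’s actual control law.

```python
# Toy model of one antagonistic pair of air chambers (illustrative only):
# the joint angle is driven by the pressure difference between the two chambers,
# the way opposing muscles tense and relax. Gains and limits are made up.
def antagonistic_joint_angle(p_flexor_kpa, p_extensor_kpa,
                             gain_deg_per_kpa=0.6, max_angle_deg=90.0):
    """Map a chamber pressure difference to a joint angle (linear toy model)."""
    angle = gain_deg_per_kpa * (p_flexor_kpa - p_extensor_kpa)
    return max(-max_angle_deg, min(max_angle_deg, angle))

def pressures_for_angle(target_deg, base_kpa=120.0, gain_deg_per_kpa=0.6):
    """Invert the toy model: co-contract around a base pressure, then bias the pair."""
    delta = target_deg / gain_deg_per_kpa
    return base_kpa + delta / 2, base_kpa - delta / 2   # (flexor, extensor)

flexor, extensor = pressures_for_angle(30.0)
print(flexor, extensor, antagonistic_joint_angle(flexor, extensor))  # ... 30.0
```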
The robot is also surprisingly strong. Professor Killpack, who works at BYU on the project, estimates that its payload is comparable to other humanoid robots on the market, like Rethink Robotics’ Baxter (RIP).
Proprioception—that sixth sense that lets us map out and control our own bodies and muscles in fine detail—is being engineered into a wider range of soft, flexible robots using machine learning algorithms fed by input from a whole host of sensors on the robot’s body.
Part of the reason this is so complicated with soft, flexible robots is that the shape and “map” of the robot’s body can change; that’s the whole point. Every time King Louie is inflated, its body takes a slightly different shape, and when it deforms—while picking up objects, for example—the shape changes again. The complex ways the fabric can twist and bend are far harder to model and sense than the behavior of a rigid metal counterpart. When you’re looking for precision, seemingly small changes can be the difference between successfully holding an object and dropping it.
Learning to Move
Researchers at BYU are therefore spending a great deal of time working out how to control the softbot well enough to make it comparably useful. One method uses the commercial tracking technology from the Vive VR system: by moving the game controller, you steer the robot’s arm, and the system provides constant feedback on its position. Since the tracking software estimates the robot’s joint angles and keeps correcting until the arm is properly aligned, this type of feedback is likely to work regardless of small changes to the robot’s shape.
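Schematically, that feedback loop boils down to something like the sketch below, where a tiny simulated arm stands in for both the Vive tracker and the pneumatic actuators; the real system closes the loop through hardware.

```python
# Schematic sketch of the tracking-based feedback loop described above. A simulated
# arm stands in for the tracker (read_angles) and the pneumatic actuators
# (apply_correction); the real system closes this loop through hardware.
import numpy as np

class SimulatedSoftArm:
    """Stand-in for the physical arm: responds sluggishly to commanded corrections."""
    def __init__(self):
        self.angles = np.zeros(4)            # four degrees of freedom per arm (see above)

    def read_angles(self):                   # placeholder for the tracker's joint estimate
        return self.angles.copy()

    def apply_correction(self, correction):  # placeholder for pressure commands
        self.angles += 0.5 * correction      # the soft arm only partially follows each command

arm = SimulatedSoftArm()
target = np.array([20.0, -10.0, 35.0, 5.0])  # degrees, set by moving the game controller

for step in range(50):
    error = target - arm.read_angles()
    if np.all(np.abs(error) < 0.5):          # close enough: stop correcting
        break
    arm.apply_correction(0.8 * error)        # simple proportional feedback

print("reached:", np.round(arm.read_angles(), 2))
```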
Other technologies the researchers are exploring for their softbot include arrays of flexible, tactile sensors for the softbot’s skin, along with ways to minimize the complex cross-talk between those arrays so they deliver coherent information about the robot’s environment. As with some of the new proprioception research, the project is also looking into neural networks as a means of modeling the complicated dynamics—the motion and response to forces—of the softbot. This method relies on large amounts of observational data, mapping how the robot inflates and moves, rather than explicitly deriving and solving the equations that govern its motion—which hopefully means the approach keeps working even as the robot’s shape changes.
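Here is a minimal sketch of that data-driven approach: fit a small neural network to observed command-to-pose pairs instead of deriving the soft body’s equations. The “robot” generating the data below is synthetic, and the library and architecture choices are simply illustrative.

```python
# Minimal sketch of learning a soft robot's dynamics from observation instead of
# deriving its equations: fit a small neural network to (pressure command -> pose)
# pairs. The "robot" here is a synthetic stand-in used to generate training data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def fake_soft_robot(pressures):
    """Stand-in for the real, hard-to-model soft arm: nonlinear and a bit noisy."""
    pose = np.tanh(pressures @ np.array([[0.8, -0.3], [0.2, 0.9], [-0.5, 0.4]]))
    return pose + 0.01 * rng.normal(size=pose.shape)

commands = rng.uniform(0, 1, size=(2000, 3))   # three chamber pressures (normalized)
poses = fake_soft_robot(commands)              # observed end-effector pose (x, y)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(commands, poses)

test = rng.uniform(0, 1, size=(5, 3))
print("predicted:", np.round(model.predict(test), 3))
print("observed: ", np.round(fake_soft_robot(test), 3))
```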
There’s still a long way to go before soft and inflatable robots can be controlled well enough to perform all the tasks they might be used for. Ultimately, no single robotic design is likely to be perfect for every situation.
Nevertheless, research like this gives us hope that one day, inflatable robots could be useful tools, or even companions, at which point the advertising slogans write themselves: Don’t let them down, and they won’t let you down!
Image Credit: Brigham Young University.
The Next Data-Driven Healthtech ...
Increasing your healthspan (i.e. making 100 years old the new 60) will depend to a large degree on artificial intelligence. And, as we saw in last week’s blog, healthcare AI systems are extremely data-hungry.
Fortunately, a slew of new sensors and data acquisition methods—including over 122 million wearables shipped in 2018—are bursting onto the scene to meet the massive demand for medical data.
From ubiquitous biosensors, to the mobile healthcare revolution, to the transformative power of the Health Nucleus, converging exponential technologies are fundamentally transforming our approach to healthcare.
In Part 4 of this blog series on Longevity & Vitality, I expand on how we’re acquiring the data to fuel today’s AI healthcare revolution.
In this blog, I’ll explore:
How the Health Nucleus is transforming “sick care” to healthcare
Sensors, wearables, and nanobots
The advent of mobile health
Let’s dive in.
Health Nucleus: Transforming ‘Sick Care’ to Healthcare
Much of today’s healthcare system is actually sick care. Most of us assume that we’re perfectly healthy, with nothing going on inside our bodies, until the day we travel to the hospital writhing in pain only to discover a serious or life-threatening condition.
Chances are that your ailment didn’t materialize that morning; rather, it’s been growing or developing for some time. You simply weren’t aware of it. At that point, once you’re diagnosed as “sick,” our medical system engages to take care of you.
What if, instead of this retrospective and reactive approach, you were constantly monitored, so that you could know the moment anything was out of whack?
Better yet, what if you more closely monitored those aspects of your body that your gene sequence predicted might cause you difficulty? Think: your heart, your kidneys, your breasts. Such a system becomes personalized, predictive, and possibly preventative.
This is the mission of the Health Nucleus platform built by Human Longevity, Inc. (HLI). While not continuous—that will come later, with the next generation of wearable and implantable sensors—the Health Nucleus was designed to ‘digitize’ you once per year to help you determine whether anything is going on inside your body that requires immediate attention.
The Health Nucleus visit provides you with the following tests during a half-day visit:
Whole genome sequencing (30x coverage)
Whole body (non-contrast) MRI
Brain magnetic resonance imaging/angiography (MRI/MRA)
CT (computed tomography) of the heart and lungs
Coronary artery calcium scoring
Electrocardiogram
Echocardiogram
Continuous cardiac monitoring
Clinical laboratory tests and metabolomics
In late 2018, HLI published the results of the first 1,190 clients through the Health Nucleus. The results were eye-opening—especially since these patients were all financially well-off, and already had access to the best doctors.
Following are the physiological and genomic findings in these clients who self-selected to undergo evaluation at HLI’s Health Nucleus.
Physiological Findings
2 percent had previously unknown tumors detected by MRI
2.5 percent had previously undetected aneurysms detected by MRI
8 percent had previously unknown cardiac arrhythmias found on cardiac rhythm monitoring
9 percent had previously unknown moderate to severe coronary artery disease risk
16 percent had previously unknown cardiac structure/function abnormalities
30 percent had previously unknown elevated liver fat
Genomic Findings
24 percent of clients had a rare (previously unknown) genetic mutation uncovered by whole genome sequencing (WGS)
63 percent of clients had a rare genetic mutation with a corresponding phenotypic finding
In summary, HLI’s published results found that 14.4 percent of clients had significant findings that are actionable, requiring immediate or near-term follow-up and intervention.
Long-term value findings were found in 40 percent of the clients we screened. Long-term clinical findings include discoveries that require medical attention or monitoring but are not immediately life-threatening.
The bottom line: most people truly don’t know their actual state of health. The ability to take a fully digital deep dive into your health status at least once per year will enable you to detect disease at stage zero or stage one, when it is most curable.
Sensors, Wearables, and Nanobots
Wearables, connected devices, and quantified self apps will allow us to continuously collect enormous amounts of useful health information.
Wearables like the Quanttus wristband and Vital Connect can transmit your electrocardiogram data, vital signs, posture, and stress levels anywhere on the planet.
In April 2017, we were proud to grant $2.5 million in prize money to the winning team in the Qualcomm Tricorder XPRIZE, Final Frontier Medical Devices.
Using a group of noninvasive sensors that collect data on vital signs, body chemistry, and biological functions, Final Frontier integrates this data into its powerful, AI-based DxtER diagnostic engine for rapid, high-precision assessments.
Their engine combines learnings from clinical emergency medicine and data analysis from actual patients.
Google is developing a full range of internal and external sensors (e.g. smart contact lenses) that can monitor the wearer’s vitals, ranging from blood sugar levels to blood chemistry.
In September 2018, Apple announced its Series 4 Apple Watch, including an FDA-approved mobile, on-the-fly ECG. Granted its first FDA approval, Apple appears to be moving deeper into the sensing healthcare market.
Further, Apple is reportedly now developing sensors that can non-invasively monitor blood sugar levels in real time for diabetic treatment. IoT-connected sensors are also entering the world of prescription drugs.
Last year, the FDA approved the first sensor-embedded pill, Abilify MyCite. This new class of digital pills can now communicate medication data to a user-controlled app, to which doctors may be granted access for remote monitoring.
Perhaps what is most impressive about the next generation of wearables and implantables is the density of sensors, processing, networking, and battery capability that we can now cheaply and compactly integrate.
Take the second-generation OURA ring, for example, which focuses on sleep measurement and management.
The OURA ring looks like a slightly thick wedding band, yet contains an impressive array of sensors and capabilities, including:
Two infrared LEDs
One infrared sensor
Three temperature sensors
One accelerometer
A six-axis gyro
A curved battery with a seven-day life
The memory, processing, and transmission capability required to connect with your smartphone
Disrupting Medical Imaging Hardware
In 2018, we saw lab breakthroughs that will drive the cost of an ultrasound sensor to below $100, in a package smaller than most bandages and powered by a smartphone. Dramatically disrupting ultrasound is just the beginning.
Nanobots and Nanonetworks
While wearables have long been able to track and transmit our steps, heart rate, and other health data, smart nanobots and ingestible sensors will soon be able to monitor countless new parameters and even help diagnose disease.
Some of the most exciting breakthroughs in smart nanotechnology from the past year include:
Researchers from the École Polytechnique Fédérale de Lausanne (EPFL) and the Swiss Federal Institute of Technology in Zurich (ETH Zurich) demonstrated artificial microrobots that can swim and navigate through different fluids, independent of additional sensors, electronics, or power transmission.
Researchers at the University of Chicago proposed specific arrangements of DNA-based molecular logic gates to capture the information contained in the temporal portion of our cells’ communication mechanisms. Accessing the otherwise-lost time-dependent information of these cellular signals is akin to knowing the tune of a song, rather than solely the lyrics.
MIT researchers built micron-scale robots able to sense, record, and store information about their environment. These tiny robots, about 100 micrometers in diameter (approximately the size of a human egg cell), can also carry out pre-programmed computational tasks.
Engineers at University of California, San Diego developed ultrasound-powered nanorobots that swim efficiently through your blood, removing harmful bacteria and the toxins they produce.
But it doesn’t stop there.
As nanosensor and nanonetworking capabilities develop, these tiny bots may soon communicate with each other, enabling the targeted delivery of drugs and autonomous corrective action.
Mobile Health
The OURA ring and the Series 4 Apple Watch are just the tip of the spear when it comes to our future of mobile health. This field, predicted to become a $102 billion market by 2022, puts an on-demand virtual doctor in your back pocket.
Step aside, WebMD.
In true exponential technology fashion, mobile device penetration has increased dramatically, while image recognition error rates and sensor costs have sharply declined.
As a result, AI-powered medical chatbots are flooding the market; diagnostic apps can identify anything from a rash to diabetic retinopathy; and with the advent of global connectivity, mHealth platforms enable real-time health data collection, transmission, and remote diagnosis by medical professionals.
Already available to residents across North London, Babylon Health offers immediate medical advice through AI-powered chatbots and video consultations with doctors via its app.
Babylon now aims to build up its AI for advanced diagnostics and even prescription. Others, like Woebot, take on mental health, delivering cognitive behavioral therapy over Facebook Messenger to patients suffering from depression.
In addition to phone apps and add-ons that test for fertility or autism, the now-FDA-approved Clarius L7 Linear Array Ultrasound Scanner can connect directly to iOS and Android devices and perform wireless ultrasounds at a moment’s notice.
Next, Healthy.io, an Israeli startup, uses your smartphone and computer vision to analyze traditional urine test strips—all you need to do is take a few photos.
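Healthy.io hasn’t published its pipeline here, but the general computer-vision idea can be sketched simply: sample the color of a reagent pad from a photo and match it against a reference chart. The file path, pad location, and chart values below are all hypothetical.

```python
# Toy sketch of the general computer-vision idea (not Healthy.io's actual pipeline):
# sample the average color of a reagent pad from a photo and match it to the
# nearest swatch on a reference chart. File path and chart values are made up.
import numpy as np
from PIL import Image

# Hypothetical reference chart for one analyte: swatch RGB -> reported level.
REFERENCE_CHART = {
    (250, 245, 150): "negative",
    (220, 210, 120): "trace",
    (180, 170, 90):  "1+",
    (140, 120, 60):  "2+",
}

def read_pad_color(image_path, box):
    """Average RGB inside the pad's bounding box (left, upper, right, lower)."""
    pad = np.asarray(Image.open(image_path).convert("RGB").crop(box), dtype=float)
    return pad.reshape(-1, 3).mean(axis=0)

def match_to_chart(rgb):
    """Nearest reference swatch by Euclidean distance in RGB space."""
    return min(REFERENCE_CHART.items(),
               key=lambda item: np.linalg.norm(np.array(item[0]) - rgb))[1]

# Example (hypothetical file and pad location):
# color = read_pad_color("strip_photo.jpg", box=(120, 300, 160, 340))
# print(match_to_chart(color))
```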
With mHealth platforms like ClickMedix, which connects remotely-located patients to medical providers through real-time health data collection and transmission, what’s to stop us from delivering needed treatments through drone delivery or robotic telesurgery?
Welcome to the age of smartphone-as-a-medical-device.
Conclusion
With these DIY data collection and diagnostic tools, we save on transportation costs (both time and money) and cut out today’s time bottlenecks.
No longer will you need to wait for your urine or blood results to go through the current information chain: samples will be sent to the lab, analyzed by a technician, results interpreted by your doctor, and only then relayed to you.
Just like the “sage-on-the-stage” issue with today’s education system, healthcare has a “doctor-on-the-dais” problem. Current medical procedures are too complicated and expensive for a layperson to perform and analyze on their own.
The coming abundance of healthcare data promises to transform how we approach healthcare, putting the power of exponential technologies in the patient’s hands and revolutionizing how we live.
Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.
Image Credit: Titima Ongkantong / Shutterstock.com
AI Is Rapidly Augmenting Healthcare and ...
When it comes to the future of healthcare, perhaps the only technology more powerful than CRISPR is artificial intelligence.
Over the past five years, healthcare AI startups around the globe raised over $4.3 billion across 576 deals, topping all other industries in AI deal activity.
During this same period, the FDA has given 70 AI healthcare tools and devices ‘fast-tracked approval’ because of their ability to save both lives and money.
The pace of AI-augmented healthcare innovation is only accelerating.
In Part 3 of this blog series on longevity and vitality, I cover the different ways in which AI is augmenting our healthcare system, enabling us to live longer and healthier lives.
In this blog, I’ll expand on:
Machine learning and drug design
Artificial intelligence and big data in medicine
Healthcare, AI & China
Let’s dive in.
Machine Learning in Drug Design
What if AI systems, specifically neural networks, could predict the design of novel molecules (i.e. medicines) capable of targeting and curing any disease?
Imagine leveraging cutting-edge artificial intelligence to accomplish with 50 people what the pharmaceutical industry can barely do with an army of 5,000.
And what if these molecules, accurately engineered by AIs, always worked? Such a feat would revolutionize our $1.3 trillion global pharmaceutical industry, which currently holds a dismal record of 1 in 10 target drugs ever reaching human trials.
It’s no wonder that drug development is massively expensive and slow. It takes over 10 years to bring a new drug to market, with costs ranging from $2.5 billion to $12 billion.
This inefficient, slow-to-innovate, and risk-averse industry is a sitting duck for disruption in the years ahead.
One of the hottest startups in digital drug discovery today is Insilico Medicine. Leveraging AI in its end-to-end drug discovery pipeline, Insilico Medicine aims to extend healthy longevity through drug discovery and aging research.
Their comprehensive drug discovery engine uses millions of samples and multiple data types to discover signatures of disease, identify the most promising protein targets, and generate perfect molecules for these targets. These molecules either already exist or can be generated de novo with the desired set of parameters.
In late 2018, Insilico’s CEO Dr. Alex Zhavoronkov announced the groundbreaking result of generating novel molecules for a challenging protein target with an unprecedented hit rate in under 46 days. This included both synthesis of the molecules and experimental validation in a biological test system—an impressive feat made possible by converging exponential technologies.
Underpinning Insilico’s drug discovery pipeline is a novel machine learning technique called Generative Adversarial Networks (GANs), used in combination with deep reinforcement learning.
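To give a flavor of the adversarial setup—and only that; this is not Insilico’s pipeline, which couples generative models with reinforcement learning and real chemistry data—here is a minimal GAN training loop on toy binary “fingerprint” vectors.

```python
# Minimal GAN sketch on toy binary "molecular fingerprint" vectors, to illustrate
# the adversarial setup named above. Not Insilico's pipeline: their system couples
# generative models with reinforcement learning and real chemistry data.
import torch
import torch.nn as nn

torch.manual_seed(0)
FP_BITS, LATENT = 128, 32

# Toy "real molecules": random sparse fingerprints (about 20 percent of bits set).
real_data = (torch.rand(5000, FP_BITS) < 0.2).float()

generator = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(),
                          nn.Linear(128, FP_BITS), nn.Sigmoid())
discriminator = nn.Sequential(nn.Linear(FP_BITS, 128), nn.LeakyReLU(0.2),
                              nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(500):
    real = real_data[torch.randint(0, len(real_data), (64,))]
    fake = generator(torch.randn(64, LATENT))

    # Discriminator: tell real fingerprints from generated ones.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

candidate = (generator(torch.randn(1, LATENT)) > 0.5).float()
print("generated fingerprint bits set:", int(candidate.sum()))
```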
Generating novel molecular structures for diseases both with and without known targets, Insilico is now pursuing drug discovery in aging, cancer, fibrosis, Parkinson’s disease, Alzheimer’s disease, ALS, diabetes, and many others. Once rolled out, the implications will be profound.
Dr. Zhavoronkov’s ultimate goal is to develop a fully-automated Health-as-a-Service (HaaS) and Longevity-as-a-Service (LaaS) engine.
Once plugged into the services of companies from Alibaba to Alphabet, such an engine would enable personalized solutions for online users, helping them prevent diseases and maintain optimal health.
Insilico, alongside other companies tackling AI-powered drug discovery, truly represents the application of the 6 D’s. What was once a prohibitively expensive and human-intensive process is now rapidly becoming digitized, dematerialized, demonetized and, perhaps most importantly, democratized.
Companies like Insilico can now do with a fraction of the cost and personnel what the pharmaceutical industry can barely accomplish with thousands of employees and a hefty bill to foot.
As I discussed in my blog on ‘The Next Hundred-Billion-Dollar Opportunity,’ Google’s DeepMind has now turned its neural networks to healthcare, entering the digitized drug discovery arena.
In 2017, DeepMind achieved a phenomenal feat by matching the fidelity of medical experts in correctly diagnosing over 50 eye disorders.
And just a year later, DeepMind announced a new deep learning tool called AlphaFold. By predicting the elusive ways in which various proteins fold on the basis of their amino acid sequences, AlphaFold may soon have a tremendous impact in aiding drug discovery and fighting some of today’s most intractable diseases.
Artificial Intelligence and Data Crunching
AI is especially powerful in analyzing massive quantities of data to uncover patterns and insights that can save lives. Take WAVE, for instance. Every year, over 400,000 patients die prematurely in US hospitals as a result of heart attack or respiratory failure.
Yet these patients don’t die without leaving plenty of clues. Given information overload, however, human physicians and nurses alone have no way of processing and analyzing all necessary data in time to save these patients’ lives.
Enter WAVE, an algorithm that can process enough data to offer a six-hour early warning of patient deterioration.
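The article doesn’t describe WAVE’s internals, but the general early-warning recipe can be sketched as follows: train a classifier on windows of vital signs labeled by whether the patient later deteriorated, then alarm when the predicted risk crosses a threshold. All data, features, and thresholds below are synthetic stand-ins.

```python
# Illustrative sketch of the general early-warning idea (not WAVE's actual model):
# train a classifier on windows of vital signs labeled by whether the patient
# deteriorated hours later, then alarm when predicted risk crosses a threshold.
# All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Features per window: mean heart rate, mean respiratory rate, min SpO2, mean systolic BP.
n = 4000
X = np.column_stack([
    rng.normal(80, 12, n),    # heart rate (bpm)
    rng.normal(16, 3, n),     # respiratory rate (breaths/min)
    rng.normal(96, 2.5, n),   # SpO2 (%)
    rng.normal(120, 15, n),   # systolic BP (mmHg)
])
# Synthetic label: deterioration six hours later, more likely with tachycardia,
# tachypnea, low SpO2, and hypotension.
risk = 0.04*(X[:, 0]-80) + 0.15*(X[:, 1]-16) - 0.3*(X[:, 2]-96) - 0.03*(X[:, 3]-120)
y = (risk + rng.normal(0, 1, n) > 1.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

new_window = np.array([[118, 26, 90, 95]])            # a deteriorating-looking patient
risk_score = model.predict_proba(new_window)[0, 1]
print(f"predicted deterioration risk: {risk_score:.2f}",
      "-> ALERT" if risk_score > 0.5 else "-> ok")
```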
Just last year, the FDA approved WAVE, an AI-based patient surveillance system, to predict and thereby help prevent sudden death.
Another highly valuable yet difficult-to-parse mountain of medical data is the 2.5 million medical papers published each year.
For some time now, it has been physically impossible for a human physician to read—let alone remember—all of the relevant published data.
To counter this compounding conundrum, Johnson & Johnson is teaching IBM Watson to read and understand scientific papers that detail clinical trial outcomes.
Enriching Watson’s data sources, Apple is also partnering with IBM to provide access to health data from mobile apps.
One such Watson system contains 40 million documents, ingesting an average of 27,000 new documents per day, and providing insights for thousands of users.
After only one year, Watson’s successful diagnosis rate of lung cancer has reached 90 percent, compared to the 50 percent success rate of human doctors.
But what about the vast amount of unstructured medical patient data that populates today’s ancient medical system? This includes medical notes, prescriptions, audio interview transcripts, and pathology and radiology reports.
In late 2018, Amazon announced a new HIPAA-eligible machine learning service that digests and parses unstructured data into categories, such as patient diagnoses, treatments, dosages, symptoms and signs.
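Amazon’s model isn’t detailed here, so the sketch below illustrates only the task itself with deliberately simple rules: pulling category-tagged entities—diagnoses, medications, dosages, symptoms—out of free-text notes. The vocabularies and sample note are made up.

```python
# Deliberately simple, rule-based sketch of the task described above (not Amazon's
# service): pull category-tagged entities out of free-text clinical notes.
# Vocabularies and the note are made up.
import re

CATEGORY_TERMS = {
    "diagnosis":  ["type 2 diabetes", "hypertension", "pneumonia"],
    "medication": ["metformin", "lisinopril", "amoxicillin"],
    "symptom":    ["shortness of breath", "chest pain", "fatigue"],
}
DOSAGE_PATTERN = re.compile(r"\b\d+(\.\d+)?\s?(mg|mcg|ml|units)\b", re.IGNORECASE)

def categorize_note(note):
    note_lower = note.lower()
    found = {cat: [t for t in terms if t in note_lower]
             for cat, terms in CATEGORY_TERMS.items()}
    found["dosage"] = [m.group(0) for m in DOSAGE_PATTERN.finditer(note)]
    return found

note = ("Pt with type 2 diabetes and fatigue, started on metformin 500 mg twice daily; "
        "continue lisinopril 10 mg for hypertension.")
print(categorize_note(note))
```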
Taha Kass-Hout, Amazon’s senior leader in health care and artificial intelligence, told the Wall Street Journal that internal tests demonstrated that the software even performs as well as or better than other published efforts.
On the heels of this announcement, Amazon confirmed it was teaming up with the Fred Hutchinson Cancer Research Center to evaluate “millions of clinical notes to extract and index medical conditions.”
Having already driven extraordinary algorithmic success rates in other fields, data is the healthcare industry’s goldmine for future innovation.
Healthcare, AI & China
In 2017, the Chinese government published its ambitious national plan to become a global leader in AI research by 2030, with healthcare listed as one of four core research areas during the first wave of the plan.
Just a year earlier, China began centralizing healthcare data, tackling a major roadblock to developing longevity and healthcare technologies (particularly AI systems): scattered, dispersed, and unlabeled patient data.
Backed by the Chinese government, China’s largest tech companies—particularly Tencent—have now made strong entrances into healthcare.
Just recently, Tencent participated in a $154 million megaround for China-based healthcare AI unicorn iCarbonX.
Hoping to develop a complete digital representation of your biological self, iCarbonX has acquired numerous US personalized medicine startups.
Beyond Tencent’s own Miying healthcare AI platform—aimed at assisting healthcare institutions with AI-driven cancer diagnostics—the company is quickly expanding into the drug discovery space, participating in two multimillion-dollar, US-based AI drug discovery deals just this year.
China’s biggest second-order move into the healthtech space comes through Tencent’s WeChat. In the course of a few years, 60 percent of the 38,000 medical institutions registered on WeChat have come to let patients digitally book appointments through Tencent’s mobile platform, and 2,000 Chinese hospitals now accept WeChat payments.
Tencent has additionally partnered with the U.K.’s Babylon Health, a virtual healthcare assistant startup whose app now allows Chinese WeChat users to message their symptoms and receive immediate medical feedback.
Similarly, Alibaba’s healthtech focus started in 2016 when it released its cloud-based AI medical platform, ET Medical Brain, to augment healthcare processes through everything from diagnostics to intelligent scheduling.
Conclusion
As Nvidia CEO Jensen Huang has stated, “Software ate the world, but AI is going to eat software.” Extrapolating this statement to a more immediate implication, AI will first eat healthcare, resulting in dramatic acceleration of longevity research and an amplification of the human healthspan.
Next week, I’ll continue to explore this concept of AI systems in healthcare.
Particularly, I’ll expand on how we’re acquiring and using the data for these doctor-augmenting AI systems: from ubiquitous biosensors, to the mobile healthcare revolution, and finally, to the transformative power of the health nucleus.
As AI and other exponential technologies increase our healthspan by 30 to 40 years, how will you leverage these same exponential technologies to take on your moonshots and live out your massively transformative purpose?
Image Credit: Zapp2Photo / Shutterstock.com
How AR and VR Will Shape the Future of ...
How we work and play is about to transform.
After a prolonged technology “winter”—or what I like to call the ‘deceptive growth’ phase of any exponential technology—the hardware and software that power virtual reality (VR) and augmented reality (AR) applications are accelerating at an extraordinary rate.
Unprecedented new applications in almost every industry are exploding onto the scene.
Both VR and AR, combined with artificial intelligence, will significantly disrupt the “middleman” and make our lives “auto-magical.” The implications will touch every aspect of our lives, from education and real estate to healthcare and manufacturing.
The Future of Work
How and where we work is already changing, thanks to exponential technologies like artificial intelligence and robotics.
But virtual and augmented reality are taking the future workplace to an entirely new level.
Virtual Reality Case Study: eXp Realty
I recently interviewed Glenn Sanford, who founded eXp Realty in 2008 (imagine: a real estate company on the heels of the housing market collapse) and is the CEO of eXp World Holdings.
Ten years later, eXp Realty has an army of 14,000 agents across all 50 US states, three Canadian provinces, and 400 MLS market areas… all without a single traditional staffed office.
In a bid to transition from 2D interfaces to immersive, 3D work experiences, virtual platform VirBELA built out the company’s office space in VR, unlocking indefinite scaling potential and an extraordinary new precedent.
Real estate agents, managers, and even clients gather in a unique virtual campus, replete with a sports field, library, and lobby. It’s all accessible via head-mounted displays, but most agents join with a computer browser. Surprisingly, the campus-style setup enables the same type of water-cooler conversations I see every day at the XPRIZE headquarters.
With this centralized VR campus, eXp Realty has essentially thrown out overhead costs and entered a lucrative market without the same constraints of brick-and-mortar businesses.
Delocalize with VR, and you can now hire anyone with internet access (right next door or on the other side of the planet), redesign your corporate office every month, throw in an ocean-view office or impromptu conference room for client meetings, and forget about guzzled-up hours in traffic.
As a leader, what happens when you can scalably expand and connect your workforce, not to mention your customer base, without the excess overhead of office space and furniture? Your organization can run faster and farther than your competition.
But beyond the indefinite scalability achieved through digitizing your workplace, VR’s implications extend to the lives of your employees and even the future of urban planning:
Home Prices: As virtual headquarters and office branches take hold of the 21st-century workplace, those who work on campuses like eXp Realty’s won’t need to commute to work. As a result, VR has the potential to dramatically influence real estate prices—after all, if you don’t need to drive to an office, your home search isn’t limited to a specific set of neighborhoods anymore.
Transportation: In major cities like Los Angeles and San Francisco, the implications are tremendous. Analysts have revealed that it’s already cheaper to use ride-sharing services like Uber and Lyft than to own a car in many major cities. And once autonomous “Car-as-a-Service” platforms proliferate, associated transportation costs like parking fees, fuel, and auto repairs will no longer fall on the individual, if not entirely disappear.
Augmented Reality: Annotate and Interact with Your Workplace
As I discussed in a recent Spatial Web blog, not only will Web 3.0 and VR advancements allow us to build out virtual worlds, but we’ll soon be able to digitally map our real-world physical offices or entire commercial high-rises.
Enter a professional world electrified by augmented reality.
Our workplaces are practically littered with information. File cabinets abound with archival data and relevant documents, and company databases continue to grow at a breakneck pace. And, as all of us are increasingly aware, cybersecurity and robust data permission systems remain a major concern for CEOs and national security officials alike.
What if we could link that information to specific locations, people, time frames, and even moving objects?
As data gets added and linked to any given employee’s office, conference room, or security system, we might then access online-merge-offline environments and information through augmented reality.
Imagine showing up at your building’s concierge and your AR glasses automatically check you into the building, authenticating your identity and pulling up any reminders you’ve linked to that specific location.
You stop by a friend’s office, and his smart security system lets you know he’ll arrive in an hour. Need to book a public conference room that’s already been scheduled by another firm’s marketing team? Offer to pay them a fee and, once accepted, a smart transaction will automatically deliver a payment to their company account.
With blockchain-verified digital identities, spatially logged data, and virtually manifest information, business logistics will take a fraction of the time, operations will grow seamless, and corporate data will be safer than ever.
Or better yet, imagine precise and high-dexterity work environments populated with interactive annotations that guide an artisan, surgeon, or engineer through meticulous handiwork.
Take, for instance, AR service 3D4Medical, which annotates virtual anatomy in midair. And as augmented reality hardware continues to advance, we might envision a future wherein surgeons perform operations on annotated organs and magnified incision sites, or one in which quantum computer engineers can magnify and annotate mechanical parts, speeding up reaction times and vastly improving precision.
The Future of Free Time and Play
In Abundance, I wrote about today’s rapidly demonetizing cost of living. In 2011, almost 75 percent of the average American’s income was spent on housing, transportation, food, personal insurance, health, and entertainment. What the headlines don’t mention: this is a dramatic improvement over the last 50 years. We’re spending less on basic necessities and working fewer hours than previous generations.
Chart depicts the average weekly work hours for full-time production employees in non-agricultural activities. Source: Diamandis.com data
Technology continues to change this—to take care of us and do our work for us. One phrase that describes this is “technological socialism,” where it’s technology, not the government, that takes care of us.
Extrapolating from the data, I believe we are heading towards a post-scarcity economy. Perhaps we won’t need to work at all, because we’ll own and operate our own fleet of robots or AI systems that do our work for us.
As living expenses demonetize and workplace automation increases, what will we do with this abundance of time? How will our children and grandchildren connect and find their purpose if they don’t have to work for a living?
As I write this on a Saturday afternoon and watch my two seven-year-old boys immersed in Minecraft, building and exploring worlds of their own creation, I can’t help but imagine that this future is about to enter its disruptive phase.
Exponential technologies are enabling a new wave of highly immersive games, virtual worlds, and online communities. We’ve likely all heard of the Oasis from Ready Player One. But far beyond what we know today as ‘gaming,’ VR is fast becoming a home to immersive storytelling, interactive films, and virtual world creation.
Within the virtual world space, let’s take one of today’s greatest precursors, the aforementioned game Minecraft.
For reference, Minecraft is over eight times the size of planet Earth. And in their free time, my kids would rather build in Minecraft than almost any other activity. I think of it as their primary passion: to create worlds, explore worlds, and be challenged in worlds.
And in the near future, we’re all going to become creators of or participants in virtual worlds, each populated with assets and storylines interoperable with other virtual environments.
But while the technological methods are new, this concept has been alive and well for generations. Whether you got lost in the world of Heidi or Harry Potter, grew up reading comic books or watching television, we’ve all been playing in imaginary worlds, with characters and story arcs populating our minds. That’s the nature of childhood.
In the past, however, your ability to edit those stories was limited, especially if a given story came in some form of 2D media. I couldn’t change where Tom Sawyer was going or what Iron Man was doing. But as a slew of new software advancements underlying VR and AR allows us to interact with characters and gain (albeit limited, for now) agency, both new and legacy stories will become subjects of our creation and playgrounds for virtual interaction.
Take VR/AR storytelling startup Fable Studio’s Wolves in the Walls film. Debuting at the 2018 Sundance Film Festival, Fable’s immersive story is adapted from Neil Gaiman’s book and tracks the protagonist, Lucy, whose programming allows her to respond differently based on what her viewers do.
And while Lucy can merely hand virtual cameras to her viewers among other limited tasks, Fable Studio’s founder Edward Saatchi sees this project as just the beginning.
Imagine a virtual character—in augmented or virtual reality—equipped with AI capabilities that can not only participate in a fictional storyline but also interact and converse directly with you across a host of virtual and digitally overlaid environments.
Or imagine engaging with a less-structured environment, like the Star Wars cantina, populated with strangers and friends to provide an entirely novel social media experience.
Already, we’ve seen characters like that of Pokémon brought into the real world with Pokémon Go, populating cities and real spaces with holograms and tasks. And just as augmented reality has the power to turn our physical environments into digital gaming platforms, advanced AR could bring on a new era of in-home entertainment.
Imagine transforming your home into a narrative environment for your kids or overlaying your office interior design with Picasso paintings and gothic architecture. As computer vision rapidly grows capable of identifying objects and mapping virtual overlays atop them, we might also one day be able to project home theaters or live sports within our homes, broadcasting full holograms that allow us to zoom into the action and place ourselves within it.
Increasingly honed and commercialized, augmented and virtual reality are on the cusp of revolutionizing the way we play, tell stories, create worlds, and interact with both fictional characters and each other.
Image Credit: nmedia / Shutterstock.com