Robots have been masters of manufacturing at speed and precision for decades, but give them a seemingly simple task like stacking shelves, and they quickly get stuck. That’s changing, though, as engineers build systems that can take on the deceptively tricky tasks most humans can do with their eyes closed.
Boston Dynamics is famous for dramatic reveals of robots performing mind-blowing feats that also leave you scratching your head as to what the market is—think the bipedal Atlas doing backflips or Spot the galloping robot dog.
Last week, the company released a video of a robot called Handle that looks like an ostrich on wheels carrying out the seemingly mundane task of stacking boxes in a warehouse.
It might seem like a step backward, but this is exactly the kind of practical task robots have long struggled with. While the speed and precision of industrial robots have seen them take over many functions in modern factories, they’re generally limited to highly prescribed tasks carried out in meticulously controlled environments.
That’s because despite their mechanical sophistication, most are still surprisingly dumb. They can carry out precision welding on a car or rapidly assemble electronics, but only by rigidly following a prescribed set of motions. Moving cardboard boxes around a warehouse might seem simple to a human, but it actually involves a variety of tasks machines still find pretty difficult—perceiving your surroundings, navigating, and interacting with objects in a dynamic environment.
But the release of this video suggests Boston Dynamics thinks these kinds of applications are close to prime time. Last week the company doubled down by announcing the acquisition of start-up Kinema Systems, which builds computer vision systems for robots working in warehouses.
It’s not the only company making strides in this area. On the same day the video went live, Google unveiled a robot arm called TossingBot that can pick random objects from a box and quickly toss them into another container beyond its reach, which could prove very useful for sorting items in a warehouse. The machine can train on new objects in just an hour or two, and can pick and toss up to 500 items an hour with better accuracy than any of the humans who tried the task.
And an apple-picking robot built by Abundant Robotics is currently on New Zealand farms navigating between rows of apple trees using LIDAR and computer vision to single out ripe apples before using a vacuum tube to suck them off the tree.
In most cases, advances in machine learning and computer vision brought about by the recent AI boom are the keys to these rapidly improving capabilities. Robots have historically had to be painstakingly programmed by humans to solve each new task, but deep learning is making it possible for them to quickly train themselves on a variety of perception, navigation, and dexterity tasks.
It’s not been simple, though, and the application of deep learning in robotics has lagged behind other areas. A major limitation is that the process typically requires huge amounts of training data. That’s fine when you’re dealing with image classification, but when that data needs to be generated by real-world robots it can make the approach impractical. Simulations offer the possibility to run this training faster than real time, but it’s proved difficult to translate policies learned in virtual environments into the real world.
Recent years have seen significant progress on these fronts, though, and the increasing integration of modern machine learning with robotics. In October, OpenAI imbued a robotic hand with human-level dexterity by training an algorithm in a simulation using reinforcement learning before transferring it to the real-world device. The key to ensuring the translation went smoothly was injecting random noise into the simulation to mimic some of the unpredictability of the real world.
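The randomization trick described above can be sketched in a few lines: every simulated training episode samples its physics parameters from broad ranges, so a policy that succeeds across all of them is more likely to survive contact with the one real world. The parameter names and ranges below are illustrative stand-ins, not OpenAI’s actual values (the real system randomized dozens of physics and vision parameters):

```python
import random

# Illustrative ranges only -- the actual Dactyl system randomized many
# more physics and rendering parameters; these names and bounds are made up.
PARAM_RANGES = {
    "object_mass": (0.3, 0.7),    # kg
    "friction": (0.5, 1.5),       # surface friction coefficient
    "motor_gain": (0.8, 1.2),     # actuator strength scaling
    "sensor_noise": (0.0, 0.05),  # std of noise added to observations
}

def sample_randomized_sim():
    """Draw one randomized simulator configuration per training episode."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

def run_episode(policy, sim_params):
    """Placeholder: train or evaluate the policy in a sim built from sim_params."""
    ...

# Training loop: the policy never sees the same physics twice, which forces
# it to learn behavior robust to the gap between simulation and reality.
for episode in range(3):
    params = sample_randomized_sim()
    # run_episode(policy, params)
```

The design choice is the point: rather than making the simulator more accurate, you make the policy indifferent to the simulator’s inaccuracies.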
And just a couple of weeks ago, MIT researchers demonstrated a new technique that let a robot arm learn to manipulate new objects with far less training data than is usually required. By getting the algorithm to focus on a few key points on the object necessary for picking it up, the system could learn to pick up a previously unseen object after seeing only a few dozen examples (rather than the hundreds or thousands typically required).
How quickly these innovations will trickle down to practical applications remains to be seen, but a number of startups as well as logistics behemoth Amazon are developing robots designed to flexibly pick and place the wide variety of items found in your average warehouse.
Whether the economics of using robots to replace humans at these kinds of menial tasks makes sense yet is still unclear. The collapse of collaborative robotics pioneer Rethink Robotics last year suggests there are still plenty of challenges.
But at the same time, the number of robotic warehouses is expected to leap from 4,000 today to 50,000 by 2025. It may not be long until robots are muscling in on tasks we’ve long assumed only humans could do.
Image Credit: Visual Generation / Shutterstock.com
Training a doctor takes years of grueling work in universities and hospitals. Building a doctor may be as easy as teaching an AI how to read.
Artificial intelligence has taken another step towards becoming an integral part of 21st-century medicine. New research out of Guangzhou, China, published February 11th in Nature Medicine, has demonstrated a natural language processing AI that is capable of outperforming rookie pediatricians in diagnosing common childhood ailments.
The massive study examined the electronic health records (EHR) from nearly 600,000 patients over an 18-month period at the Guangzhou Women and Children’s Medical Center and then compared AI-generated diagnoses against new assessments from physicians with a range of experience.
The verdict? On average, the AI was noticeably more accurate than junior physicians and nearly as reliable as the more senior ones. These results are the latest demonstration that artificial intelligence is on the cusp of becoming a healthcare staple on a global scale.
Less Like a Computer, More Like a Person
To outshine human doctors, the AI first had to become more human. Like IBM’s Watson, the pediatric AI leverages natural language processing, in essence “reading” written notes from EHRs not unlike how a human doctor would review those same records. But the similarities to human doctors don’t end there. The AI is a machine learning classifier (MLC), capable of placing the information learned from the EHRs into categories to improve performance.
Like traditionally trained pediatricians, the AI broke cases down into major organ groups and infection areas (upper/lower respiratory, gastrointestinal, etc.) before breaking them down even further into subcategories. It could then develop associations between various symptoms and organ groups and use those associations to improve its diagnoses. This hierarchical approach mimics the deductive reasoning human doctors employ.
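The hierarchy described above can be sketched as a two-stage classifier: a first model routes a case to an organ system, and a per-system model then picks the specific diagnosis within it. In this toy sketch, simple keyword-matching rules stand in for the study’s actual trained models, and the category and diagnosis names are made up for illustration:

```python
# Toy two-stage (hierarchical) diagnosis: route to an organ system first,
# then classify within that system. Keyword rules stand in for real models.
ORGAN_SYSTEMS = {
    "respiratory": ["cough", "wheeze", "sore throat"],
    "gastrointestinal": ["vomiting", "diarrhea", "abdominal pain"],
}

DIAGNOSES = {
    "respiratory": {"bronchitis": ["cough"], "asthma": ["wheeze"]},
    "gastrointestinal": {"gastroenteritis": ["vomiting", "diarrhea"]},
}

def classify_organ_system(note):
    """Stage 1: pick the organ system whose keywords best match the note."""
    scores = {sys: sum(kw in note for kw in kws) for sys, kws in ORGAN_SYSTEMS.items()}
    return max(scores, key=scores.get)

def classify_diagnosis(note):
    """Stage 2: within that system, pick the best-matching diagnosis."""
    system = classify_organ_system(note)
    scores = {dx: sum(kw in note for kw in kws) for dx, kws in DIAGNOSES[system].items()}
    return system, max(scores, key=scores.get)

print(classify_diagnosis("two days of vomiting and diarrhea"))
```

Narrowing the candidate set at each level is what makes the approach mirror a clinician’s workup: first "where is the problem," then "which problem is it."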
Another key strength of the AI developed for this study was the enormous size of the dataset collected to teach it: 1,362,559 outpatient visits from 567,498 patients yielded some 101.6 million data points for the MLC to devour on its quest for pediatric dominance. This allowed the AI the depth of learning needed to distinguish and accurately select from the 55 different diagnosis codes across the various organ groups and subcategories.
When comparing against the human doctors, the study used 11,926 records from an unrelated group of children, giving both the MLC and the 20 humans it was compared against an even playing field. The results were clear: while cohorts of senior pediatricians performed better than the AI, junior pediatricians (those with 3-15 years of experience) were outclassed.
Helping, Not Replacing
While the research used a competitive analysis to measure the success of the AI, the results should be seen as anything but hostile to human doctors. The near future of artificial intelligence in medicine will see these machine learning programs augment, not replace, human physicians. The authors of the study specifically call out augmentation as the key short-term application of their work. Triaging incoming patients via intake forms, performing massive metastudies using EHRs, providing rapid ‘second opinions’—the applications for an AI doctor that is better-but-not-the-best are as varied as the healthcare industry itself.
That’s only considering how artificial intelligence could make a positive impact immediately upon implementation. It’s easy to see how long-term use of a diagnostic assistant could reshape the way modern medical institutions approach their work.
Look at how the MLC results fit snugly between the junior and senior physician groups. Essentially, it took nearly 15 years before a physician could consistently out-diagnose the machine. That’s a decade and a half wherein an AI diagnostic assistant would be an invaluable partner—both as a training tool and a safety measure. Likewise, on the other side of the experience curve you have physicians whose performance could be continuously leveraged to improve the AI’s effectiveness. This is a clear opportunity for a symbiotic relationship, with humans and machines each assisting the other as they mature.
Closer to Us, But Still Dependent on Us
No matter the ultimate application, the AI doctors of the future are drawing nearer to us step by step. This latest research is a demonstration that artificial intelligence can mimic the results of human deductive reasoning even in some of the most complex and important decision-making processes. True, the MLC required input from humans to function; both the initial data points and the cases used to evaluate the AI depended on EHRs written by physicians. While every effort was made to design a test schema that removed any indication of the eventual diagnosis, some “data leakage” is bound to occur.
In other words, when AIs use human-created data, they inherit human insight to some degree. Yet the progress made in machine imaging, chatbots, sensors, and other fields all suggest that this dependence on human input is more about where we are right now than where we could be in the near future.
Data, and More Data
That near future may also have some clear winners and losers. For now, those winners seem to be the institutions that can capture and apply the largest sets of data. With a rapidly digitized society gathering incredible amounts of data, China has a clear advantage. Combined with their relatively relaxed approach to privacy, they are likely to continue as one of the driving forces behind machine learning and its applications. So too will Google/Alphabet with their massive medical studies. Data is the uranium in this AI arms race, and everyone seems to be scrambling to collect more.
In a global community that seems increasingly aware of the potential problems arising from this need for and reliance on data, it’s nice to know there’ll be an upside as well. The technology behind AI medical assistants is looking more and more mature—even if we are still struggling to find exactly where, when, and how that technology should first become universal.
Yet wherever we see the next push to make AI a standard tool in a real-world medical setting, I have little doubt it will greatly improve the lives of human patients. Today Doctor AI performs as well as a human colleague with more than 10 years of experience. Within a year or so, physicians may need twice that much experience to keep pace. And in a decade, the combined medical knowledge of all human history may be a tool as common as a stethoscope in your doctor’s hands.
Image Credit: Nadia Snopek / Shutterstock.com
When it comes to the future of healthcare, perhaps the only technology more powerful than CRISPR is artificial intelligence.
Over the past five years, healthcare AI startups around the globe raised over $4.3 billion across 576 deals, topping all other industries in AI deal activity.
During this same period, the FDA has given 70 AI healthcare tools and devices ‘fast-tracked approval’ because of their ability to save both lives and money.
The pace of AI-augmented healthcare innovation is only accelerating.
In Part 3 of this blog series on longevity and vitality, I cover the different ways in which AI is augmenting our healthcare system, enabling us to live longer and healthier lives.
In this blog, I’ll expand on:
Machine learning and drug design
Artificial intelligence and big data in medicine
Healthcare, AI & China
Let’s dive in.
Machine Learning in Drug Design
What if AI systems, specifically neural networks, could predict the design of novel molecules (i.e. medicines) capable of targeting and curing any disease?
Imagine leveraging cutting-edge artificial intelligence to accomplish with 50 people what the pharmaceutical industry can barely do with an army of 5,000.
And what if these molecules, accurately engineered by AIs, always worked? Such a feat would revolutionize our $1.3 trillion global pharmaceutical industry, which currently holds a dismal record of 1 in 10 target drugs ever reaching human trials.
It’s no wonder that drug development is massively expensive and slow. It takes over 10 years to bring a new drug to market, with costs ranging from $2.5 billion to $12 billion.
This inefficient, slow-to-innovate, and risk-averse industry is a sitting duck for disruption in the years ahead.
One of the hottest startups in digital drug discovery today is Insilico Medicine. Leveraging AI in its end-to-end drug discovery pipeline, Insilico Medicine aims to extend healthy longevity through drug discovery and aging research.
Their comprehensive drug discovery engine uses millions of samples and multiple data types to discover signatures of disease, identify the most promising protein targets, and generate perfect molecules for these targets. These molecules either already exist or can be generated de novo with the desired set of parameters.
In late 2018, Insilico’s CEO Dr. Alex Zhavoronkov announced the groundbreaking result of generating novel molecules for a challenging protein target with an unprecedented hit rate in under 46 days. This included both synthesis of the molecules and experimental validation in a biological test system—an impressive feat made possible by converging exponential technologies.
Underpinning Insilico’s drug discovery pipeline is a novel machine learning technique called Generative Adversarial Networks (GANs), used in combination with deep reinforcement learning.
Generating novel molecular structures for diseases both with and without known targets, Insilico is now pursuing drug discovery in aging, cancer, fibrosis, Parkinson’s disease, Alzheimer’s disease, ALS, diabetes, and many others. Once rolled out, the implications will be profound.
Dr. Zhavoronkov’s ultimate goal is to develop a fully-automated Health-as-a-Service (HaaS) and Longevity-as-a-Service (LaaS) engine.
Once plugged into the services of companies from Alibaba to Alphabet, such an engine would enable personalized solutions for online users, helping them prevent diseases and maintain optimal health.
Insilico, alongside other companies tackling AI-powered drug discovery, truly represents the application of the 6 D’s. What was once a prohibitively expensive and human-intensive process is now rapidly becoming digitized, dematerialized, demonetized and, perhaps most importantly, democratized.
Companies like Insilico can now do with a fraction of the cost and personnel what the pharmaceutical industry can barely accomplish with thousands of employees and a hefty bill to foot.
As I discussed in my blog on ‘The Next Hundred-Billion-Dollar Opportunity,’ Google’s DeepMind has now turned its neural networks to healthcare, entering the digitized drug discovery arena.
In 2017, DeepMind achieved a phenomenal feat by matching the fidelity of medical experts in correctly diagnosing over 50 eye disorders.
And just a year later, DeepMind announced a new deep learning tool called AlphaFold. By predicting the elusive ways in which various proteins fold on the basis of their amino acid sequences, AlphaFold may soon have a tremendous impact in aiding drug discovery and fighting some of today’s most intractable diseases.
Artificial Intelligence and Data Crunching
AI is especially powerful in analyzing massive quantities of data to uncover patterns and insights that can save lives. Take WAVE, for instance. Every year, over 400,000 patients die prematurely in US hospitals as a result of heart attack or respiratory failure.
Yet these patients don’t die without leaving plenty of clues. Given information overload, however, human physicians and nurses alone have no way of processing and analyzing all necessary data in time to save these patients’ lives.
Enter WAVE, an algorithm that can process enough data to offer a six-hour early warning of patient deterioration.
Just last year, the FDA approved WAVE as an AI-based predictive patient surveillance system to predict and thereby prevent sudden death.
Another highly valuable yet difficult-to-parse mountain of medical data comprises the 2.5 million medical papers published each year.
For some time now, it has been physically impossible for a human physician to read—let alone remember—all of the relevant published data.
To counter this compounding conundrum, Johnson & Johnson is teaching IBM Watson to read and understand scientific papers that detail clinical trial outcomes.
Enriching Watson’s data sources, Apple is also partnering with IBM to provide access to health data from mobile apps.
One such Watson system contains 40 million documents, ingesting an average of 27,000 new documents per day, and providing insights for thousands of users.
After only one year, Watson’s successful diagnosis rate of lung cancer has reached 90 percent, compared to the 50 percent success rate of human doctors.
But what about the vast amount of unstructured medical patient data that populates today’s ancient medical system? This includes medical notes, prescriptions, audio interview transcripts, and pathology and radiology reports.
In late 2018, Amazon announced a new HIPAA-eligible machine learning service that digests and parses unstructured data into categories, such as patient diagnoses, treatments, dosages, symptoms and signs.
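The kind of extraction such a service performs can be illustrated with a deliberately naive sketch. Amazon’s actual service uses trained models, not pattern matching; the regex rules, category names, and vocabulary below are made up purely to show the input-to-structure transformation:

```python
import re

# Naive extraction of structured fields from free-text clinical notes.
# Real services use learned models; these regex patterns are illustrative.
PATTERNS = {
    "dosage": re.compile(r"\b\d+\s?(?:mg|ml|mcg)\b", re.IGNORECASE),
    "symptom": re.compile(r"\b(fever|cough|nausea|headache)\b", re.IGNORECASE),
    "medication": re.compile(r"\b(ibuprofen|amoxicillin|insulin)\b", re.IGNORECASE),
}

def extract_entities(note):
    """Return a category -> list-of-matches mapping for one free-text note."""
    return {cat: pat.findall(note) for cat, pat in PATTERNS.items()}

note = "Patient reports fever and cough; started amoxicillin 500 mg twice daily."
print(extract_entities(note))
```

The hard part a real system solves, and this sketch does not, is generalizing beyond a fixed vocabulary: abbreviations, misspellings, and negations ("denies fever") all require learned context, which is why this is a machine learning problem rather than a regex problem.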
Taha Kass-Hout, Amazon’s senior leader in health care and artificial intelligence, told the Wall Street Journal that internal tests demonstrated that the software even performs as well as or better than other published efforts.
On the heels of this announcement, Amazon confirmed it was teaming up with the Fred Hutchinson Cancer Research Center to evaluate “millions of clinical notes to extract and index medical conditions.”
Having already driven extraordinary algorithmic success rates in other fields, data is the healthcare industry’s goldmine for future innovation.
Healthcare, AI & China
In 2017, the Chinese government published its ambitious national plan to become a global leader in AI research by 2030, with healthcare listed as one of four core research areas during the first wave of the plan.
Just a year earlier, China began centralizing healthcare data, tackling a major roadblock to developing longevity and healthcare technologies (particularly AI systems): scattered, dispersed, and unlabeled patient data.
Backed by the Chinese government, China’s largest tech companies—particularly Tencent—have now made strong entrances into healthcare.
Just recently, Tencent participated in a $154 million megaround for China-based healthcare AI unicorn iCarbonX.
Hoping to develop a complete digital representation of your biological self, iCarbonX has acquired numerous US personalized medicine startups.
Considering Tencent’s own Miying healthcare AI platform—aimed at assisting healthcare institutions in AI-driven cancer diagnostics—Tencent is quickly expanding into the drug discovery space, participating in two multimillion-dollar, US-based AI drug discovery deals just this year.
China’s biggest second-order move into the healthtech space comes through Tencent’s WeChat. In just a few years, 60 percent of the 38,000 medical institutions registered on WeChat have begun allowing patients to book appointments digitally through Tencent’s mobile platform, and 2,000 Chinese hospitals now accept WeChat payments.
Tencent has additionally partnered with the U.K.’s Babylon Health, a virtual healthcare assistant startup whose app now allows Chinese WeChat users to message their symptoms and receive immediate medical feedback.
Similarly, Alibaba’s healthtech focus started in 2016 when it released its cloud-based AI medical platform, ET Medical Brain, to augment healthcare processes through everything from diagnostics to intelligent scheduling.
As Nvidia CEO Jensen Huang has stated, “Software ate the world, but AI is going to eat software.” Extrapolating this statement to a more immediate implication, AI will first eat healthcare, resulting in dramatic acceleration of longevity research and an amplification of the human healthspan.
Next week, I’ll continue to explore this concept of AI systems in healthcare.
Particularly, I’ll expand on how we’re acquiring and using the data for these doctor-augmenting AI systems: from ubiquitous biosensors, to the mobile healthcare revolution, and finally, to the transformative power of the health nucleus.
As AI and other exponential technologies increase our healthspan by 30 to 40 years, how will you leverage these same exponential technologies to take on your moonshots and live out your massively transformative purpose?
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.
Image Credit: Zapp2Photo / Shutterstock.com
Scarcely a day goes by without another headline about neural networks: some new task that deep learning algorithms can excel at, approaching or even surpassing human competence. As the application of this approach to computer vision has continued to improve, with algorithms capable of specialized recognition tasks like those found in medicine, the software is getting closer to widespread commercial use—for example, in self-driving cars. Our ability to recognize patterns is a huge part of human intelligence: if this can be done faster by machines, the consequences will be profound.
Yet, as ever with algorithms, there are deep concerns about their reliability, especially when we don’t know precisely how they work. State-of-the-art neural networks will confidently—and incorrectly—classify images that look like television static or abstract art as real-world objects like school buses or armadillos. Specific algorithms can be targeted by “adversarial examples,” where adding an imperceptible amount of noise to an image causes an algorithm to completely mistake one object for another. Machine learning experts enjoy constructing these images to trick advanced software, but if a self-driving car could be fooled by a few stickers, it might not be so fun for the passengers.
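The adversarial-example mechanism can be demonstrated on even a toy linear classifier using the fast gradient sign method: nudge every input dimension slightly in the direction that increases the loss, and in high dimensions those tiny per-coordinate nudges add up until the prediction flips. The weights and input here are made up for illustration; real attacks target deep networks the same way via backpropagated gradients:

```python
import numpy as np

# Toy high-dimensional linear classifier: class = sign(w . x).
# Weights and input are synthetic, purely to illustrate the mechanism.
rng = np.random.default_rng(0)
n = 10_000
w = rng.normal(size=n)
x = w / np.linalg.norm(w)  # an input the model classifies confidently as +1

def predict(x):
    return 1 if w @ x > 0 else -1

# Fast gradient sign method: the gradient of the score w.r.t. x is just w,
# so stepping every coordinate by -eps * sign(w) pushes the score down.
# No single coordinate moves more than eps, yet the sum of n tiny steps
# overwhelms the classifier's margin.
eps = 0.02
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))   # the predicted class flips
print(np.max(np.abs(x_adv - x)))    # though no coordinate moved more than eps
```

This is the core of why the attacks are "imperceptible": the perturbation budget per pixel is tiny, but image classifiers operate over hundreds of thousands of pixels at once.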
These difficulties are hard to smooth out in large part because we don’t have a great intuition for how these neural networks “see” and “recognize” objects. Analyzing a trained network directly yields little more than a set of statistical weights associating certain groups of points with certain objects, and those weights can be very difficult to interpret.
Now, new research from UCLA, published in the journal PLOS Computational Biology, is testing neural networks to understand the limits of their vision and the differences between computer vision and human vision. Nicholas Baker, Hongjing Lu, and Philip J. Kellman of UCLA, alongside Gennady Erlikhman of the University of Nevada, tested a deep convolutional neural network called VGG-19. This is state-of-the-art technology that is already outperforming humans on standardized tests like the ImageNet Large Scale Visual Recognition Challenge.
They found that, while humans tend to classify objects based on their overall (global) shape, deep neural networks are far more sensitive to the textures of objects, including local color gradients and the distribution of points on the object. This result helps explain why neural networks in image recognition make mistakes that no human ever would—and could allow for better designs in the future.
In the first experiment, a neural network was trained to sort images into 1 of 1,000 different categories. It was then presented with silhouettes of these images: all of the local information was lost, while only the outline of the object remained. Ordinarily, the trained neural net recognized the original objects with ease, assigning more than 90% probability to the correct classification. On silhouettes, this dropped to 10%. While human observers could nearly always produce correct shape labels, the neural networks appeared almost insensitive to the overall shape of the images. On average, the correct object was ranked as only the 209th most likely solution by the neural network, even though the overall shapes were an exact match.
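The silhouette manipulation itself is simple to reproduce: given a segmentation mask for the object, discard all interior color and texture, leaving only the global shape. A minimal numpy sketch, with a made-up mask standing in for a real segmented photo:

```python
import numpy as np

def to_silhouette(image, mask):
    """Replace an object's pixels with solid black on a white background,
    destroying all texture and color cues while preserving global shape."""
    h, w, _ = image.shape
    silhouette = np.full((h, w, 3), 255, dtype=np.uint8)  # white background
    silhouette[mask] = 0                                   # solid black object
    return silhouette

# Tiny made-up example: a 4x4 "photo" with a 2x2 textured object in the middle.
image = np.random.default_rng(0).integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

sil = to_silhouette(image, mask)
```

Every local cue the network might latch onto (color gradients, texture statistics) is gone; only the outline survives, which is exactly the information humans classify by.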
A particularly striking example arose when they tried to get the neural networks to classify glass figurines of objects they could already recognize. While you or I might find it easy to identify a glass model of an otter or a polar bear, the neural network classified them as “oxygen mask” and “can opener” respectively. By presenting glass figurines, where the texture information that neural networks relied on for classifying objects is lost, the neural network was unable to recognize the objects by shape alone. The neural network was similarly hopeless at classifying objects based on drawings of their outline.
If you got one of these right, you’re better than state-of-the-art image recognition software. Image Credit: Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip J. Kellman. “Deep convolutional networks do not classify based on global object shape.” PLOS Computational Biology. 12/7/18. / CC BY 4.0
When the neural network was explicitly trained to recognize object silhouettes—given no information in the training data aside from the object outlines—the researchers found that slight distortions or “ripples” to the contour of the image were again enough to fool the AI, while humans paid them no mind.
The fact that neural networks seem to be insensitive to the overall shape of an object—relying instead on statistical similarities between local distributions of points—suggests a further experiment. What if you scrambled the images so that the overall shape was lost but local features were preserved? It turns out that the neural networks are far better and faster at recognizing scrambled versions of objects than outlines, even when humans struggle. Students could classify only 37% of the scrambled objects, while the neural network succeeded 83% of the time.
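The scrambling manipulation is the inverse of the silhouette one: cut the image into patches and shuffle them, preserving local texture statistics while destroying global shape. A minimal sketch, assuming the image dimensions divide evenly by the patch size:

```python
import numpy as np

def scramble_patches(image, patch=8, seed=0):
    """Shuffle non-overlapping patches: local texture survives, global shape doesn't."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0, "dims must divide by patch size"
    # Split the image into a flat list of (patch x patch) tiles.
    grid = image.reshape(h // patch, patch, w // patch, patch, c)
    tiles = grid.transpose(0, 2, 1, 3, 4).reshape(-1, patch, patch, c)
    # Shuffle the tiles and reassemble the image.
    rng = np.random.default_rng(seed)
    rng.shuffle(tiles, axis=0)
    grid = tiles.reshape(h // patch, w // patch, patch, patch, c)
    return grid.transpose(0, 2, 1, 3, 4).reshape(h, w, c)

image = np.random.default_rng(1).integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
scrambled = scramble_patches(image)
```

Because every pixel (and every local neighborhood within a patch) is preserved, a texture-driven classifier still has most of what it relies on, while a shape-driven observer, like a human, is left with visual rubble.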
Humans vastly outperform machines at classifying object (a) as a bear, while the machine learning algorithm has few problems classifying the bear in figure (b). Image Credit: Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip J. Kellman. “Deep convolutional networks do not classify based on global object shape.” PLOS Computational Biology. 12/7/18. / CC BY 4.0
“This study shows these systems get the right answer in the images they were trained on without considering shape,” Kellman said. “For humans, overall shape is primary for object recognition, and identifying images by overall shape doesn’t seem to be in these deep learning systems at all.”
Naively, one might expect that—as the many layers of a neural network are modeled on connections between neurons in the brain and resemble the visual cortex specifically—the way computer vision operates must necessarily be similar to human vision. But this kind of research shows that, while the fundamental architecture might resemble that of the human brain, the resulting “mind” operates very differently.
Researchers can, increasingly, observe how the “neurons” in neural networks light up when exposed to stimuli and compare it to how biological systems respond to the same stimuli. Perhaps someday it might be possible to use these comparisons to understand how neural networks are “thinking” and how those responses differ from humans.
But, as yet, probing how neural networks and artificial intelligence algorithms perceive the world takes something closer to experimental psychology. The tests employed against the neural network resemble the way scientists might try to understand the senses of an animal or the developing brain of a young child, rather than the way they would analyze a piece of software.
By combining this experimental psychology with new neural network designs or error-correction techniques, it may be possible to make them even more reliable. Yet this research illustrates just how much we still don’t understand about the algorithms we’re creating and using: how they tick, how they make decisions, and how they’re different from us. As they play an ever-greater role in society, understanding the psychology of neural networks will be crucial if we want to use them wisely and effectively—and not end up missing the woods for the trees.
Image Credit: Irvan Pratama / Shutterstock.com