#431733 Why Humanoid Robots Are Still So Hard to ...
Picture a robot. In all likelihood, you just pictured a sleek metallic or chrome-white humanoid. Yet the vast majority of robots in the world around us are nothing like this; instead, they’re specialized for specific tasks. Our cultural conception of what robots are dates back to the coining of the term “robot” in the Czech play Rossum’s Universal Robots, which originally envisioned them as essentially synthetic humans.
The vision of a humanoid robot is tantalizing. There are constant efforts to create something that looks like the robots of science fiction. Recently, an old competitor in this field returned with a new model: Toyota has released what they call the T-HR3. As humanoid robots go, it appears to be pretty dexterous and have a decent grip, with a number of degrees of freedom making the movements pleasantly human.
This humanoid robot operates mostly via a remote master maneuvering system that allows the user to control the robot’s limbs by exerting different amounts of pressure on a framework. A VR headset completes the picture, giving the operator a first-person view from the robot as they teleoperate the machine. There’s no word on a price tag, but one imagines a machine with a control system this complicated won’t exactly be on your Christmas list, unless you’re a billionaire.
Toyota is no stranger to robotics. They released a series of “Partner Robots” that had a bizarre affinity for instrument-playing but weren’t often seen doing much else. Given that they didn’t seem to have much capability beyond the automaton that Leonardo da Vinci made hundreds of years ago, they promptly vanished. If, as the name suggests, the T-HR3 is a sequel to these robots, which came out shortly after ASIMO back in 2003, it’s substantially better.
Slightly less humanoid (and perhaps the more useful for it), Toyota’s HSR-2 is a robot base on wheels with a simple mechanical arm. It brings to mind earlier machines produced by dream-factory startup Willow Garage like the PR-2. The idea of an affordable robot that could simply move around on wheels and pick up and fetch objects, and didn’t harbor too-lofty ambitions to do anything else, was quite successful.
So much so that when RoboCup, the international robotics competition, looked for a platform for their robot-butler competition @Home, they chose the HSR-2 for its ability to handle objects. HSR-2 has been deployed in trial runs to care for the elderly and injured, but has yet to be widely adopted for these purposes five years after its initial release. It’s telling that arguably the most successful multi-purpose humanoid robot isn’t really humanoid at all—and it’s curious that Toyota now seems to want to return to a more humanoid model a decade after they gave up on the project.
What’s unclear, as is often the case with humanoid robots, is what, precisely, the T-HR3 is actually for. The teleoperation gets around the complex problem of control by simply having the machine controlled remotely by a human. That human then handles all the sensory perception, decision-making, planning, and manipulation; essentially, the hardest problems in robotics.
The T-HR3 may not have a great deal of autonomy, and by sacrificing autonomy, you drastically cut down the uses of the robot. Since it can’t act alone, you need a convincing scenario in which a teleoperated humanoid robot, less precise and vastly more expensive than a person doing the same job, is the right tool. Perhaps someday more autonomy will be developed for the robot, and the master maneuvering system that allows humans to control it will be reserved for emergencies, taking over only if the robot gets stuck.
Toyota’s press release says it is “a platform with capabilities that can safely assist humans in a variety of settings, such as the home, medical facilities, construction sites, disaster-stricken areas and even outer space.” In reality, it’s difficult to see such a robot being affordable or even that useful in the home or in medical facilities (unless it’s substantially stronger than humans). Equally, it certainly doesn’t seem robust enough to be deployed in disaster zones or outer space. These tasks have been mooted for robots for a very long time and few have proved up to the challenge.
Toyota’s third generation humanoid robot, the T-HR3. Image Credit: Toyota
Instead, the robot seems designed to work alongside humans. Its design, standing 1.5 meters tall, weighing 75 kilograms, and possessing 32 degrees of freedom in its body, suggests it is built to closely mimic a person, rather than a robot like ATLAS, which is robust enough that you can imagine it being useful in a war zone. In this case, it might be closer to the model of the collaborative robots, or co-bots, developed by Rethink Robotics, whose many safety features, including force-sensitive feedback for the user, reduce the risk of terrible PR surrounding killer robots.
Instead, the emphasis is on graceful precision engineering: in the promo video, the robot can be seen balancing on one leg before showing off a few poised, yoga-like poses. This perhaps suggests that an application in elderly care, which Toyota has ventured into before and which was the stated aim of their simple HSR-2, might be more likely than deployment to a disaster zone.
The reason humanoid robots remain so elusive and so tempting is probably because of a simple cognitive mistake. We make two bad assumptions. First, we assume that if you build a humanoid robot, give its joints enough flexibility, throw in a little AI and perhaps some pre-programmed behaviors, then presto, it will be able to do everything humans can. When you see a robot that moves well and looks humanoid, it seems like the hardest part is done; surely this robot could do anything. The reality is never so simple.
We also make the reverse assumption: we assume that when we are finally replaced, it will be by perfect replicas of our own bodies and brains that can fulfill all the functions we used to fulfill. Perhaps, in reality, the future of robots and AI is more like its present: piecemeal, with specialized algorithms and specialized machines gradually learning to outperform humans at every conceivable task without ever looking convincingly human.
It may well be that the T-HR3 is angling towards this concept of machine learning as a platform for future research. Rather than trying to program an omni-capable robot out of the box, it will gradually learn from its human controllers. In this way, you could see the platform being used to explore the limits of what humans can teach robots to do simply by having them mimic sequences of our bodies’ motion, in the same way neural networks are testing the limits of what algorithms can learn from training data. No one machine will be able to perform everything a human can, but collectively, they will vastly outperform us at anything you’d want one to do.
So when you see a new android like Toyota’s, feel free to marvel at its technical abilities and indulge in the speculation about whether it’s a PR gimmick or a revolutionary step forward along the road to human replacement. Just remember that, human-level bots or not, we’re already strolling down that road.
Image Credit: Toyota
#431671 The Doctor in the Machine: How AI Is ...
Artificial intelligence has received its fair share of hype recently. However, it’s hype that’s well-founded: IDC predicts worldwide spending on AI and cognitive computing will reach a whopping $46 billion (with a “b”) by 2020, and all the tech giants are jumping on board faster than you can say “ROI.” But what is AI, exactly?
According to Hilary Mason, AI today is being misused as a sort of catch-all term to basically describe “any system that uses data to do anything.” But it’s so much more than that. A truly artificially intelligent system is one that learns on its own, one that’s capable of crunching copious amounts of data in order to create associations and intelligently mimic actual human behavior.
It’s what powers the technology anticipating our next online purchase (Amazon), or the virtual assistant that deciphers our voice commands with incredible accuracy (Siri), or even the hipster-friendly recommendation engine that helps you discover new music before your friends do (Pandora). But AI is moving past these consumer-pleasing “nice-to-haves” and getting down to serious business: saving our butts.
Much in the same way robotics entered manufacturing, AI is making its mark in healthcare by automating mundane, repetitive tasks. This is especially true in the case of detecting cancer. By leveraging the power of deep learning, algorithms can now be trained to distinguish between sets of pixels in an image that represent cancer and sets that don’t—not unlike how Facebook’s image recognition software tags pictures of our friends without us having to type in their names first. This software can then go ahead and scour millions of medical images (MRIs, CT scans, etc.) in a single day to detect anomalies on a scale that humans just aren’t capable of. That’s huge.
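To make the idea concrete, here is a minimal sketch, in PyTorch, of the kind of image classifier described above: a small convolutional network that labels a scan patch as suspicious or not. The patch size, layer sizes, and class labels are illustrative assumptions rather than a description of any deployed diagnostic system; a real model would be trained on large volumes of expert-labeled scans.

```python
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Toy convolutional classifier for small grayscale scan patches."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Two illustrative output classes: "not suspicious" vs. "suspicious".
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        x = self.features(x)              # for a 64x64 input: (batch, 32, 16, 16)
        return self.classifier(x.flatten(1))

model = PatchClassifier()
dummy_scan = torch.randn(1, 1, 64, 64)    # one grayscale 64x64 patch, a stand-in for a scan crop
print(model(dummy_scan).shape)            # torch.Size([1, 2]) -> scores for the two classes
```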
As if that wasn’t enough, these algorithms are constantly learning and evolving, getting better at making these associations with each new data set that gets fed to them. Radiology, dermatology, and pathology will experience a giant upheaval as tech giants and startups alike jump in to bring these deep learning algorithms to a hospital near you.
In fact, some already are: the FDA recently gave their seal of approval for an AI-powered medical imaging platform that helps doctors analyze and diagnose heart anomalies. This is the first time the FDA has approved a machine learning application for use in a clinical setting.
But how efficient is AI compared to humans, really? Well, aside from the obvious fact that software programs don’t get bored or distracted or have to check Facebook every twenty minutes, AI is exponentially better than us at analyzing data.
Take, for example, IBM’s Watson. Watson analyzed genomic data from both tumor cells and healthy cells and was ultimately able to glean actionable insights in a mere 10 minutes. Compare that to the 160 hours it would have taken a human to analyze that same data. Diagnoses aside, AI is also being leveraged in pharmaceuticals to aid in the very time-consuming grunt work of discovering new drugs, and all the big players are getting involved.
But AI is far from being just a behind-the-scenes player. Gartner recently predicted that by 2025, 50 percent of the population will rely on AI-powered “virtual personal health assistants” for their routine primary care needs. What this means is that consumer-facing voice and chat-operated “assistants” (think Siri or Cortana) would, in effect, serve as a central hub of interaction for all our connected health devices and the algorithms crunching all our real-time biometric data. These assistants would keep us apprised of our current state of well-being, acting as a sort of digital facilitator for our personal health objectives and an always-on health alert system that would notify us when we actually need to see a physician.
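As a rough illustration of that “always-on health alert” idea, here is a toy Python sketch that checks incoming biometric readings against simple cutoffs and flags when it might be time to see a physician. The field names and thresholds are made-up assumptions for illustration only, not clinical guidance; a real assistant would learn a personal baseline rather than rely on fixed numbers.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    resting_heart_rate: float  # beats per minute, e.g. from a wearable
    systolic_bp: float         # mmHg, e.g. from a connected cuff

def needs_physician(reading: Reading) -> bool:
    """Flag readings outside illustrative (made-up) thresholds."""
    return reading.resting_heart_rate > 100 or reading.systolic_bp > 140

readings = [Reading(72, 118), Reading(104, 150)]   # pretend data from connected devices
alerts = [r for r in readings if needs_physician(r)]
print(f"{len(alerts)} reading(s) suggest checking in with a physician")
```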
Slowly, and thanks to the tsunami of data and advancements in self-learning algorithms, healthcare is transitioning from a reactive model to more of a preventative model—and it’s completely upending the way care is delivered. Whether Elon Musk’s dystopian outlook on AI holds any weight or not is yet to be determined. But one thing’s certain: for the time being, artificial intelligence is saving our lives.
Image Credit: Jolygon / Shutterstock.com
#431603 What We Can Learn From the Second Life ...
For every new piece of technology that gets developed, you can usually find people saying it will never be useful. The president of the Michigan Savings Bank in 1903, for example, said, “The horse is here to stay but the automobile is only a novelty—a fad.” It’s equally easy to find people raving about whichever new technology is at the peak of the Gartner Hype Cycle, which tracks the buzz around these newest developments and attempts to temper predictions. When technologies emerge, there are all kinds of uncertainties, from the actual capacity of the technology to its use cases in real life to the price tag.
Eventually the dust settles, and some technologies get widely adopted, to the extent that they can become “invisible”; people take them for granted. Others fall by the wayside as gimmicky fads or impractical ideas. Picking which horses to back is the difference between Silicon Valley millions and Betamax pub-quiz-question obscurity. For a while, it seemed that Google had—for once—backed the wrong horse.
Google Glass emerged from Google X, the ubiquitous tech giant’s much-hyped moonshot factory, where highly secretive researchers work on the sci-fi technologies of the future. Self-driving cars and artificial intelligence are at the more mundane end for an organization that apparently once looked into jetpacks and teleportation.
Google began selling Google Glass, the original smart glasses, in 2013 for $1,500, as prototypes for its acolytes: around 8,000 early adopters. Users could control the glasses with a touchpad or, after tilting the head back to activate them, with voice commands. Audio is relayed—as with several wearable products—via bone conduction, which transmits sound by vibrating the bones of the user’s skull. This was going to usher in the age of augmented reality, the next best thing to having a chip implanted directly into your brain.
On the surface, it seemed to be a reasonable proposition. People had dreamed about augmented reality for a long time—an onboard, JARVIS-style computer giving you extra information and instant access to communications without even having to touch a button. After smartphone ubiquity, it looked like a natural step forward.
Instead, there was a backlash. People may be willing to give their data up to corporations, but they’re less pleased with the idea that someone might be filming them in public. The worst aspect of smartphones is trying to talk to people who are distractedly scrolling through their phones. There’s a famous analogy in Revolutionary Road about an old couple’s loveless marriage: the husband tunes out his wife’s conversation by turning his hearing aid down to zero. To many, Google Glass seemed to provide us with a whole new way to ignore each other in favor of our Twitter feeds.
Then there’s the fact that, whether because we’re not used to them or for more permanent reasons, people wearing AR tech often look very silly. Put all this together with a lack of early functionality, the high price (do you really feel comfortable wearing a $1,500 computer?), and a killer pun for the users—Glassholes—and the final recipe wasn’t great for Google.
Google Glass was quietly dropped from sale in 2015, with an ominous slogan posted on Google’s website: “Thanks for exploring with us.” Reminding Glass users that they had always been referred to as “explorers”—beta testers of a product, in many ways—it perhaps signaled less enthusiasm for wearables than the original Google Glass skydive might have suggested.
In reality, Google went back to the drawing board. Not with the technology per se, although it has improved in the intervening years, but with the uses behind the technology.
Under what circumstances would you actually need a Google Glass? When would it genuinely be preferable to a smartphone that can do many of the same things and more? Beyond simply being a fashion item, which Google Glass decidedly was not, even the most tech-evangelical of us need a convincing reason to splash $1,500 on a wearable computer that’s less socially acceptable and less easy to use than the machine you’re probably reading this on right now.
Enter the Google Glass Enterprise Edition.
Piloted in factories during the years the consumer version lay dormant, Google Glass roared back to life as a commercially available product, with the relaunch getting under way in earnest in July of 2017. The difference this time was the specific audience: workers in factories who need hands-free computing because they need to use their hands at the same time.
In this niche application, wearable computers can become invaluable. A new employee can be trained with pre-programmed material that explains how to perform actions in real time, while instructions can be relayed straight into a worker’s eyeline without them needing to check a phone or switch to email.
Medical devices have long been a dream application for Google Glass. You can imagine a situation where people receive real-time information during surgery, or are augmented by artificial intelligence that provides additional diagnostic information or questions in response to a patient’s symptoms. The quest to develop a healthcare AI, which can provide recommendations in response to natural language queries, is on. The famously untidy doctor’s handwriting—and the associated death toll—could be avoided if the glasses could take dictation straight into a patient’s medical records. All of this is far more useful than allowing people to check Facebook hands-free while they’re riding the subway.
Google’s “Lens” application indicates another use for Google Glass that hadn’t quite matured when the original was launched: the Lens processes images and provides information about them. You can look at text and have it translated in real time, or look at a building or sign and receive additional information. Image processing, either through neural networks hooked up to a cloud database or some other means, is the frontier that enables driverless cars and similar technology to exist. Hook this up to a voice-activated assistant relaying information to the user, and you have your killer application: real-time annotation of the world around you. It’s this functionality that just wasn’t ready yet when Google launched Glass.
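For a sense of what such an annotation pipeline looks like under the hood, here is a minimal sketch assuming the pytesseract and Pillow libraries (plus a local Tesseract install): it extracts any text it finds in an image and hands it to a translation step. The translate_text function is a hypothetical placeholder for whatever translation backend you would actually call; this is not how Google Lens itself is implemented.

```python
from PIL import Image
import pytesseract

def translate_text(text: str, target_lang: str = "en") -> str:
    # Hypothetical placeholder: swap in a real translation service here.
    raise NotImplementedError("plug in a translation backend")

def annotate_image(path: str) -> str:
    """Pull any text out of an image and return an English rendering of it."""
    image = Image.open(path)
    extracted = pytesseract.image_to_string(image)   # OCR step
    if not extracted.strip():
        return "(no text found in image)"
    return translate_text(extracted)

# Example usage (the file path is hypothetical):
# print(annotate_image("street_sign.jpg"))
```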
Amazon’s recent announcement that they want to integrate Alexa into a range of smart glasses indicates that the tech giants aren’t ready to give up on wearables yet. Perhaps, in time, people will become used to voice activation and interaction with their machines, at which point smart glasses with bone conduction will genuinely be more convenient than a smartphone.
But in many ways, the real lesson from the initial failure—and promising second life—of Google Glass is a simple question that developers of any smart technology, from the Internet of Things through to wearable computers, must answer. “What can this do that my smartphone can’t?” Find your answer, as the Enterprise Edition did, as Lens might, and you find your product.
Image Credit: Hattanas / Shutterstock.com
#431599 8 Ways AI Will Transform Our Cities by ...
How will AI shape the average North American city by 2030? A panel of experts assembled as part of a century-long study into the impact of AI thinks its effects will be profound.
The One Hundred Year Study on Artificial Intelligence is the brainchild of Eric Horvitz, technical fellow and a managing director at Microsoft Research.
Every five years a panel of experts will assess the current state of AI and its future directions. The first panel, comprised of experts in AI, law, political science, policy, and economics, was launched last fall and decided to frame their report around the impact AI will have on the average American city. Here’s how they think it will affect eight key domains of city life in the next fifteen years.
1. Transportation
The speed of the transition to AI-guided transport may catch the public by surprise. Self-driving vehicles will be widely adopted by 2020, and it won’t just be cars — driverless delivery trucks, autonomous delivery drones, and personal robots will also be commonplace.
Uber-style “cars as a service” are likely to replace car ownership, which may displace public transport or see it transition towards similar on-demand approaches. Commutes will become a time to relax or work productively, encouraging people to live farther from work, which could combine with reduced need for parking to drastically change the face of modern cities.
Mountains of data from increasing numbers of sensors will allow administrators to model individuals’ movements, preferences, and goals, which could have a major impact on the design of city infrastructure.
Humans won’t be out of the loop, though. Algorithms that allow machines to learn from human input and coordinate with them will be crucial to ensuring autonomous transport operates smoothly. Getting this right will be key as this will be the public’s first experience with physically embodied AI systems and will strongly influence public perception.
2. Home and Service Robots
Robots that do things like deliver packages and clean offices will become much more common in the next 15 years. Mobile chipmakers are already squeezing the power of last century’s supercomputers into systems-on-a-chip, drastically boosting robots’ on-board computing capacity.
Cloud-connected robots will be able to share data to accelerate learning. Low-cost 3D sensors like Microsoft’s Kinect will speed the development of perceptual technology, while advances in speech comprehension will enhance robots’ interactions with humans. Robot arms in research labs today are likely to evolve into consumer devices around 2025.
But the cost and complexity of reliable hardware and the difficulty of implementing perceptual algorithms in the real world mean general-purpose robots are still some way off. Robots are likely to remain constrained to narrow commercial applications for the foreseeable future.
3. Healthcare
AI’s impact on healthcare in the next 15 years will depend more on regulation than technology. The most transformative possibilities of AI in healthcare require access to data, but the FDA has failed to find solutions to the difficult problem of balancing privacy and access to data. Implementation of electronic health records has also been poor.
If these hurdles can be cleared, AI could automate the legwork of diagnostics by mining patient records and the scientific literature. This kind of digital assistant could allow doctors to focus on the human dimensions of care while using their intuition and experience to guide the process.
At the population level, data from patient records, wearables, mobile apps, and personal genome sequencing will make personalized medicine a reality. While fully automated radiology is unlikely, access to huge datasets of medical imaging will enable training of machine learning algorithms that can “triage” or check scans, reducing the workload of doctors.
Intelligent walkers, wheelchairs, and exoskeletons will help keep the elderly active while smart home technology will be able to support and monitor them to keep them independent. Robots may begin to enter hospitals carrying out simple tasks like delivering goods to the right room or doing sutures once the needle is correctly placed, but these tasks will only be semi-automated and will require collaboration between humans and robots.
4. Education
The line between the classroom and individual learning will be blurred by 2030. Massive open online courses (MOOCs) will interact with intelligent tutors and other AI technologies to allow personalized education at scale. Computer-based learning won’t replace the classroom, but online tools will help students learn at their own pace using techniques that work for them.
AI-enabled education systems will learn individuals’ preferences, but by aggregating this data they’ll also accelerate education research and the development of new tools. Online teaching will increasingly widen educational access, making learning lifelong, enabling people to retrain, and increasing access to top-quality education in developing countries.
Sophisticated virtual reality will allow students to immerse themselves in historical and fictional worlds or explore environments and scientific objects difficult to engage with in the real world. Digital reading devices will become much smarter too, linking to supplementary information and translating between languages.
5. Low-Resource Communities
In contrast to the dystopian visions of sci-fi, by 2030 AI will help improve life for the poorest members of society. Predictive analytics will let government agencies better allocate limited resources by helping them forecast environmental hazards or building code violations. AI planning could help distribute excess food from restaurants to food banks and shelters before it spoils.
Investment in these areas is lacking, though, so how quickly these capabilities will appear is uncertain. There are fears that machine learning tools could inadvertently discriminate by using variables that correlate with race or gender, or surrogate factors like zip codes. But AI programs are easier to hold accountable than humans, so they’re more likely to help weed out discrimination.
6. Public Safety and Security
By 2030 cities are likely to rely heavily on AI technologies to detect and predict crime. Automatic processing of CCTV and drone footage will make it possible to rapidly spot anomalous behavior. This will not only allow law enforcement to react quickly but also forecast when and where crimes will be committed. Fears that bias and error could lead to people being unduly targeted are justified, but well-thought-out systems could actually counteract human bias and highlight police malpractice.
Techniques like speech and gait analysis could help interrogators and security guards detect suspicious behavior. Contrary to concerns about overly pervasive law enforcement, AI is likely to make policing more targeted and therefore less overbearing.
7. Employment and Workplace
The effects of AI will be felt most profoundly in the workplace. By 2030 AI will be encroaching on skilled professionals like lawyers, financial advisers, and radiologists. As it becomes capable of taking on more roles, organizations will be able to scale rapidly with relatively small workforces.
AI is more likely to replace tasks rather than jobs in the near term, and it will also create new jobs and markets, even if it’s hard to imagine what those will be right now. While it may reduce incomes and job prospects, increasing automation will also lower the cost of goods and services, effectively making everyone richer.
These structural shifts in the economy will require political rather than purely economic responses to ensure these riches are shared. In the short run, this may include resources being pumped into education and re-training, but longer term may require a far more comprehensive social safety net or radical approaches like a guaranteed basic income.
8. Entertainment
Entertainment in 2030 will be interactive, personalized, and immeasurably more engaging than today. Breakthroughs in sensors and hardware will see virtual reality, haptics, and companion robots increasingly enter the home. Users will be able to interact with entertainment systems conversationally, and the systems will show emotion, empathy, and the ability to adapt to environmental cues like the time of day.
Social networks already allow personalized entertainment channels, but the reams of data being collected on usage patterns and preferences will allow media providers to personalize entertainment to unprecedented levels. There are concerns this could endow media conglomerates with unprecedented control over people’s online experiences and the ideas to which they are exposed.
But advances in AI will also make creating your own entertainment far easier and more engaging, whether by helping to compose music or choreograph dances using an avatar. With the production of high-quality entertainment democratized, it’s nearly impossible to predict how humans’ highly fluid tastes in entertainment will develop.
Image Credit: Asgord / Shutterstock.com
#431559 Drug Discovery AI to Scour a Universe of ...
On a dark night, away from city lights, the stars of the Milky Way can seem uncountable. Yet from any given location no more than 4,500 are visible to the naked eye. Meanwhile, our galaxy has 100–400 billion stars, and there are even more galaxies in the universe.
The numbers of the night sky are humbling. And they give us a deep perspective…on drugs.
Yes, this includes wow-the-stars-are-freaking-amazing-tonight drugs, but also the kinds of drugs that make us well again when we’re sick. The number of possible organic compounds with “drug-like” properties dwarfs the number of stars in the universe by over 30 orders of magnitude.
Next to this multiverse of possibility, the chemical configurations scientists have made into actual medicines are like the smattering of stars you’d glimpse downtown.
But for good reason.
Exploring all that potential drug-space is as humanly impossible as exploring all of physical space, and even if we could, most of what we’d find wouldn’t fit our purposes. Still, the idea that wonder drugs must surely lurk amid the multitudes is too tantalizing to ignore.
Which is why, Alex Zhavoronkov said at Singularity University’s Exponential Medicine in San Diego last week, we should use artificial intelligence to do more of the legwork and speed discovery. This, he said, could be one of the next big medical applications for AI.
Dogs, Diagnosis, and Drugs
Zhavoronkov is CEO of Insilico Medicine and CSO of the Biogerontology Research Foundation. Insilico is one of a number of AI startups aiming to accelerate drug discovery with AI.
In recent years, Zhavoronkov said, the now-famous machine learning technique, deep learning, has made progress on a number of fronts. Algorithms that can teach themselves to play games—like DeepMind’s AlphaGo Zero or Carnegie Mellon’s poker playing AI—are perhaps the most headline-grabbing of the bunch. But pattern recognition was the thing that kicked deep learning into overdrive early on, when machine learning algorithms went from struggling to tell dogs and cats apart to outperforming their peers and then their makers in quick succession.
In medicine, deep learning algorithms trained on databases of medical images can spot life-threatening disease with equal or greater accuracy than human professionals. There’s even speculation that AI, if we learn to trust it, could be invaluable in diagnosing disease. And, as Zhavoronkov noted, with more applications and a longer track record that trust is coming.
“Tesla is already putting cars on the street,” Zhavoronkov said. “Three-year, four-year-old technology is already carrying passengers from point A to point B, at 100 miles an hour, and one mistake and you’re dead. But people are trusting their lives to this technology.”
“So, why don’t we do it in pharma?”
Trial and Error and Try Again
AI wouldn’t drive the car in pharmaceutical research. It’d be an assistant that, when paired with a chemist or two, could fast-track discovery by screening more possibilities for better candidates.
There’s plenty of room to make things more efficient, according to Zhavoronkov.
Drug discovery is arduous and expensive. Chemists sift tens of thousands of candidate compounds for the most promising to synthesize. Of these, a handful will go on to further research, fewer will make it to human clinical trials, and a fraction of those will be approved.
The whole process can take many years and cost hundreds of millions of dollars.
This is a big data problem if ever there was one, and deep learning thrives on big data. Early applications have shown their worth unearthing subtle patterns in huge training databases. Although drug-makers already use software to sift compounds, such software requires explicit rules written by chemists. AI’s allure is its ability to learn and improve on its own.
“There are two strategies for AI-driven innovation in pharma to ensure you get better molecules and much faster approvals,” Zhavoronkov said. “One is looking for the needle in the haystack, and another one is creating a new needle.”
To find the needle in the haystack, algorithms are trained on large databases of molecules. Then they go looking for molecules with attractive properties. But creating a new needle? That’s a possibility enabled by the generative adversarial networks Zhavoronkov specializes in.
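As a rough sketch of that needle-in-the-haystack strategy, the Python snippet below trains a property predictor on a set of labeled “molecules” and then scores a much larger candidate pool, keeping the top hits. The random bit-vector fingerprints and activity values are stand-ins for illustration only; a real pipeline would compute chemical fingerprints from actual structures (with a toolkit such as RDKit) and train on measured assay data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Pretend training set: 1,000 "molecules" as 128-bit fingerprints with known activity scores.
X_train = rng.integers(0, 2, size=(1000, 128))
y_train = rng.random(1000)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)

# Score a much larger candidate library and keep the 10 most promising for synthesis.
candidates = rng.integers(0, 2, size=(100_000, 128))
scores = model.predict(candidates)
top_hits = np.argsort(scores)[-10:][::-1]
print("candidate indices to hand to the chemists:", top_hits)
```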
Such algorithms pit two neural networks against each other. One generates meaningful output while the other judges whether this output is true or false, Zhavoronkov said. Together, the networks generate new objects like text, images, or in this case, molecular structures.
“We started employing this particular technology to make deep neural networks imagine new molecules, to make it perfect right from the start. So, to come up with really perfect needles,” Zhavoronkov said. “[You] can essentially go to this [generative adversarial network] and ask it to create molecules that inhibit protein X at concentration Y, with the highest viability, specific characteristics, and minimal side effects.”
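Here is a bare-bones sketch of that adversarial setup: a generator proposes molecule-like vectors, a discriminator judges them against real examples, and the two networks are trained against each other. The 64-dimensional vectors stand in for molecular representations purely for illustration; Insilico’s actual models, which work with real chemical representations and property constraints, are far more involved.

```python
import torch
import torch.nn as nn

latent_dim, mol_dim, batch = 16, 64, 256

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, mol_dim))
discriminator = nn.Sequential(nn.Linear(mol_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

real_molecules = torch.randn(batch, mol_dim)  # placeholder for vectors derived from real molecules

for step in range(100):
    # Discriminator step: label real examples 1 and generated examples 0.
    fake = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_molecules), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call its output real.
    fake = generator(torch.randn(batch, latent_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("final generator loss:", float(g_loss))
```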
Zhavoronkov believes AI can find or fabricate more needles from the array of molecular possibilities, freeing human chemists to focus on synthesizing only the most promising. If it works, he hopes we can increase hits, minimize misses, and generally speed the process up.
Proof’s in the Pudding
Insilico isn’t alone on its drug-discovery quest, nor is it a brand new area of interest.
Last year, a Harvard group published a paper on an AI that similarly suggests drug candidates. The software trained on 250,000 drug-like molecules and used its experience to generate new molecules that blended existing drugs and made suggestions based on desired properties.
An MIT Technology Review article on the subject highlighted a few of the challenges such systems may still face. The results returned aren’t always meaningful or easy to synthesize in the lab, and the quality of these results, as always, is only as good as the data dined upon.
Stanford chemistry professor and Andreessen Horowitz partner Vijay Pande said that images, speech, and text—three of the areas in which deep learning has made quick strides—have better, cleaner data. Chemical data, on the other hand, is still being optimized for deep learning. Also, while there are public databases, much data still lives behind closed doors at private companies.
To overcome the challenges and prove their worth, Zhavoronkov said, his company is very focused on validating the tech. But this year, skepticism in the pharmaceutical industry seems to be easing into interest and investment.
AI drug discovery startup Exscientia inked a deal with Sanofi for $280 million and GlaxoSmithKline for $42 million. Insilico is also partnering with GlaxoSmithKline, and Numerate is working with Takeda Pharmaceutical. Even Google may jump in. According to an article in Nature outlining the field, the firm’s deep learning project, Google Brain, is growing its biosciences team, and industry watchers wouldn’t be surprised to see them target drug discovery.
With AI and the hardware running it advancing rapidly, the greatest potential may yet be ahead. Perhaps, one day, all 10⁶⁰ molecules in drug-space will be at our disposal. “You should take all the data you have, build n new models, and search as much of that 10⁶⁰ as possible” before every decision you make, Brandon Allgood, CTO at Numerate, told Nature.
Today’s projects need to live up to their promises, of course, but Zhavoronkov believes AI will have a big impact in the coming years, and now’s the time to integrate it. “If you are working for a pharma company, and you’re still thinking, ‘Okay, where is the proof?’ Once there is a proof, and once you can see it to believe it—it’s going to be too late,” he said.
Image Credit: Klavdiya Krinichnaya / Shutterstock.com