Tag Archives: model
#435575 How an AI Startup Designed a Drug ...
Discovering a new drug can take decades, billions of dollars, and countless person-hours from some of the smartest people on the planet. Now a startup says it has taken a significant step toward speeding the process up using AI.
The typical drug discovery process involves carrying out physical tests on enormous libraries of molecules, and even with the help of robotics it’s an arduous process. The idea of sidestepping this by using computers to virtually screen for promising candidates has been around for decades. But progress has been underwhelming, and it’s still not a major part of commercial pipelines.
Recent advances in deep learning, however, have reignited hopes for the field, and major pharma companies have started tying up with AI-powered drug discovery startups. And now Insilico Medicine has used AI to design a molecule that effectively targets a protein involved in fibrosis—the formation of excess fibrous tissue—in mice in just 46 days.
The platform the company has developed combines two of the hottest sub-fields of AI: generative adversarial networks, or GANs, which power deepfakes, and reinforcement learning, which is at the heart of the most impressive game-playing AI advances of recent years.
In a paper in Nature Biotechnology, the company’s researchers describe how they trained their model on all the molecules already known to target this protein as well as many other active molecules from various datasets. The model was then used to generate 30,000 candidate molecules.
Unlike most previous efforts, they went a step further and selected the most promising molecules for testing in the lab. The 30,000 candidates were whittled down to just six using more conventional drug discovery approaches and were then synthesized. They were put through increasingly stringent tests, and the leading candidate was found to effectively target the desired protein while behaving as one would hope a drug would.
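The generate-then-filter funnel described above can be sketched as a toy loop. This is purely illustrative: the property names and cutoffs below are hypothetical stand-ins for the conventional filters (drug-likeness, synthesizability, predicted binding) that a real pipeline would apply, not Insilico's actual method.

```python
import random

random.seed(0)

# Toy stand-in for a generative model: each "molecule" is just a dict of
# random property scores rather than a real chemical structure.
def generate_candidates(n):
    return [{"id": i,
             "drug_likeness": random.random(),
             "synthesizability": random.random(),
             "predicted_binding": random.random()}
            for i in range(n)]

def funnel(candidates, top_k=6):
    # Mimic conventional filtering: hard cutoffs first, then rank the
    # survivors by predicted binding and keep only the best top_k.
    survivors = [m for m in candidates
                 if m["drug_likeness"] > 0.5 and m["synthesizability"] > 0.5]
    survivors.sort(key=lambda m: m["predicted_binding"], reverse=True)
    return survivors[:top_k]

candidates = generate_candidates(30_000)
shortlist = funnel(candidates)
print(len(candidates), "generated ->", len(shortlist), "sent to the lab")
```

The key design point the article highlights is the funnel shape itself: cheap virtual generation of tens of thousands of candidates, with expensive lab synthesis reserved for a handful of survivors.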
The authors are clear that the results are just a proof-of-concept, which company CEO Alex Zhavoronkov told Wired stemmed from a challenge set by a pharma partner to design a drug as quickly as possible. But they say they were able to carry out the process faster than traditional methods for a fraction of the cost.
There are some caveats. For a start, the protein being targeted is already very well known and multiple effective drugs exist for it. That gave the company a wealth of data to train their model on, something that isn’t the case for many of the diseases where we urgently need new drugs.
The company’s platform also only targets the very initial stages of the drug discovery process. The authors concede in their paper that the molecules would still take considerable optimization in the lab before they’d be true contenders for clinical trials.
“And that is where you will start to begin to commence to spend the vast piles of money that you will eventually go through in trying to get a drug to market,” writes Derek Lowe in his blog In The Pipeline. The part of the discovery process that the platform tackles represents a tiny fraction of the total cost of drug development, he says.
Nonetheless, the research is a definite advance for virtual screening technology and an important marker of the potential of AI for designing new medicines. Zhavoronkov also told Wired that this research was done more than a year ago, and they’ve since adapted the platform to go after harder drug targets with less data.
And with big pharma companies desperate to slash their ballooning development costs and find treatments for a host of intractable diseases, they can use all the help they can get.
Image Credit: freestocks.org / Unsplash
#435260 How Tech Can Help Curb Emissions by ...
Trees are a low-tech, high-efficiency way to offset much of humankind’s negative impact on the climate. What’s even better, we have plenty of room for a lot more of them.
A new study conducted by researchers at Switzerland’s ETH-Zürich, published in Science, details how Earth could support almost an additional billion hectares of trees without the new forests pushing into existing urban or agricultural areas. Once the trees grow to maturity, they could store more than 200 billion metric tons of carbon.
Great news indeed, but it still leaves us with some huge unanswered questions. Where and how are we going to plant all the new trees? What kind of trees should we plant? How can we ensure that the new forests become a boon for people in those areas?
Answers to all of the above likely involve technology.
Math + Trees = Challenges
The ETH-Zürich research team combined Google Earth mapping software with a database of nearly 80,000 existing forests to create a predictive model for optimal planting locations. In total, 0.9 billion hectares of new, continuous forest could be planted. Once mature, the 500 billion new trees in these forests would be capable of storing about two-thirds of the carbon we have emitted since the industrial revolution.
Other researchers have noted that the study may overestimate how efficient trees are at storing carbon, as well as underestimate how much carbon humans have emitted over time. However, all seem to agree that new forests would offset much of our cumulative carbon emissions—still an impressive feat as the target of keeping global warming this century at under 1.5 degrees Celsius becomes harder and harder to reach.
Recently, there was a story about a Brazilian couple who replanted trees in the valley where they live. The couple planted about 2.7 million trees in two decades. Back-of-the-napkin math shows that they planted an average of 370 trees a day, meaning a single person planting 500 billion trees at that rate would need about 3.7 million years. While an over-simplification, the point stands: planting trees by hand is not realistic. Even a million people planting 370 trees a day each would need nearly four years of uninterrupted work. Current technologies are also not likely to be able to meet the challenge, especially in remote locations.
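The back-of-the-napkin math above can be reproduced in a few lines. The 500 billion total and the couple's figures come from the article; everything else is arithmetic.

```python
trees_total = 500e9          # new trees across the study's 0.9B hectares
couple_trees = 2.7e6         # trees the Brazilian couple planted
couple_days = 20 * 365       # two decades of planting

rate_per_day = couple_trees / couple_days             # ~370 trees/day
years_one_planter = trees_total / rate_per_day / 365  # one person, alone

planters = 1_000_000
years_million = trees_total / (planters * rate_per_day) / 365

print(f"{rate_per_day:.0f} trees/day")
print(f"one planter: {years_one_planter / 1e6:.1f} million years")
print(f"a million planters: {years_million:.1f} years")
```

Even the million-planter scenario assumes every seedling survives and every site is reachable, which is exactly where the remote-location caveat bites.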
Tree-Bombing Drones
Technology can speed up the planting process, including a new generation of drones that take tree planting to the skies. Drone planting generally involves dropping biodegradable seed pods at a designated area. The pods dissolve over time, and the tree seeds grow in the earth below. DroneSeed is one example; its 55-pound drones can plant up to 800 seeds an hour. Another startup, Biocarbon Engineering, has used various techniques, including drones, to plant 38 different species of trees across three continents.
Drone planting has distinct advantages when it comes to planting in hard-to-access areas—one example is mangrove forests, which are disappearing rapidly, increasing the risk of floods and storm surges.
Challenges include increasing the range and speed of drone planting and, perhaps most importantly, the success rate, as planting from the air offers less control over the depth at which seeds end up in the soil. However, drones are already showing impressive sapling survival rates.
AI, Sensors, and Eye-In-the-Sky
Planting the trees is the first step in a long road toward an actual forest. Companies are leveraging artificial intelligence and satellite imagery in a multitude of ways to increase protection and understanding of forested areas.
20tree.ai, a Portugal-based startup, uses AI to analyze satellite imagery and monitor the state of entire forests at a fraction of the cost of manual monitoring. The approach can lead to faster identification of threats like pest infestation and a better understanding of the state of forests.
AI can also play a pivotal role in protecting existing forest areas by predicting where deforestation is likely to occur.
Closer to the ground—and sometimes in it—new networks of sensors can provide detailed information about the state and needs of trees. One such project is Trace, where individual trees are equipped with a TreeTalker, an internet of things-based device that can provide real-time monitoring of the tree’s functions and well-being. The information can be used to, among other things, optimize the use of available resources, such as providing the exact amount of water a tree needs.
Budding Technologies Are Controversial
Trees are in many ways flora's marathon runners: slow-growing and sturdy, but still susceptible to sickness and pests. Many deforested areas are likely not as rich in nutrients as they once were, which could slow down reforestation. Much of the positive impact that new trees could have on carbon levels in the atmosphere is likely decades away.
Bioengineering, for example through CRISPR, could provide solutions, making trees more resistant and faster-growing. Such technologies are being explored in relation to Ghana’s at-risk cocoa trees. Other exponential technologies could also hold much future potential—for instance micro-robots to assist the dwindling number of bees with pollination.
These technologies remain mired in controversy, and perhaps rightfully so. Bioengineering's massive potential is for many offset by the inherent risks of engineered plants out-competing existing flora or growing beyond our control. Micro-robots for pollination may solve a problem, but don't do much to address the root cause: that we seem to be disrupting and destroying integral parts of natural cycles.
Tech Not The Whole Answer
So, is it realistic to plant 500 billion new trees? The short answer would be that yes, it’s possible—with the help of technology.
However, there are many unanswered challenges. For example, many of the areas identified by the ETH-Zürich research team are not readily available for reforestation. Some are currently reserved for grazing, others are owned by private entities, and still others are located in remote areas or regions prone to political instability, beyond the reach of most replanting efforts.
If we do wish to plant 500 billion trees to offset some of the negative impacts we have had on the planet, we might well want to combine the best of exponential technology with reforestation, alongside a shift to other forms of agriculture.
Such an approach might also help address a major issue: that few of the proposed new forests will likely succeed without ensuring that people living in and around the areas where reforestation takes place become involved, and can reap rewards from turning arable land into forests.
Image Credit: Lillac/Shutterstock.com
#435161 Less Like Us: An Alternate Theory of ...
The question of whether an artificial general intelligence will be developed in the future—and, if so, when it might arrive—is controversial. One (very uncertain) estimate suggests 2070 might be the earliest we could expect to see such technology.
Some futurists point to Moore’s Law and the increasing capacity of machine learning algorithms to suggest that a more general breakthrough is just around the corner. Others suggest that extrapolating exponential improvements in hardware is unwise, and that creating narrow algorithms that can beat humans at specialized tasks brings us no closer to a “general intelligence.”
But evolution has produced minds like the human mind at least once. Surely we could create artificial intelligence simply by copying nature, either by guided evolution of simple algorithms or wholesale emulation of the human brain.
Both of these ideas are far easier to conceive of than they are to achieve. Even emulating the 302 neurons of the nematode worm's nervous system remains an extremely difficult engineering challenge, let alone the 86 billion neurons of a human brain.
Leaving aside these caveats, though, many people are worried about artificial general intelligence. Nick Bostrom’s influential book on superintelligence imagines it will be an agent—an intelligence with a specific goal. Once such an agent reaches a human level of intelligence, it will improve itself—increasingly rapidly as it gets smarter—in pursuit of whatever goal it has, and this “recursive self-improvement” will lead it to become superintelligent.
This “intelligence explosion” could catch humans off guard. If the initial goal is poorly specified or malicious, or if adequate safety features are not in place, or if the AI decides it would prefer to do something else instead, humans may be unable to control our own creation. Bostrom gives examples of how a seemingly innocuous goal, such as “Make everyone happy,” could be misinterpreted; perhaps the AI decides to drug humanity into a happy stupor, or convert most of the world into computing infrastructure to pursue its goal.
Drexler and Comprehensive AI Services
These are increasingly familiar concerns for an AI that behaves like an agent, seeking to achieve its goal. There are dissenters to this picture of how artificial general intelligence might arise. One notable alternative point of view comes from Eric Drexler, famous for his work on molecular nanotechnology and Engines of Creation, the book that popularized it.
With respect to AI, Drexler believes our view of an artificial intelligence as a single “agent” that acts to maximize a specific goal is too narrow, almost anthropomorphizing AI, and not a realistic model of how general intelligence is likely to arise. Instead, he proposes “Comprehensive AI Services” (CAIS) as an alternative route to artificial general intelligence.
What does this mean? Drexler’s argument is that we should look more closely at how machine learning and AI algorithms are actually being developed in the real world. The optimization effort is going into producing algorithms that can provide services and perform tasks like translation, music recommendations, classification, medical diagnoses, and so forth.
AI-driven improvements in technology, argues Drexler, will lead to a proliferation of different algorithms capable of automating increasingly complicated tasks. Recursive improvement in this regime is already occurring: take the newer versions of AlphaGo, which learn to improve by playing against previous versions of themselves.
Many Smart Arms, No Smart Brain
Instead of relying on some unforeseen breakthrough, the CAIS model of AI just assumes that specialized, narrow AI will continue to improve at performing each of its tasks, and the range of tasks that machine learning algorithms will be able to perform will become wider. Ultimately, once a sufficient number of tasks have been automated, the services that an AI will provide will be so comprehensive that they will resemble a general intelligence.
One could then imagine a “general” intelligence as simply an algorithm that is extremely good at matching the task you ask it to perform to the specialized service algorithm that can perform that task. Rather than acting like a single brain that strives to achieve a particular goal, the central AI would be more like a search engine, looking through the tasks it can perform to find the closest match and calling upon a series of subroutines to achieve the goal.
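The "search engine" picture above can be made concrete with a toy dispatcher. The service names and the string-similarity matching below are illustrative assumptions, not anything Drexler specifies; the point is only that the "general" layer holds no goals of its own, just a routing table.

```python
from difflib import SequenceMatcher

# A registry of narrow, specialized services, each keyed by a task
# description. Real services would be full ML systems; these are stubs.
SERVICES = {
    "translate text": lambda text: f"[translated] {text}",
    "recommend music": lambda user: f"[playlist for] {user}",
    "classify image": lambda img: f"[label for] {img}",
}

def route(task, payload):
    # The "general" layer: find the registered service whose description
    # best matches the requested task, then delegate to that subroutine.
    best = max(SERVICES,
               key=lambda desc: SequenceMatcher(None, task, desc).ratio())
    return SERVICES[best](payload)

print(route("translate this text", "bonjour"))
print(route("classify image", "cat.png"))
```

Note the safety property the article describes: removing an entry from `SERVICES` simply makes that capability unreachable; there is no central agent with an incentive to route around the restriction.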
For Drexler, this is inherently a safety feature. Rather than Bostrom’s single, impenetrable, conscious and superintelligent brain (which we must try to psychoanalyze in advance without really knowing what it will look like), we have a network of capabilities. If you don’t want your system to perform certain tasks, you can simply cut it off from access to those services. There is no superintelligent consciousness to outwit or “trap”: more like an extremely high-level programming language that can respond to complicated commands by calling upon one of the myriad specialized algorithms that have been developed by different groups.
This skirts the complex problem of consciousness and all of the sticky moral quandaries that arise in making minds that might be like ours. After all, if you could simulate a human mind, you could simulate it experiencing unimaginable pain. Black Mirror-esque dystopias, where emulated minds have no rights and are regularly “erased” or forced to labor at dull and repetitive tasks, heave into view.
Drexler argues that, in this world, there is no need to ever build a conscious algorithm. Yet it seems likely that, at some point, humans will attempt to simulate our own brains, if only in the vain attempt to pursue immortality. This model cannot hold forever. Yet its proponents argue that any world in which we could develop general AI would probably also have developed superintelligent capabilities in a huge range of different tasks, such as computer programming, natural language understanding, and so on. In other words, CAIS arrives first.
The Future In Our Hands?
Drexler argues that his model already incorporates many of the ideas from general AI development. In the marketplace, algorithms compete all the time to perform these services: they undergo the same evolutionary pressures that lead to “higher intelligence,” but the behavior that’s considered superior is chosen by humans, and the nature of the “general intelligence” is far more shaped by human decision-making and human programmers. Development in AI services could still be rapid and disruptive.
But in Drexler’s case, the research and development capacity comes from humans and organizations driven by the desire to improve algorithms that are performing individualized and useful tasks, rather than from a conscious AI recursively reprogramming and improving itself.
In other words, this vision does not absolve us of the responsibility of making our AI safe; if anything, it gives us a greater degree of responsibility. As more and more complex “services” are automated, performing what used to be human jobs at superhuman speed, the economic disruption will be severe.
Equally, as machine learning is trusted to carry out more complex decisions, avoiding algorithmic bias becomes crucial. Shaping each of these individual decision-makers—and trying to predict the complex ways they might interact with each other—is no less daunting a task than specifying the goal for a hypothetical, superintelligent, God-like AI. Arguably, the consequences of the “misalignment” of these service algorithms are already multiplying around us.
The CAIS model bridges the gap between real-world AI and machine learning development, with its practical safety considerations, and the speculative world of superintelligent agents and the problem of controlling their behavior. We should keep our minds open as to what form AI and machine learning will take, and how it will influence our societies—and we must take care to ensure that the systems we create don't end up forcing us all to live in a world of unintended consequences.
Image Credit: MF Production/Shutterstock.com