#435575 How an AI Startup Designed a Drug ...
Discovering a new drug can take decades, billions of dollars, and untold man hours from some of the smartest people on the planet. Now a startup says it’s taken a significant step towards speeding the process up using AI.
The typical drug discovery process involves carrying out physical tests on enormous libraries of molecules, and even with the help of robotics it’s an arduous process. The idea of sidestepping this by using computers to virtually screen for promising candidates has been around for decades. But progress has been underwhelming, and it’s still not a major part of commercial pipelines.
Recent advances in deep learning, however, have reignited hopes for the field, and major pharma companies have started partnering with AI-powered drug discovery startups. And now Insilico Medicine has used AI to design, in just 46 days, a molecule that effectively targets a protein involved in fibrosis—the formation of excess fibrous tissue—in mice.
The platform the company has developed combines two of the hottest sub-fields of AI: generative adversarial networks, or GANs, which power deepfakes, and reinforcement learning, which is at the heart of the most impressive game-playing AI advances of recent years.
In a paper in Nature Biotechnology, the company’s researchers describe how they trained their model on all the molecules already known to target this protein as well as many other active molecules from various datasets. The model was then used to generate 30,000 candidate molecules.
Unlike most previous efforts, they went a step further and selected the most promising molecules for testing in the lab. The 30,000 candidates were whittled down to just six using more conventional drug discovery approaches, and those six were then synthesized and put through increasingly stringent tests. The leading candidate was found to be effective at targeting the desired protein and behaved as one would hope a drug would.
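To make the shape of that generate-and-filter pipeline concrete, here is a minimal, hypothetical sketch in Python. It is not Insilico's actual model: the "generator" below is just a random fragment sampler and the "score" a made-up ranking function, stand-ins for a trained GAN-style generator and for a reinforcement learning reward built from predicted activity and drug-likeness.

```python
# A toy generate-score-filter loop. This is illustrative only: the fragments,
# the random "generator," and the scoring function are all invented stand-ins
# for a trained generative network and real activity/drug-likeness predictors.
import random

FRAGMENTS = ["C", "CC", "c1ccccc1", "N", "O", "C(=O)O", "F"]  # toy building blocks

def generate_candidate(rng):
    """Stand-in for a generative model: glue random fragments together."""
    return "".join(rng.choice(FRAGMENTS) for _ in range(rng.randint(3, 8)))

def score(molecule):
    """Stand-in reward; a real one would predict target affinity, novelty,
    and synthesizability, and would also steer the generator via RL."""
    return molecule.count("c1ccccc1") + 0.1 * len(set(molecule))

def design_round(n_candidates=30_000, keep=6, seed=0):
    """Generate many candidates, then keep only the top few for lab synthesis."""
    rng = random.Random(seed)
    candidates = [generate_candidate(rng) for _ in range(n_candidates)]
    return sorted(candidates, key=score, reverse=True)[:keep]

if __name__ == "__main__":
    for mol in design_round():
        print(round(score(mol), 2), mol)
```

In a real reinforcement learning setup, that reward would also be fed back to update the generator rather than merely ranking its output.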
The authors are clear that the results are just a proof-of-concept, which company CEO Alex Zhavoronkov told Wired stemmed from a challenge set by a pharma partner to design a drug as quickly as possible. But they say they were able to carry out the process faster than traditional methods for a fraction of the cost.
There are some caveats. For a start, the protein being targeted is already very well known and multiple effective drugs exist for it. That gave the company a wealth of data to train their model on, something that isn’t the case for many of the diseases where we urgently need new drugs.
The company’s platform also only targets the very initial stages of the drug discovery process. The authors concede in their paper that the molecules would still take considerable optimization in the lab before they’d be true contenders for clinical trials.
“And that is where you will start to begin to commence to spend the vast piles of money that you will eventually go through in trying to get a drug to market,” writes Derek Lowe in his blog In The Pipeline. The part of the discovery process that the platform tackles represents a tiny fraction of the total cost of drug development, he says.
Nonetheless, the research is a definite advance for virtual screening technology and an important marker of the potential of AI for designing new medicines. Zhavoronkov also told Wired that this research was done more than a year ago, and they’ve since adapted the platform to go after harder drug targets with less data.
And with big pharma companies desperate to slash their ballooning development costs and find treatments for a host of intractable diseases, they can use all the help they can get.
Image Credit: freestocks.org / Unsplash
#435436 Undeclared Wars in Cyberspace Are ...
The US is at war. That’s probably not exactly news, as the country has been engaged in one type of conflict or another for most of its history. The last time we officially declared war was after Japan bombed Pearl Harbor in December 1941.
Our biggest undeclared war today is not being fought by drones in the mountains of Afghanistan or even through the less-lethal barrage of threats over the nuclear programs in North Korea and Iran. In this particular war, it is the US that is under attack and on the defensive.
This is cyberwarfare.
The definition of what constitutes a cyber attack is a broad one, according to Greg White, executive director of the Center for Infrastructure Assurance and Security (CIAS) at The University of Texas at San Antonio (UTSA).
At the level of nation-state attacks, cyberwarfare could involve “attacking systems during peacetime—such as our power grid or election systems—or it could be during war time in which case the attacks may be designed to cause destruction, damage, deception, or death,” he told Singularity Hub.
For the US, the Pearl Harbor of cyberwarfare occurred during 2016 with the Russian interference in the presidential election. However, according to White, an Air Force veteran who has been involved in computer and network security since 1986, the history of cyber war can be traced back much further, to at least the first Gulf War of the early 1990s.
“We started experimenting with cyber attacks during the first Gulf War, so this has been going on a long time,” he said. “Espionage was the prime reason before that. After the war, the possibility of expanding the types of targets utilized expanded somewhat. What is really interesting is the use of social media and things like websites for [psychological operation] purposes during a conflict.”
The 2008 conflict between Russia and the Republic of Georgia is often cited as a cyberwarfare case study due to the large scale and overt nature of the cyber attacks. Russian hackers managed to bring down more than 50 news, government, and financial websites through denial-of-service attacks. In addition, about 35 percent of Georgia’s internet networks suffered decreased functionality during the attacks, coinciding with the Russian invasion of South Ossetia.
The cyberwar also offers lessons for today on Russia’s approach to cyberspace as a tool for “holistic psychological manipulation and information warfare,” according to a 2018 report called Understanding Cyberwarfare from the Modern War Institute at West Point.
US Fights Back
News in recent years has highlighted how Russian hackers have attacked various US government entities and critical infrastructure such as energy and manufacturing. In particular, a shadowy group known as Unit 26165 within the country’s military intelligence directorate is believed to be behind the 2016 US election interference campaign.
However, the US hasn’t been standing idly by. Since at least 2012, the US has put reconnaissance probes into the control systems of the Russian electric grid, The New York Times reported. More recently, we learned that the US military has gone on the offensive, putting “crippling malware” inside the Russian power grid as US Cyber Command flexes its online muscles thanks to new authority granted to it last year.
“Access to the power grid that is obtained now could be used to shut something important down in the future when we are in a war,” White noted. “Espionage is part of the whole program. It is important to remember that cyber has just provided a new domain in which to conduct the types of activities we have been doing in the real world for years.”
The US is also beginning to pour more money into cybersecurity. The 2020 fiscal budget calls for spending $17.4 billion throughout the government on cyber-related activities, with the Department of Defense (DoD) alone earmarked for $9.6 billion.
Despite the growing emphasis on cybersecurity in the US and around the world, the demand for skilled security professionals is well outpacing the supply, with a projected shortfall of nearly three million open or unfilled positions according to the non-profit IT security organization (ISC)².
UTSA is rare among US educational institutions in that security courses and research are being conducted across three different colleges, according to White. About 10 percent of the school’s 30,000-plus students are enrolled in a cyber-related program, he added, and UTSA is one of only 21 schools that have received the Cyber Operations Center of Excellence designation from the National Security Agency.
“This track in the computer science program is specifically designed to prepare students for the type of jobs they might be involved in if they went to work for the DoD,” White said.
However, White is extremely doubtful there will ever be enough cyber security professionals to meet demand. “I’ve been preaching that we’ve got to worry about cybersecurity in the workforce, not just the cybersecurity workforce, not just cybersecurity professionals. Everybody has a responsibility for cybersecurity.”
Artificial Intelligence in Cybersecurity
Indeed, humans are often seen as the weak link in cybersecurity. That point was driven home at a cybersecurity roundtable discussion during this year’s Brainstorm Tech conference in Aspen, Colorado.
Participant Dorian Daley, general counsel at Oracle, said insider threats are at the top of the list when it comes to cybersecurity. “Sadly, I think some of the biggest challenges are people, and I mean that in a number of ways. A lot of the breaches really come from insiders. So the more that you can automate things and you can eliminate human malicious conduct, the better.”
White noted that automation is already the norm in cybersecurity. “Humans can’t react as fast as systems can launch attacks, so we need to rely on automated defenses as well,” he said. “This doesn’t mean that humans are not in the loop, but much of what is done these days is ‘scripted’.”
According to White, the use of artificial intelligence, machine learning, and other advanced automation techniques, such as pattern analysis that looks for specific behaviors that might indicate an attack is underway, has been part of the cybersecurity conversation for quite some time.
“What we are seeing quite a bit of today falls under the heading of big data and data analytics,” he explained.
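As a rough illustration of the kind of scripted pattern analysis White describes, the sketch below flags time windows whose event counts jump well above a rolling baseline. The stream of failed-login counts is invented, and real defenses use far richer features and models; this is only meant to show the shape of the approach.

```python
# A minimal baseline-and-threshold detector: flag minutes whose event count
# deviates sharply from the recent average. Purely illustrative.
from statistics import mean, stdev

def flag_anomalies(counts_per_minute, window=10, z_threshold=3.0):
    """Return indices of minutes whose count sits far above the rolling baseline."""
    flagged = []
    for i in range(window, len(counts_per_minute)):
        baseline = counts_per_minute[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (counts_per_minute[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Hypothetical failed-login counts per minute, with a burst at the end.
stream = [4, 5, 3, 6, 5, 4, 5, 6, 4, 5, 5, 4, 6, 5, 48, 52]
print(flag_anomalies(stream))  # -> [14, 15]
```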
But there are signs that AI is going off-script when it comes to cyber attacks. In the hands of threat groups, AI applications could lead to an increase in the number of cyberattacks, wrote Michelle Cantos, a strategic intelligence analyst at cybersecurity firm FireEye.
“Current AI technology used by businesses to analyze consumer behavior and find new customer bases can be appropriated to help attackers find better targets,” she said. “Adversaries can use AI to analyze datasets and generate recommendations for high-value targets they think the adversary should hit.”
In fact, security researchers have already demonstrated how a machine learning system could be used for malicious purposes. The Social Network Automated Phishing with Reconnaissance system, or SNAP_R, generated more than four times as many spear-phishing tweets on Twitter as a human—and was just as successful at targeting victims in order to steal sensitive information.
Cyber war is upon us. And like the current war on terrorism, there are many battlefields from which the enemy can attack and then disappear. While total victory is highly unlikely in the traditional sense, innovations through AI and other technologies can help keep the lights on against the next cyber attack.
Image Credit: pinkeyes / Shutterstock.com
#435308 Brain-Machine Interfaces Are Getting ...
Elon Musk grabbed a lot of attention with his July 16 announcement that his company Neuralink plans to implant electrodes into the brains of people with paralysis by next year. The company’s first goal is to create assistive technology to help people who can’t move or are unable to communicate.
If you haven’t been paying attention, brain-machine interfaces (BMIs) that allow people to control robotic arms with their thoughts might sound like science fiction. But science and engineering efforts have already turned it into reality.
For more than a decade, scientists and physicians in a few research labs around the world have been implanting devices into the brains of people who have lost the ability to control their arms or hands. In our own research group at the University of Pittsburgh, we’ve enabled people with paralyzed arms and hands to control robotic arms that allow them to grasp and move objects with relative ease. They can even experience touch-like sensations from their own hand when the robot grasps objects.
At its core, a BMI is pretty straightforward. In your brain, microscopic cells called neurons are sending signals back and forth to each other all the time. Everything you think, do and feel as you interact with the world around you is the result of the activity of these 80 billion or so neurons.
If you implant a tiny wire very close to one of these neurons, you can record the electrical activity it generates and send it to a computer. Record enough of these signals from the right area of the brain and it becomes possible to control computers, robots, or anything else you might want, simply by thinking about moving. But doing this comes with tremendous technical challenges, especially if you want to record from hundreds or thousands of neurons.
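As an illustration of that decoding step, here is a toy sketch: a linear decoder fit by least squares that maps firing rates recorded on many channels to an intended two-dimensional movement. Both the data and the model are invented for the example; real brain-machine interfaces use more elaborate, continually adapting decoders.

```python
# Toy BMI decoder: fit a linear map from multi-channel firing rates to a 2D
# movement command using synthetic data. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 96, 500            # e.g. a 96-channel array, 500 time bins

# Synthetic "ground truth": each neuron's rate depends linearly on hand velocity.
true_tuning = rng.normal(size=(n_neurons, 2))             # preferred directions
velocity = rng.normal(size=(n_samples, 2))                 # intended (vx, vy)
rates = velocity @ true_tuning.T + 5.0                     # baseline-shifted rates
rates += rng.normal(scale=0.5, size=rates.shape)           # recording noise

# Fit the decoder: weights mapping rates (plus a bias term) to velocity.
X = np.hstack([rates, np.ones((n_samples, 1))])
weights, *_ = np.linalg.lstsq(X, velocity, rcond=None)

# Decode a new pattern of firing rates into a movement command.
new_rates = np.array([1.0, -0.5]) @ true_tuning.T + 5.0
decoded_velocity = np.append(new_rates, 1.0) @ weights
print(decoded_velocity)   # close to [1.0, -0.5]
```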
What Neuralink Is Bringing to the Table
Elon Musk founded Neuralink in 2017, aiming to address these challenges and raise the bar for implanted neural interfaces.
Perhaps the most impressive aspect of Neuralink’s system is the breadth and depth of their approach. Building a BMI is inherently interdisciplinary, requiring expertise in electrode design and microfabrication, implantable materials, surgical methods, electronics, packaging, neuroscience, algorithms, medicine, regulatory issues, and more. Neuralink has created a team that spans most, if not all, of these areas.
With all of this expertise, Neuralink is undoubtedly moving the field forward and improving their technology rapidly. Individually, many of the components of their system represent significant progress along predictable paths. For example, their electrodes, which they call threads, are very small and flexible; many researchers have tried to harness those properties to minimize the chance that the brain’s immune response would reject the electrodes after insertion. Neuralink has also developed high-performance miniature electronics, another focus area for labs working on BMIs.
Often overlooked in academic settings, however, is how an entire system would be efficiently implanted in a brain.
Neuralink’s BMI requires brain surgery. This is because implanted electrodes that are in intimate contact with neurons will always outperform non-invasive electrodes where neurons are far away from the electrodes sitting outside the skull. So, a critical question becomes how to minimize the surgical challenges around getting the device into a brain.
Maybe the most impressive aspect of Neuralink’s announcement was that they created a 3,000-electrode neural interface where electrodes could be implanted at a rate of between 30 and 200 per minute. Each thread of electrodes is implanted by a sophisticated surgical robot that essentially acts like a sewing machine. This all happens while specifically avoiding the blood vessels that blanket the surface of the brain. The robotics and imaging that enable this feat, with tight integration to the entire device, are striking.
Neuralink has thought through the challenge of developing a clinically viable BMI from beginning to end in a way that few groups have done, though they acknowledge that many challenges remain as they work towards getting this technology into human patients in the clinic.
Figuring Out What More Electrodes Get You
The quest for implantable devices with thousands of electrodes is not only the domain of private companies. DARPA, the NIH BRAIN Initiative, and international consortiums are working on neurotechnologies for recording and stimulating in the brain with goals of tens of thousands of electrodes. But what might scientists do with the information from 1,000, 3,000, or maybe even 100,000 neurons?
At some level, devices with more electrodes might not actually be necessary to have a meaningful impact in people’s lives. Effective control of computers for access and communication, of robotic limbs to grasp and move objects, and of paralyzed muscles is already happening—in people. And it has been for a number of years.
Since the 1990s, the Utah Array, which has just 100 electrodes and is manufactured by Blackrock Microsystems, has been a critical device in neuroscience and clinical research. This electrode array is FDA-cleared for temporary neural recording. Several research groups, including our own, have implanted Utah Arrays in people, where they have remained functional for multiple years.
Currently, the biggest constraints are related to connectors, electronics, and system-level engineering, not the implanted electrode itself—although increasing the electrodes’ lifespan to more than five years would represent a significant advance. As those technical capabilities improve, it might turn out that the ability to accurately control computers and robots is limited more by scientists’ understanding of what the neurons are saying—that is, the neural code—than by the number of electrodes on the device.
Even the most capable implanted system, and maybe the most capable devices researchers can reasonably imagine, might fall short of the goal of actually augmenting skilled human performance. Nevertheless, Neuralink’s goal of creating better BMIs has the potential to improve the lives of people who can’t move or are unable to communicate. Right now, Musk’s vision of using BMIs to meld physical brains and intelligence with artificial ones is no more than a dream.
So, what does the future look like for Neuralink and other groups creating implantable BMIs? Devices with more electrodes that last longer and are connected to smaller and more powerful wireless electronics are essential. Better devices themselves, however, are insufficient. Continued public and private investment in companies and academic research labs, as well as innovative ways for these groups to work together to share technologies and data, will be necessary to truly advance scientists’ understanding of the brain and deliver on the promise of BMIs to improve peoples’ lives.
While researchers need to keep the future societal implications of advanced neurotechnologies in mind—there’s an essential role for ethicists and regulation—BMIs could be truly transformative as they help more people overcome limitations caused by injury or disease in the brain and body.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: UPMC/Pitt Health Sciences / CC BY-NC-ND
#435260 How Tech Can Help Curb Emissions by ...
Trees are a low-tech, high-efficiency way to offset much of humankind’s negative impact on the climate. What’s even better, we have plenty of room for a lot more of them.
A new study conducted by researchers at Switzerland’s ETH-Zürich, published in Science, details how Earth could support almost an additional billion hectares of trees without the new forests pushing into existing urban or agricultural areas. Once the trees grow to maturity, they could store more than 200 billion metric tons of carbon.
Great news indeed, but it still leaves us with some huge unanswered questions. Where and how are we going to plant all the new trees? What kind of trees should we plant? How can we ensure that the new forests become a boon for people in those areas?
Answers to all of the above likely involve technology.
Math + Trees = Challenges
The ETH-Zürich research team combined Google Earth mapping software with a database of nearly 80,000 existing forests to create a predictive model for optimal planting locations. In total, 0.9 billion hectares of new, continuous forest could be planted. Once mature, the 500 billion new trees in these forests would be capable of storing about two-thirds of the carbon we have emitted since the industrial revolution.
Other researchers have noted that the study may overestimate how efficient trees are at storing carbon, as well as underestimate how much carbon humans have emitted over time. However, all seem to agree that new forests would offset much of our cumulative carbon emissions—still an impressive feat as the target of keeping global warming this century at under 1.5 degrees Celsius becomes harder and harder to reach.
Recently, there was a story about a Brazilian couple who replanted trees in the valley where they live. The couple planted about 2.7 million trees in two decades. Back-of-the-napkin math shows that they planted an average of 370 trees a day, and at that pace a single planting effort would need roughly 3.7 million years to put 500 billion trees in the ground. Even a million people each planting 370 trees a day would need nearly four years of nonstop work. While an over-simplification, the point is that planting trees by hand at this scale is not realistic. Current technologies are also not likely to be able to meet the challenge, especially in remote locations.
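For anyone who wants to check the back-of-the-napkin math, it can be reproduced in a few lines using the figures above:

```python
# Reproducing the back-of-the-napkin math with the article's own figures.
trees_planted, years = 2_700_000, 20
per_day = trees_planted / (years * 365)                        # ~370 trees per day

target = 500_000_000_000                                        # 500 billion trees
solo_years = target / per_day / 365                             # ~3.7 million years
million_people_years = target / (per_day * 1_000_000) / 365     # ~3.7 years

print(round(per_day), round(solo_years), round(million_people_years, 1))
# -> 370 3703704 3.7
```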
Tree-Bombing Drones
Technology can speed up the planting process, including a new generation of drones that take tree planting to the skies. Drone planting generally involves dropping biodegradable seed pods at a designated area. The pods dissolve over time, and the tree seeds grow in the earth below. DroneSeed is one example; its 55-pound drones can plant up to 800 seeds an hour. Another startup, BioCarbon Engineering, has used various techniques, including drones, to plant 38 different species of trees across three continents.
Drone planting has distinct advantages when it comes to planting in hard-to-access areas—one example is mangrove forests, which are disappearing rapidly, increasing the risk of floods and storm surges.
Challenges include increasing the range and speed of drone planting and, perhaps most importantly, improving the success rate, since dropping seeds from a height offers less control over how deep they end up than planting by hand. However, drones are already showing impressive numbers for sapling survival rates.
AI, Sensors, and Eye-In-the-Sky
Planting the trees is the first step in a long road toward an actual forest. Companies are leveraging artificial intelligence and satellite imagery in a multitude of ways to increase protection and understanding of forested areas.
20tree.ai, a Portugal-based startup, uses AI to analyze satellite imagery and monitor the state of entire forests at a fraction of the cost of manual monitoring. The approach can lead to faster identification of threats like pest infestation and a better understanding of the state of forests.
AI can also play a pivotal role in protecting existing forest areas by predicting where deforestation is likely to occur.
Closer to the ground—and sometimes in it—new networks of sensors can provide detailed information about the state and needs of trees. One such project is Trace, in which individual trees are equipped with a TreeTalker, an internet-of-things device that provides real-time monitoring of the tree’s functions and well-being. The information can be used to, among other things, optimize the use of available resources, such as providing the exact amount of water a tree needs.
Budding Technologies Are Controversial
Trees are in many ways flora’s marathon runners—slow-growing and sturdy, but still susceptible to sickness and pests. Many deforested areas are likely not as rich in nutrients as they once were, which could slow down reforestation. Much of the positive impact that new trees could have on carbon levels in the atmosphere is likely decades away.
Bioengineering, for example through CRISPR, could provide solutions, making trees more resistant and faster-growing. Such technologies are being explored in relation to Ghana’s at-risk cocoa trees. Other exponential technologies could also hold much future potential—for instance micro-robots to assist the dwindling number of bees with pollination.
These technologies remain mired in controversy, and perhaps rightfully so. Bioengineering’s massive potential is, for many, offset by the inherent risks of engineered plants out-competing existing flora or growing beyond our control. Micro-robots for pollination may solve a problem, but they don’t do much to address the root cause: that we seem to be disrupting and destroying integral parts of natural cycles.
Tech Not The Whole Answer
So, is it realistic to plant 500 billion new trees? The short answer would be that yes, it’s possible—with the help of technology.
However, there are many unanswered challenges. For example, many of the areas identified by the ETH-Zürich research team are not readily available for reforestation. Some are currently reserved for grazing, others are owned by private entities, and still others are located in remote areas or regions prone to political instability, beyond the reach of most replanting efforts.
If we do wish to plant 500 billion trees to offset some of the negative impacts we have had on the planet, we might well want to combine the best of exponential technology with reforestation as well as a move to other forms of agriculture.
Such an approach might also help address a major issue: that few of the proposed new forests will likely succeed without ensuring that people living in and around the areas where reforestation takes place become involved, and can reap rewards from turning arable land into forests.
Image Credit: Lillac/Shutterstock.com