Tag Archives: algorithms
#436911 Scientists Linked Artificial and ...
Scientists have linked up two silicon-based artificial neurons with a biological one across multiple countries into a fully-functional network. Using standard internet protocols, they established a chain of communication whereby an artificial neuron controls a living, biological one, and passes on the info to another artificial one.
Whoa.
We’ve talked plenty about brain-computer interfaces and novel computer chips that resemble the brain. We’ve covered how those “neuromorphic” chips could link up into tremendously powerful computing entities, using engineered communication nodes called artificial synapses.
As Moore’s law is dying, we even said that neuromorphic computing is one path towards the future of extremely powerful, low energy consumption artificial neural network-based computing—in hardware—that could in theory better link up with the brain. Because the chips “speak” the brain’s language, in theory they could become neuroprosthesis hubs far more advanced and “natural” than anything currently possible.
This month, an international team put all of those ingredients together, turning theory into reality.
The three labs, scattered across Padova, Italy, Zurich, Switzerland, and Southampton, England, collaborated to create a fully self-controlled, hybrid artificial-biological neural network that communicated using biological principles, but over the internet.
The three-neuron network, linked through artificial synapses that emulate the real thing, was able to reproduce a classic neuroscience experiment that’s considered the basis of learning and memory in the brain. In other words, artificial neuron and synapse “chips” have progressed to the point where they can actually use a biological neuron intermediary to form a circuit that, at least partially, behaves like the real thing.
That’s not to say cyborg brains are coming soon. The simulation only recreated a small network that supports excitatory transmission in the hippocampus—a critical region that supports memory—and most brain functions require enormous cross-talk between numerous neurons and circuits. Nevertheless, the study is a jaw-dropping demonstration of how far we’ve come in recreating biological neurons and synapses in artificial hardware.
And perhaps one day, the currently “experimental” neuromorphic hardware will be integrated into broken biological neural circuits as bridges to restore movement, memory, personality, and even a sense of self.
The Artificial Brain Boom
One important thing: this study relies heavily on a decade of research into neuromorphic computing, or the implementation of brain functions inside computer chips.
The best-known example is perhaps IBM’s TrueNorth, which leveraged the brain’s computational principles to build a completely different computer than what we have today. Today’s computers run on a von Neumann architecture, in which memory and processing modules are physically separate. In contrast, the brain’s computing and memory are simultaneously achieved at synapses, small “hubs” on individual neurons that talk to adjacent ones.
Because memory and processing occur on the same site, biological neurons don’t have to shuttle data back and forth between processing and storage compartments, massively reducing processing time and energy use. What’s more, a neuron’s history will also influence how it behaves in the future, increasing flexibility and adaptability compared to computers. With the rise of deep learning, which loosely mimics neural processing, as the prima donna of AI, the need to reduce power while boosting speed and learning flexibility is becoming ever more paramount in the AI community.
Neuromorphic computing was partially born out of this need. Most chips utilize special ingredients that change their resistance (or other physical characteristics) to mimic how a neuron might adapt to stimulation. Some chips emulate a whole neuron, that is, how it responds to a history of stimulation—does it get easier or harder to fire? Others imitate synapses themselves, that is, how easily they will pass on the information to another neuron.
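As a rough illustration of what “emulating a whole neuron” means, here is a minimal leaky integrate-and-fire model in Python. The threshold and leak values are arbitrary, and real neuromorphic chips implement this behavior in analog hardware rather than code:

```python
# Minimal leaky integrate-and-fire neuron: membrane "voltage" accumulates
# input current, leaks a little each step, and emits a spike when it
# crosses a threshold, after which it resets.
class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.v = 0.0              # membrane potential
        self.threshold = threshold
        self.leak = leak          # fraction of potential retained each step

    def step(self, current):
        self.v = self.v * self.leak + current
        if self.v >= self.threshold:
            self.v = 0.0          # reset after firing
            return 1              # spike
        return 0

neuron = LIFNeuron()
spikes = [neuron.step(0.4) for _ in range(10)]  # periodic firing pattern
```

Because the potential carries over between steps, the neuron’s response depends on its history of stimulation, which is exactly the property the chips described above try to capture.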
Although single neuromorphic chips have proven to be far more efficient and powerful than current computer chips running machine learning algorithms in toy problems, so far few people have tried putting the artificial components together with biological ones in the ultimate test.
That’s what this study did.
A Hybrid Network
Still with me? Let’s talk network.
It’s gonna sound complicated, but remember: learning is the formation of neural networks, and neurons that fire together wire together. To rephrase: when learning, neurons will spontaneously organize into networks so that future instances will re-trigger the entire network. To “wire” together, downstream neurons will become more responsive to their upstream neural partners, so that even a whisper will cause them to activate. In contrast, some types of stimulation will cause the downstream neuron to “chill out” so that only an upstream “shout” will trigger downstream activation.
Both these properties—easier or harder to activate downstream neurons—are essentially how the brain forms connections. The “amping up,” in neuroscience jargon, is long-term potentiation (LTP), whereas the down-tuning is long-term depression (LTD). These two phenomena were first discovered in the rodent hippocampus more than half a century ago, and ever since have been considered the biological basis of how the brain learns and remembers, and implicated in neurological problems such as addiction (seriously, you can’t pass Neuro 101 without learning about LTP and LTD!).
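A toy sketch of how LTP and LTD are often modeled, known as spike-timing-dependent plasticity, might look like this in Python. The constants are illustrative, not taken from the study:

```python
# Toy spike-timing-dependent plasticity (STDP): if the upstream neuron fires
# just before the downstream one, strengthen the synapse (LTP); if it fires
# just after, weaken it (LTD). The effect decays with the timing gap.
import math

def stdp_update(weight, dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """dt = t_post - t_pre, in milliseconds."""
    if dt > 0:      # pre before post -> potentiation
        weight += a_plus * math.exp(-dt / tau)
    elif dt < 0:    # post before pre -> depression
        weight -= a_minus * math.exp(dt / tau)
    return max(0.0, weight)

w = 0.5
w_ltp = stdp_update(w, dt=5.0)    # causal pairing strengthens the synapse
w_ltd = stdp_update(w, dt=-5.0)   # anti-causal pairing weakens it
```

The same asymmetry—strengthen on causal pairings, weaken otherwise—is what the hybrid network below had to reproduce to count as “learning.”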
So it’s perhaps especially salient that one of the first artificial-brain hybrid networks recapitulated this classic result.
To visualize: the three-neuron network began in Switzerland, with an artificial neuron with the badass name of “silicon spiking neuron.” That neuron is linked to an artificial synapse, a “memristor” located in the UK, which is then linked to a biological rat neuron cultured in Italy. The rat neuron has a “smart” microelectrode, controlled by the artificial synapse, to stimulate it. This is the artificial-to-biological pathway.
Meanwhile, the rat neuron in Italy also has electrodes that listen in on its electrical signaling. This signaling is passed back to another artificial synapse in the UK, which is then used to control a second artificial neuron back in Switzerland. This is the biological-to-artificial pathway back. In a testament to how far we’ve come in digitizing neural signaling, all of the biological neural responses are digitized and sent over the internet to control their far-out artificial partner.
Here’s the crux: to demonstrate a functional neural network, just having the biological neuron passively “pass on” electrical stimulation isn’t enough. It has to show the capacity to learn, that is, to be able to mimic the amping up and down-tuning that are LTP and LTD, respectively.
You’ve probably guessed the results: certain stimulation patterns to the first artificial neuron in Switzerland changed how the artificial synapse in the UK operated. This, in turn, changed the stimulation to the biological neuron, so that it either amped up or toned down depending on the input.
Similarly, the response of the biological neuron altered the second artificial synapse, which then controlled the output of the second artificial neuron. Altogether, the biological and artificial components seamlessly linked up, over thousands of miles, into a functional neural circuit.
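The chain described above can be caricatured in a few lines of Python. This is purely illustrative: the “biological neuron” here is just a threshold, and the plasticity rule is invented for the sketch:

```python
# Caricature of the hybrid chain: artificial neuron -> memristive synapse ->
# (stand-in for the) biological neuron -> second synapse -> artificial output.
# The memristor is modeled as a conductance that drifts upward with use.
class MemristorSynapse:
    def __init__(self, conductance=0.5):
        self.g = conductance

    def transmit(self, spike, plasticity=0.05):
        if spike:
            self.g = min(1.0, self.g + plasticity)  # use strengthens it (LTP-like)
        return spike * self.g  # weighted signal passed downstream

def relay(spike_train):
    syn_in, syn_out = MemristorSynapse(), MemristorSynapse()
    out = []
    for s in spike_train:
        current = syn_in.transmit(s)
        bio_spike = 1 if current > 0.52 else 0  # toy "biological" threshold
        out.append(syn_out.transmit(bio_spike))
    return out

output = relay([1, 1, 1, 1])  # repeated stimulation -> growing responses
```

Note how repeated stimulation makes each synapse pass a stronger signal over time, a crude stand-in for the amping up the real circuit demonstrated.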
Cyborg Mind-Meld
So…I’m still picking my jaw up off the floor.
It’s utterly insane seeing a classic neuroscience learning experiment repeated with an integrated network with artificial components. That said, a three-neuron network is far from the thousands of synapses (if not more) needed to truly re-establish a broken neural circuit in the hippocampus, which DARPA has been aiming to do. And LTP/LTD has come under fire recently as the de facto brain mechanism for learning, though so far they remain cemented as neuroscience dogma.
However, this is one of the few studies where you see fields coming together. As Richard Feynman famously said, “What I cannot create, I do not understand.” Even though neuromorphic chips were built on a high-level rather than molecular-level understanding of how neurons work, the study shows that artificial versions can still synapse with their biological counterparts. We’re not just on the right path towards understanding the brain, we’re recreating it, in hardware—if just a little.
While the study doesn’t have immediate use cases, it does give a practical boost to both the neuromorphic computing and neuroprosthetics fields.
“We are very excited with this new development,” said study author Dr. Themis Prodromakis at the University of Southampton. “On one side it sets the basis for a novel scenario that was never encountered during natural evolution, where biological and artificial neurons are linked together and communicate across global networks; laying the foundations for the Internet of Neuro-electronics. On the other hand, it brings new prospects to neuroprosthetic technologies, paving the way towards research into replacing dysfunctional parts of the brain with AI chips.”
Image Credit: Gerd Altmann from Pixabay
#436774 AI Is an Energy-Guzzler. We Need to ...
There is a saying that has emerged among the tech set in recent years: AI is the new electricity. The platitude refers to the disruptive power of artificial intelligence for driving advances in everything from transportation to predicting the weather.
Of course, the computers and data centers that support AI’s complex algorithms are very much dependent on electricity. While that may seem pretty obvious, it may be surprising to learn that AI can be extremely power-hungry, especially when it comes to training the models that enable machines to recognize your face in a photo or for Alexa to understand a voice command.
The scale of the problem is difficult to measure, but there have been some attempts to put hard numbers on the environmental cost.
For instance, one paper published on the open-access repository arXiv claimed that the carbon emissions for training a basic natural language processing (NLP) model—algorithms that process and understand language-based data—are equal to the CO2 produced by the average American lifestyle over two years. A more robust model required the equivalent of about 17 years’ worth of emissions.
The authors noted that about a decade ago, NLP models could do the job on a regular commercial laptop. Today, much more sophisticated AI models use specialized hardware like graphics processing units, or GPUs, a chip technology popularized by Nvidia for gaming that also proved capable of supporting computing tasks for AI.
OpenAI, a nonprofit research organization co-founded by tech prophet and profiteer Elon Musk, said that the computing power “used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time” since 2012. That’s about the time that GPUs started making their way into AI computing systems.
Getting Smarter About AI Chip Design
While GPUs from Nvidia remain the gold standard in AI hardware today, a number of startups have emerged to challenge the company’s industry dominance. Many are building chipsets designed to work more like the human brain, an area that’s been dubbed neuromorphic computing.
One of the leading companies in this arena is Graphcore, a UK startup that has raised more than $450 million and boasts a valuation of $1.95 billion. The company’s version of the GPU is an IPU, which stands for intelligence processing unit.
To build a computer brain more akin to a human one, the big brains at Graphcore are swapping the precise but time-consuming number-crunching typical of a conventional microprocessor for arithmetic that’s content to be less precise.
The results are essentially the same, but IPUs get the job done much quicker. Graphcore claimed it was able to train the popular BERT NLP model in just 56 hours, while tripling throughput and reducing latency by 20 percent.
An article in Bloomberg compared the approach to the “human brain shifting from calculating the exact GPS coordinates of a restaurant to just remembering its name and neighborhood.”
Graphcore’s hardware architecture also features more built-in memory processing, boosting efficiency because there’s less need to send as much data back and forth between chips. That’s similar to an approach adopted by a team of researchers in Italy that recently published a paper about a new computing circuit.
The novel circuit uses a device called a memristor that can execute a mathematical function known as a regression in just one operation. The approach attempts to mimic the human brain by processing data directly within the memory.
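In software terms, what the memristor array computes is an ordinary least-squares fit. A digital stand-in using NumPy makes the math explicit; the analog circuit arrives at the same answer in what is effectively a single physical settling step rather than a sequence of digital operations:

```python
# Digital equivalent of the regression the memristor array performs in-memory.
import numpy as np

X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])          # inputs with a bias column
y = np.array([1.0, 3.0, 5.0])      # targets (these points lie on y = 1 + 2x)

# Least-squares solution: many sequential multiply-accumulate steps for a
# digital chip, but one analog operation for the in-memory circuit.
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # recovers intercept 1, slope 2
```

The energy win comes from never moving `X` out of memory: in the physical array the data is stored as conductances, and the answer appears as currents and voltages across them.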
Daniele Ielmini at Politecnico di Milano, co-author of the Science Advances paper, told Singularity Hub that the main advantage of in-memory computing is the lack of any data movement, which is the main bottleneck of conventional digital computers, as well as the parallel processing of data that enables the intimate interactions among various currents and voltages within the memory array.
Ielmini explained that in-memory computing can have a “tremendous impact on energy efficiency of AI, as it can accelerate very advanced tasks by physical computation within the memory circuit.” He added that such “radical ideas” in hardware design will be needed in order to make a quantum leap in energy efficiency and time.
It’s Not Just a Hardware Problem
The emphasis on designing more efficient chip architecture might suggest that AI’s power hunger is essentially a hardware problem. That’s not the case, Ielmini noted.
“We believe that significant progress could be made by similar breakthroughs at the algorithm and dataset levels,” he said.
He’s not the only one.
One of the key research areas at Qualcomm’s AI research lab is energy efficiency. Max Welling, vice president of Qualcomm’s technology R&D division, has written about the need for more power-efficient algorithms. He has gone so far as to suggest that AI algorithms will be measured by the amount of intelligence they provide per joule.
One emerging area being studied, Welling wrote, is the use of Bayesian deep learning for deep neural networks.
It’s all pretty heady stuff and easily the subject of a PhD thesis. The main thing to understand in this context is that Bayesian deep learning is another attempt to mimic how the brain processes information by introducing random values into the neural network. A benefit of Bayesian deep learning is that it compresses and quantizes data in order to reduce the complexity of a neural network. In turn, that reduces the number of “steps” required to recognize a dog as a dog—and the energy required to get the right result.
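A minimal sketch of the core idea, with every weight a distribution that gets sampled on each forward pass, might look like this. The numbers are arbitrary, and real Bayesian deep learning learns these distributions from data rather than setting them by hand:

```python
# Toy "Bayesian" layer: each weight is a distribution (mean + std) and is
# sampled anew on every forward pass, injecting the randomness described above.
import random

class BayesianWeight:
    def __init__(self, mean, std):
        self.mean, self.std = mean, std

    def sample(self):
        return random.gauss(self.mean, self.std)

def forward(x, weights):
    # Weighted sum with freshly sampled weights -> a slightly different
    # answer every call, whose spread reflects the network's uncertainty.
    return sum(w.sample() * xi for w, xi in zip(weights, x))

random.seed(0)
weights = [BayesianWeight(0.5, 0.1), BayesianWeight(-0.2, 0.05)]
outputs = [forward([1.0, 2.0], weights) for _ in range(1000)]
mean_out = sum(outputs) / len(outputs)  # concentrates near 0.5*1 - 0.2*2 = 0.1
```

Weights whose distributions collapse to near-certain zeros can be pruned away entirely, which is where the compression, and the energy saving, comes from.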
A team at Oak Ridge National Laboratory has previously demonstrated another way to improve AI energy efficiency: converting deep learning neural networks into what’s called a spiking neural network. The researchers “spiked” their deep spiking neural network (DSNN) by introducing a stochastic process that adds random values, much as Bayesian deep learning does.
The DSNN imitates the way neurons interact with synapses, which send signals between brain cells. Individual “spikes” in the network indicate where to perform computations, lowering energy consumption because the network disregards unnecessary computations.
The system is being used by cancer researchers to scan millions of clinical reports to unearth insights on causes and treatments of the disease.
Helping battle cancer is only one of many rewards we may reap from artificial intelligence in the future, as long as the benefits of those algorithms outweigh the costs of using them.
“Making AI more energy-efficient is an overarching objective that spans the fields of algorithms, systems, architecture, circuits, and devices,” Ielmini said.
Image Credit: analogicus from Pixabay
#436578 AI Just Discovered a New Antibiotic to ...
Penicillin, one of the greatest discoveries in the history of medicine, was a product of chance.
After returning from summer vacation in September 1928, bacteriologist Alexander Fleming found a colony of bacteria he’d left in his London lab had sprouted a fungus. Curiously, wherever the bacteria contacted the fungus, their cell walls broke down and they died. Fleming guessed the fungus was secreting something lethal to the bacteria—and the rest is history.
Fleming’s discovery of penicillin and its later isolation, synthesis, and scaling in the 1940s released a flood of antibiotic discoveries in the next few decades. Bacteria and fungi had been waging an ancient war against each other, and the weapons they’d evolved over eons turned out to be humanity’s best defense against bacterial infection and disease.
In recent decades, however, the flood of new antibiotics has slowed to a trickle.
Their development is uneconomical for drug companies, and the low-hanging fruit has long been picked. We’re now facing the emergence of strains of super bacteria resistant to one or more antibiotics and an aging arsenal to fight them with. Left unchallenged, an estimated 700,000 deaths worldwide per year due to drug resistance could rise to as many as 10 million a year by 2050.
Increasingly, scientists warn the tide is turning, and we need a new strategy to keep pace with the remarkably quick and boundlessly creative tactics of bacterial evolution.
But where the golden age of antibiotics was sparked by serendipity, human intelligence, and natural molecular weapons, its sequel may lean on the uncanny eye of artificial intelligence to screen millions of compounds—and even design new ones—in search of the next penicillin.
Hal Discovers a Powerful Antibiotic
In a paper published this week in the journal Cell, MIT researchers took a step in this direction. The team says their machine learning algorithm discovered a powerful new antibiotic.
Named for the AI in 2001: A Space Odyssey, the antibiotic, halicin, successfully wiped out dozens of bacterial strains, including some of the most dangerous drug-resistant bacteria on the World Health Organization’s most wanted list. E. coli also failed to develop resistance to halicin during a month of observation, in stark contrast to the existing antibiotic ciprofloxacin.
“In terms of antibiotic discovery, this is absolutely a first,” Regina Barzilay, a senior author on the study and computer science professor at MIT, told The Guardian.
The algorithm that discovered halicin was trained on the molecular features of 2,500 compounds. Nearly half were FDA-approved drugs, and another 800 naturally occurring. The researchers specifically tuned the algorithm to look for molecules with antibiotic properties but whose structures would differ from existing antibiotics (as halicin’s does). Using another machine learning program, they screened the results for those likely to be safe for humans.
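A heavily simplified sketch of that screening recipe, with invented four-bit “fingerprints” standing in for real molecular features (real pipelines featurize molecules with tools like RDKit), might look like this:

```python
# Toy version of the screen: score a candidate library with a trained model,
# then keep only hits structurally dissimilar to known antibiotics.
# Fingerprints and scores below are invented for illustration.

def tanimoto(a, b):
    """Similarity between two binary fingerprints (0 = disjoint, 1 = identical)."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 0.0

known_antibiotics = [[1, 1, 0, 0], [1, 0, 1, 0]]

def is_novel(fingerprint, threshold=0.4):
    # "Novel" = not too similar to any antibiotic we already have.
    return all(tanimoto(fingerprint, ab) < threshold for ab in known_antibiotics)

# (fingerprint, model's predicted antibiotic probability) for each candidate
scored = [([0, 0, 1, 1], 0.92), ([1, 1, 0, 0], 0.95), ([0, 1, 1, 0], 0.30)]
hits = [fp for fp, p in scored if p > 0.5 and is_novel(fp)]
```

The novelty filter is the crucial part: a molecule that scores well but looks like an existing antibiotic would likely face the same resistance mechanisms.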
Early study suggests halicin attacks the bacteria’s cell membranes, disrupting their ability to produce energy. Protecting the cell membrane from halicin might take more than one or two genetic mutations, which could account for its impressive ability to prevent resistance.
“I think this is one of the more powerful antibiotics that has been discovered to date,” James Collins, an MIT professor of bioengineering and senior author, told The Guardian. “It has remarkable activity against a broad range of antibiotic-resistant pathogens.”
Beyond tests in petri-dish bacterial colonies, the team also tested halicin in mice. The antibiotic cleared up infections of a strain of bacteria resistant to all known antibiotics in a day. The team plans further study in partnership with a pharmaceutical company or nonprofit, and they hope to eventually prove it safe and effective for use in humans.
This last bit remains the trickiest step, given the cost of getting a new drug approved. But Collins hopes algorithms like theirs will help. “We could dramatically reduce the cost required to get through clinical trials,” he told the Financial Times.
A Universe of Drugs Awaits
The bigger story may be what happens next.
How many novel antibiotics await discovery, and how far can AI screening take us? The initial 6,000 compounds scanned by Barzilay and Collins’s team are a drop in the bucket.
They’ve already begun digging deeper by setting the algorithm loose on 100 million molecules from an online library of 1.5 billion compounds called the ZINC15 database. This first search took three days and turned up 23 more candidates that, like halicin, differ structurally from existing antibiotics and may be safe for humans. Two of these—which the team will study further—appear to be especially powerful.
Even more ambitiously, Barzilay hopes the approach can find or even design novel antibiotics that kill bad bacteria with alacrity while sparing the good guys. In this way, a round of antibiotics would cure whatever ails you without taking out your whole gut microbiome in the process.
All this is part of a larger movement to use machine learning algorithms in the long, expensive process of drug discovery. Other players in the area are also training AI on the vast possibility space of drug-like compounds. Last fall, one of the leaders in the area, Insilico, was challenged by a partner to see just how fast their method could do the job. The company turned out a new proof-of-concept drug candidate in only 46 days.
The field is still developing, however, and it has yet to be seen exactly how valuable these approaches will be in practice. Barzilay is optimistic though.
“There is still a question of whether machine-learning tools are really doing something intelligent in healthcare, and how we can develop them to be workhorses in the pharmaceuticals industry,” she said. “This shows how far you can adapt this tool.”
Image Credit: Halicin (top row) prevented the development of antibiotic resistance in E. coli, while ciprofloxacin (bottom row) did not. Collins Lab at MIT
#436546 How AI Helped Predict the Coronavirus ...
Coronavirus has been all over the news for the last couple weeks. A dedicated hospital sprang up in just eight days, the stock market took a hit, Chinese New Year celebrations were spoiled, and travel restrictions are in effect.
But let’s rewind a bit; some crucial events took place before we got to this point.
A little under two weeks before the World Health Organization (WHO) alerted the public to the coronavirus outbreak, a Canadian artificial intelligence company was already sounding the alarm. BlueDot uses AI-powered algorithms to analyze information from a multitude of sources to identify disease outbreaks and forecast how they may spread. On December 31st, 2019, the company sent out a warning to its customers to avoid Wuhan, where the virus originated. The WHO didn’t send out a similar public notice until January 9th, 2020.
The story of BlueDot’s early warning is the latest example of how AI can improve our identification of and response to new virus outbreaks.
Predictions Are Bad News
Global pandemic or relatively minor scare? The jury is still out on the coronavirus. However, the math points to signs that the worst is yet to come.
Scientists are still working to determine how infectious the virus is. Initial analysis suggests it may be somewhere between influenza and polio on the virus reproduction number scale, which indicates how many new cases one case leads to.
UK and US-based researchers have published a preliminary paper estimating that the confirmed infected people in Wuhan only represent five percent of those who are actually infected. If the models are correct, 190,000 people in Wuhan will have been infected by now, major Chinese cities are on the cusp of large-scale outbreaks, and the virus will continue to spread to other countries.
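The compounding at the heart of these models is simple to sketch: cases grow roughly geometrically in the reproduction number with each “generation” of infection. The reproduction number of 2.5 below is purely illustrative, not an estimate for this virus:

```python
# Back-of-envelope outbreak arithmetic: each case infects r0 others per
# generation, so the case count compounds like r0**n from a single seed.
def cases_after(generations, r0, seed=1):
    total, current = seed, seed
    for _ in range(generations):
        current *= r0        # new infections this generation
        total += current     # running total of everyone infected so far
    return total

# With an illustrative r0 of 2.5, ten generations from one case:
outbreak = cases_after(10, 2.5)  # roughly 16,000 cumulative cases
```

This is also why early detection matters so much: shaving off even one or two undetected generations cuts the eventual total by a large multiple.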
Finding the Start
The spread of a given virus is partly linked to how long it remains undetected. Identifying a new virus is the first step towards mobilizing a response and, in time, creating a vaccine. Warning at-risk populations as quickly as possible also helps with limiting the spread.
These are among the reasons why BlueDot’s achievement is important in and of itself. Furthermore, it illustrates how AIs can sift through vast troves of data to identify ongoing virus outbreaks.
BlueDot uses natural language processing and machine learning to scour a variety of information sources, including chomping through 100,000 news reports in 65 languages a day. Data is compared with flight records to help predict virus outbreak patterns. Once the automated data sifting is completed, epidemiologists check that the findings make sense from a scientific standpoint, and reports are sent to BlueDot’s customers, which include governments, businesses, and public health organizations.
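A toy caricature of that pipeline, with invented reports and flight data (BlueDot’s actual system is far more sophisticated), might look something like this:

```python
# Toy outbreak-detection pipeline: flag news snippets mentioning outbreak
# terms, then cross-reference flagged locations against flight routes to
# estimate where the disease could travel next. All data here is invented.
OUTBREAK_TERMS = {"pneumonia", "outbreak", "unknown virus"}

def flag_reports(reports):
    flagged = []
    for location, text in reports:
        lowered = text.lower()
        if any(term in lowered for term in OUTBREAK_TERMS):
            flagged.append(location)
    return flagged

reports = [
    ("Wuhan", "Cluster of pneumonia cases of unknown cause reported"),
    ("Paris", "Local marathon draws record crowds"),
]
flights = {"Wuhan": ["Bangkok", "Tokyo", "Seoul"]}

at_risk = {dest for loc in flag_reports(reports) for dest in flights.get(loc, [])}
```

In the real system, this automated sift is only the first pass; as the article notes, epidemiologists then vet the findings before anything reaches customers.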
AI for Virus Detection and Prevention
Other companies, such as Metabiota, are also using data-driven approaches to track the spread of the likes of the coronavirus.
Researchers have trained neural networks to predict the spread of infectious diseases in real time. Others are using AI algorithms to identify how preventive measures can have the greatest effect. AI is also being used to create new drugs, which we may well see repeated for the coronavirus.
If the work of scientists Barbara Han and David Redding comes to fruition, AI and machine learning may even help us predict where virus outbreaks are likely to strike—before they do.
The Uncertainty Factor
One of AI’s core strengths when working on identifying and limiting the effects of virus outbreaks is its relentless nature. AIs never tire, can sift through enormous amounts of data, and can identify possible correlations and causations that humans can’t.
However, there are limits to AI’s ability to both identify virus outbreaks and predict how they will spread. Perhaps the best-known example comes from the neighboring field of big data analytics. At its launch, Google Flu Trends was heralded as a great leap forward in relation to identifying and estimating the spread of the flu—until it overestimated the 2013 flu season by a whopping 140 percent and was quietly put to rest.
Poor data quality was identified as one of the main reasons Google Flu Trends failed. Unreliable or faulty data can wreak havoc on the prediction power of AIs.
In our increasingly interconnected world, tracking the movements of potentially infected individuals (by car, trains, buses, or planes) is just one vector surrounded by a lot of uncertainty.
The fact that BlueDot was able to correctly identify the coronavirus, in part due to its AI technology, illustrates that smart computer systems can be incredibly useful in helping us navigate these uncertainties.
Importantly, though, this isn’t the same as AI being at a point where it unerringly does so on its own—which is why BlueDot employs human experts to validate the AI’s findings.
Image Credit: Coronavirus molecular illustration, Gianluca Tomasello/Wikimedia Commons