AI Just Discovered a New Antibiotic to ...
Penicillin, one of the greatest discoveries in the history of medicine, was a product of chance.
After returning from summer vacation in September 1928, bacteriologist Alexander Fleming found a colony of bacteria he’d left in his London lab had sprouted a fungus. Curiously, wherever the bacteria contacted the fungus, their cell walls broke down and they died. Fleming guessed the fungus was secreting something lethal to the bacteria—and the rest is history.
Fleming’s discovery of penicillin and its later isolation, synthesis, and scaling in the 1940s released a flood of antibiotic discoveries in the next few decades. Bacteria and fungi had been waging an ancient war against each other, and the weapons they’d evolved over eons turned out to be humanity’s best defense against bacterial infection and disease.
In recent decades, however, the flood of new antibiotics has slowed to a trickle.
Their development is uneconomical for drug companies, and the low-hanging fruit has long been picked. We’re now facing the emergence of strains of super bacteria resistant to one or more antibiotics and an aging arsenal to fight them with. Left unchallenged, an estimated 700,000 deaths worldwide due to drug resistance could rise to as many as 10 million a year by 2050.
Increasingly, scientists warn the tide is turning, and we need a new strategy to keep pace with the remarkably quick and boundlessly creative tactics of bacterial evolution.
But where the golden age of antibiotics was sparked by serendipity, human intelligence, and natural molecular weapons, its sequel may lean on the uncanny eye of artificial intelligence to screen millions of compounds—and even design new ones—in search of the next penicillin.
Hal Discovers a Powerful Antibiotic
In a paper published this week in the journal Cell, MIT researchers took a step in this direction. The team says their machine learning algorithm discovered a powerful new antibiotic.
Named for the AI in 2001: A Space Odyssey, the antibiotic, halicin, successfully wiped out dozens of bacterial strains, including some of the most dangerous drug-resistant bacteria on the World Health Organization’s most wanted list. E. coli also failed to develop resistance to halicin during a month of observation, in stark contrast to the existing antibiotic ciprofloxacin.
“In terms of antibiotic discovery, this is absolutely a first,” Regina Barzilay, a senior author on the study and computer science professor at MIT, told The Guardian.
The algorithm that discovered halicin was trained on the molecular features of 2,500 compounds. Nearly half were FDA-approved drugs, and another 800 were naturally occurring molecules. The researchers specifically tuned the algorithm to look for molecules with antibiotic properties but whose structures would differ from existing antibiotics (as halicin’s does). Using another machine learning program, they screened the results for those likely to be safe for humans.
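The overall pipeline described here—learn a mapping from molecular structure to antibiotic activity, then use it to rank a much larger library of candidates—can be sketched roughly as follows. Everything in this snippet is illustrative: the MIT team used a deep neural network operating on molecular graphs, while this stand-in uses synthetic random "fingerprint" bit vectors and an off-the-shelf random forest, just to show the train-then-screen shape of the approach.

```python
# Illustrative sketch of train-then-screen drug discovery.
# The data is synthetic; a real pipeline would compute structural
# fingerprints or graph features from actual molecules.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Pretend each molecule is a 128-bit structural fingerprint.
n_train, n_bits = 2500, 128
X_train = rng.integers(0, 2, size=(n_train, n_bits))
# Label 1 = the compound inhibited bacterial growth in the assay.
y_train = rng.integers(0, 2, size=n_train)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a much larger unseen library and keep the top-ranked
# candidates for lab follow-up, analogous to the ZINC15 screen.
X_library = rng.integers(0, 2, size=(10_000, n_bits))
scores = model.predict_proba(X_library)[:, 1]
top_candidates = np.argsort(scores)[::-1][:23]
```

The value of the approach is in the asymmetry: training on a few thousand lab-tested compounds is expensive, but scoring millions of virtual compounds afterward is cheap, so the model acts as a filter that decides which handful of molecules are worth synthesizing and testing.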
Early studies suggest halicin attacks the bacteria’s cell membranes, disrupting their ability to produce energy. Protecting the cell membrane from halicin might take more than one or two genetic mutations, which could account for its impressive ability to prevent resistance.
“I think this is one of the more powerful antibiotics that has been discovered to date,” James Collins, an MIT professor of bioengineering and senior author, told The Guardian. “It has remarkable activity against a broad range of antibiotic-resistant pathogens.”
Beyond tests in petri-dish bacterial colonies, the team also tested halicin in mice. The antibiotic cleared up infections of a strain of bacteria resistant to all known antibiotics in a day. The team plans further study in partnership with a pharmaceutical company or nonprofit, and they hope to eventually prove it safe and effective for use in humans.
This last bit remains the trickiest step, given the cost of getting a new drug approved. But Collins hopes algorithms like theirs will help. “We could dramatically reduce the cost required to get through clinical trials,” he told the Financial Times.
A Universe of Drugs Awaits
The bigger story may be what happens next.
How many novel antibiotics await discovery, and how far can AI screening take us? The initial set of 6,000 compounds scanned by Barzilay and Collins’s team is a drop in the bucket.
They’ve already begun digging deeper by setting the algorithm loose on 100 million molecules from an online library of 1.5 billion compounds called the ZINC15 database. This first search took three days and turned up 23 more candidates that, like halicin, differ structurally from existing antibiotics and may be safe for humans. Two of these—which the team will study further—appear to be especially powerful.
Even more ambitiously, Barzilay hopes the approach can find or even design novel antibiotics that kill bad bacteria with alacrity while sparing the good guys. In this way, a round of antibiotics would cure whatever ails you without taking out your whole gut microbiome in the process.
All this is part of a larger movement to use machine learning algorithms in the long, expensive process of drug discovery. Other players in the area are also training AI on the vast possibility space of drug-like compounds. Last fall, one of the leaders in the area, Insilico, was challenged by a partner to see just how fast their method could do the job. The company turned out a new proof-of-concept drug candidate in only 46 days.
The field is still developing, however, and it remains to be seen exactly how valuable these approaches will be in practice. Barzilay is optimistic, though.
“There is still a question of whether machine-learning tools are really doing something intelligent in healthcare, and how we can develop them to be workhorses in the pharmaceuticals industry,” she said. “This shows how far you can adapt this tool.”
Image Credit: Halicin (top row) prevented the development of antibiotic resistance in E. coli, while ciprofloxacin (bottom row) did not. Collins Lab at MIT
If Machines Want to Make Art, Will ...
Assuming that the emergence of consciousness in artificial minds is possible, those minds will feel the urge to create art. But will we be able to understand it? To answer this question, we need to consider two subquestions: when does the machine become an author of an artwork? And how can we form an understanding of the art that it makes?
Empathy, we argue, is the force behind our capacity to understand works of art. Think of what happens when you are confronted with an artwork. We maintain that, to understand the piece, you use your own conscious experience to ask what could possibly motivate you to make such an artwork yourself—and then you use that first-person perspective to try to come to a plausible explanation that allows you to relate to the artwork. Your interpretation of the work will be personal and could differ significantly from the artist’s own reasons, but if we share sufficient experiences and cultural references, it might be a plausible one, even for the artist. This is why we can relate so differently to a work of art after learning that it is a forgery or imitation: the artist’s intent to deceive or imitate is very different from the attempt to express something original. Gathering contextual information before jumping to conclusions about other people’s actions—in art, as in life—can enable us to relate better to their intentions.
But the artist and you share something far more important than cultural references: you share a similar kind of body and, with it, a similar kind of embodied perspective. Our subjective human experience stems, among many other things, from being born and slowly educated within a society of fellow humans, from fighting the inevitability of our own death, from cherishing memories, from the lonely curiosity of our own mind, from the omnipresence of the needs and quirks of our biological body, and from the way it dictates the space- and time-scales we can grasp. All conscious machines will have embodied experiences of their own, but in bodies that will be entirely alien to us.
We are able to empathize with nonhuman characters or intelligent machines in human-made fiction because they have been conceived by other human beings from the only subjective perspective accessible to us: “What would it be like for a human to behave as x?” In order to understand machinic art as such—and assuming that we stand a chance of even recognizing it in the first place—we would need a way to conceive a first-person experience of what it is like to be that machine. That is something we cannot do even for beings that are much closer to us. It might very well happen that we understand some actions or artifacts created by machines of their own volition as art, but in doing so we will inevitably anthropomorphize the machine’s intentions. Art made by a machine can be meaningfully interpreted in a way that is plausible only from the perspective of that machine, and any coherent anthropomorphized interpretation will be implausibly alien from the machine perspective. As such, it will be a misinterpretation of the artwork.
But what if we grant the machine privileged access to our ways of reasoning, to the peculiarities of our perception apparatus, to endless examples of human culture? Wouldn’t that enable the machine to make art that a human could understand? Our answer is yes, but this would also make the artworks human—not authentically machinic. All examples so far of “art made by machines” are actually just straightforward examples of human art made with computers, with the artists being the computer programmers. It might seem like a strange claim: how can the programmers be the authors of the artwork if, most of the time, they can’t control—or even anticipate—the actual materializations of the artwork? It turns out that this is a long-standing artistic practice.
Suppose that your local orchestra is playing Beethoven’s Symphony No 7 (1812). Even though Beethoven will not be directly responsible for any of the sounds produced there, you would still say that you are listening to Beethoven. Your experience might depend considerably on the interpretation of the performers, the acoustics of the room, the behavior of fellow audience members or your state of mind. Those and other aspects are the result of choices made by specific individuals or of accidents happening to them. But the author of the music? Ludwig van Beethoven. Let’s say that, as a somewhat odd choice for the program, John Cage’s Imaginary Landscape No 4 (March No 2) (1951) is also played, with 24 performers controlling 12 radios according to a musical score. In this case, the responsibility for the sounds being heard should be attributed to unsuspecting radio hosts, or even to electromagnetic fields. Yet, the shaping of sounds over time—the composition—should be credited to Cage. Each performance of this piece will vary immensely in its sonic materialization, but it will always be a performance of Imaginary Landscape No 4.
Why should we change these principles when artists use computers if, in these respects at least, computer art does not bring anything new to the table? The (human) artists might not be in direct control of the final materializations, or even be able to predict them, but, despite that, they are the authors of the work. Various materializations of the same idea—in this case formalized as an algorithm—are instantiations of the same work manifesting different contextual conditions. In fact, a common use of computation in the arts is the production of variations of a process, and artists make extensive use of systems that are sensitive to initial conditions, external inputs, or pseudo-randomness to deliberately avoid repetition of outputs. Having a computer execute a procedure to build an artwork, even if using pseudo-random processes or machine-learning algorithms, is no different than throwing dice to arrange a piece of music, or pursuing innumerable variations of the same formula. After all, the idea of machines that make art has an artistic tradition that long predates the current trend of artworks made by artificial intelligence.
Machinic art is a term that we believe should be reserved for art made by an artificial mind’s own volition, not for that based on (or directed towards) an anthropocentric view of art. From a human point of view, machinic artworks will still be procedural, algorithmic, and computational. They will be generative, because they will be autonomous from a human artist. And they might be interactive, with humans or other systems. But they will not be the result of a human deferring decisions to a machine, because the first of those—the decision to make art—needs to be the result of a machine’s volition, intentions, and decisions. Only then will we no longer have human art made with computers, but proper machinic art.
The problem is not whether machines will or will not develop a sense of self that leads to an eagerness to create art. The problem is that if—or when—they do, they will have such a different Umwelt that we will be completely unable to relate to it from our own subjective, embodied perspective. Machinic art will always lie beyond our ability to understand it because the boundaries of our comprehension—in art, as in life—are those of the human experience.
This article was originally published at Aeon and has been republished under Creative Commons.
Image Credit: Rene Böhmer / Unsplash