
#436578 AI Just Discovered a New Antibiotic to ...

Penicillin, one of the greatest discoveries in the history of medicine, was a product of chance.

After returning from summer vacation in September 1928, bacteriologist Alexander Fleming found a colony of bacteria he’d left in his London lab had sprouted a fungus. Curiously, wherever the bacteria contacted the fungus, their cell walls broke down and they died. Fleming guessed the fungus was secreting something lethal to the bacteria—and the rest is history.

Fleming’s discovery of penicillin and its later isolation, synthesis, and scaling in the 1940s released a flood of antibiotic discoveries in the next few decades. Bacteria and fungi had been waging an ancient war against each other, and the weapons they’d evolved over eons turned out to be humanity’s best defense against bacterial infection and disease.

In recent decades, however, the flood of new antibiotics has slowed to a trickle.

Their development is uneconomical for drug companies, and the low-hanging fruit has long been picked. We’re now facing the emergence of strains of super bacteria resistant to one or more antibiotics and an aging arsenal to fight them with. Left unchallenged, the estimated 700,000 deaths worldwide each year due to drug resistance could rise to as many as 10 million by 2050.

Increasingly, scientists warn the tide is turning, and we need a new strategy to keep pace with the remarkably quick and boundlessly creative tactics of bacterial evolution.

But where the golden age of antibiotics was sparked by serendipity, human intelligence, and natural molecular weapons, its sequel may lean on the uncanny eye of artificial intelligence to screen millions of compounds—and even design new ones—in search of the next penicillin.

Hal Discovers a Powerful Antibiotic
In a paper published this week in the journal Cell, MIT researchers took a step in this direction. The team says their machine learning algorithm discovered a powerful new antibiotic.

Named for the AI in 2001: A Space Odyssey, the antibiotic, halicin, successfully wiped out dozens of bacterial strains, including some of the most dangerous drug-resistant bacteria on the World Health Organization’s most wanted list. E. coli also failed to develop resistance to halicin during a month of observation, in stark contrast to the existing antibiotic ciprofloxacin.

“In terms of antibiotic discovery, this is absolutely a first,” Regina Barzilay, a senior author on the study and computer science professor at MIT, told The Guardian.

The algorithm that discovered halicin was trained on the molecular features of 2,500 compounds. Nearly half were FDA-approved drugs, and another 800 were naturally occurring compounds. The researchers specifically tuned the algorithm to look for molecules with antibiotic properties but whose structures would differ from existing antibiotics (as halicin’s does). Using another machine learning program, they screened the results for those likely to be safe for humans.
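The study’s actual model was a deep neural network trained on molecular structure, but the general shape of this kind of machine-learning-driven virtual screening can be sketched in a few lines. Below is a minimal, hypothetical illustration using RDKit fingerprints and a scikit-learn classifier; the SMILES strings, labels, and compound library are placeholders, not the study’s data or code.

# Minimal sketch of ML-based virtual screening (illustrative only, not the
# MIT team's model): featurize molecules, train a classifier on known
# antibacterial activity, then rank an unseen library by predicted activity.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles, n_bits=2048):
    """Turn a SMILES string into a Morgan-fingerprint bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Placeholder training set: (SMILES, 1 = inhibits bacterial growth, 0 = does not).
training_data = [
    ("CC(=O)Oc1ccccc1C(=O)O", 0),   # aspirin (label is illustrative)
    ("C1=CC=C(C=C1)O", 1),          # phenol (label is illustrative)
]
X = np.array([featurize(s) for s, _ in training_data])
y = np.array([label for _, label in training_data])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Screen an unseen "library" and rank compounds by predicted probability of activity.
library = ["CCO", "c1ccncc1"]  # placeholder compounds
scores = model.predict_proba(np.array([featurize(s) for s in library]))[:, 1]
for smiles, score in sorted(zip(library, scores), key=lambda t: -t[1]):
    print(f"{smiles}\tpredicted activity: {score:.2f}")

As the article notes, a second model then filtered the top-ranked hits for likely human safety before any compound reached the lab.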

Early work suggests halicin attacks the bacteria’s cell membranes, disrupting their ability to produce energy. Protecting the cell membrane from halicin might take more than one or two genetic mutations, which could account for its impressive ability to prevent resistance.

“I think this is one of the more powerful antibiotics that has been discovered to date,” James Collins, an MIT professor of bioengineering and senior author of the study, told The Guardian. “It has remarkable activity against a broad range of antibiotic-resistant pathogens.”

Beyond tests in petri-dish bacterial colonies, the team also tested halicin in mice. The antibiotic cleared up infections of a strain of bacteria resistant to all known antibiotics in a day. The team plans further study in partnership with a pharmaceutical company or nonprofit, and they hope to eventually prove it safe and effective for use in humans.

This last bit remains the trickiest step, given the cost of getting a new drug approved. But Collins hopes algorithms like theirs will help. “We could dramatically reduce the cost required to get through clinical trials,” he told the Financial Times.

A Universe of Drugs Awaits
The bigger story may be what happens next.

How many novel antibiotics await discovery, and how far can AI screening take us? The initial 6,000 compounds scanned by Barzilay and Collins’s team were a drop in the bucket.

They’ve already begun digging deeper by setting the algorithm loose on 100 million molecules from an online library of 1.5 billion compounds called the ZINC15 database. This first search took three days and turned up 23 more candidates that, like halicin, differ structurally from existing antibiotics and may be safe for humans. Two of these—which the team will study further—appear to be especially powerful.

Even more ambitiously, Barzilay hopes the approach can find or even design novel antibiotics that kill bad bacteria with alacrity while sparing the good guys. In this way, a round of antibiotics would cure whatever ails you without taking out your whole gut microbiome in the process.

All this is part of a larger movement to use machine learning algorithms in the long, expensive process of drug discovery. Other players in the area are also training AI on the vast possibility space of drug-like compounds. Last fall, one of the field’s leaders, Insilico, was challenged by a partner to see just how fast their method could do the job. The company turned out a new proof-of-concept drug candidate in only 46 days.

The field is still developing, however, and it remains to be seen exactly how valuable these approaches will be in practice. Barzilay, though, is optimistic.

“There is still a question of whether machine-learning tools are really doing something intelligent in healthcare, and how we can develop them to be workhorses in the pharmaceuticals industry,” she said. “This shows how far you can adapt this tool.”

Image Credit: Halicin (top row) prevented the development of antibiotic resistance in E. coli, while ciprofloxacin (bottom row) did not. Collins Lab at MIT


#436573 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
The Messy, Secretive Reality Behind OpenAI’s Bid to Save the World
Karen Hao | MIT Technology Review
“The AI moonshot was founded in the spirit of transparency. This is the inside story of how competitive pressure eroded that idealism. …Yet OpenAI is still a bastion of talent and cutting-edge research, filled with people who are sincerely striving to work for the benefit of humanity. In other words, it still has the most important elements, and there’s still time for it to change.”

ROBOTICS
3D Printed Four-Legged Robot Is Ready to Take on Spot—at a Lower Price
Luke Dormehl | Digital Trends
“[Ghost Robotics and Origin] have teamed up to develop a new line of robots, called the Spirit Series, which offer impressively capable four-legged robots, but which can be printed using additive manufacturing at a fraction of the cost and speed of traditional manufacturing approaches.”

PRIVACY
The Studs on This Punk Bracelet Are Actually Microphone-Jamming Ultrasonic Speakers
Andrew Liszewski | Gizmodo
“You can prevent facial recognition cameras from identifying you by wearing face paint, masks, or sometimes just a pair of oversized sunglasses. Keeping conversations private from an ever-growing number of microphone-equipped devices isn’t quite as easy, but researchers have created what could be the first wearable that actually helps increase your privacy.”

TRANSPORTATION
Iron Man Dreams Are Closer to Becoming a Reality Thanks to This New Jetman Dubai Video
Julia Alexander | The Verge
“Tony Stark may have destroyed his Iron Man suits in Iron Man 3 (only to bring out a whole new line in Avengers: Age of Ultron), but Jetman Dubai’s Iron Man-like dreams of autonomous human flight are realer than ever. A new video published by the company shows pilot Vince Reffet using a jet-powered, carbon-fiber suit to launch off the ground and fly 6,000 feet in the air.”

TECHNOLOGY
Wikipedia Is the Last Best Place on the Internet
Richard Cooke | Wired
“More than an encyclopedia, Wikipedia has become a community, a library, a constitution, an experiment, a political manifesto—the closest thing there is to an online public square. It is one of the few remaining places that retains the faintly utopian glow of the early World Wide Web.”

SCIENCE
The Very Large Array Will Search for Evidence of Extraterrestrial Life
Georgina Torbet | Digital Trends
“To begin the project, an interface will be added to the NRAO’s Very Large Array (VLA) in New Mexico to search for events or structures which could indicate the presence of life, such as laser beams, structures built around stars, indications of constructed satellites, or atmospheric chemicals produced by industry.”

SCIENCE FICTION
The Terrible Truth About Star Trek’s Transporters
Cassidy Ward | SyFy Wire
“The fact that you are scanned, deconstructed, and rebuilt almost immediately thereafter only creates the illusion of continuity. In reality, you are killed and then something exactly like you is born, elsewhere. There’s a whole philosophical debate about whether this really matters. If the person constructed on the other end is identical to you, down to the atomic level, is there any measurable difference from it being actually you?”

Image Credit: Samuel Giacomelli / Unsplash


#436559 This Is What an AI Said When Asked to ...

“What’s past is prologue.” So says the famed quote from Shakespeare’s The Tempest, alleging that we can look to what has already happened as an indication of what will happen next.

This idea could be interpreted as being rather bleak; are we doomed to repeat the errors of the past until we correct them? We certainly do need to learn and re-learn life lessons—whether in our work, relationships, finances, health, or other areas—in order to grow as people.

Zooming out, the same phenomenon exists on a much bigger scale—that of our collective human history. We like to think we’re improving as a species, but haven’t yet come close to doing away with the conflicts and injustices that plagued our ancestors.

Zooming back in (and lightening up) a little, what about the short-term future? What might happen over the course of this year, and what information would we use to make educated guesses about it?

The editorial team at The Economist took a unique approach to answering these questions. On top of their own projections for 2020, including possible scenarios in politics, economics, and the continued development of technologies like artificial intelligence, they looked to an AI to make predictions of its own. What it came up with is intriguing, and a little bit uncanny.

[For the full list of the questions and answers, read The Economist article].

An AI That Reads—Then Writes
Almost exactly a year ago, non-profit OpenAI announced it had built a neural network for natural language processing called GPT-2. The announcement was met with some controversy, as it included the caveat that the tool would not be immediately released to the public due to its potential for misuse. It was then released in phases over the course of several months.

GPT-2’s creators raised the bar on quality when training the neural net; rather than haphazardly feeding it low-quality text, they used only web pages linked from Reddit posts that received at least three upvotes (admittedly, this doesn’t guarantee high quality across the board—but it’s something).

The training dataset consisted of 40GB of text. For context, 1GB of text is about 900,000 ASCII pages or 130,000 double-spaced Microsoft Word pages.

The tool has no understanding of the text it’s generating, of course. It uses language patterns and word sequences to draw statistical associations between words and phrases, building a sort of guidebook for itself (not unlike the grammar rules and vocabulary words you might study when trying to learn a foreign language). It then uses that guidebook to answer questions or predict what will come after a particular sequence of words.
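To make that “predict what comes next” behavior concrete, here is a minimal sketch of asking the publicly released GPT-2 weights for their most probable next words after a prompt. It assumes the Hugging Face transformers and PyTorch packages (just one common way to run the model, not what The Economist used), and the prompt text is an arbitrary example.

# Sketch: query GPT-2 for the probability of each candidate next word.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The most important technology of 2020 will be"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, sequence_length, vocab_size)

# Probabilities for the token that would follow the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>12s}   p = {prob.item():.3f}")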

GPT-2’s creators did note that when the model is trained on specifically-selected datasets for narrower applications, its replies become more convincing.

Some Light Editing
Though the Economist article detailing GPT-2’s predictions describes the text the model generated as “unedited,” this isn’t wholly accurate. In a Medium post, deputy editor Tom Standage describes the methodology he used, and it does involve a degree of tweaking.

Standage wrote an introductory paragraph, followed by his opening question (“Greetings, GPT-2. Have you done an interview before?”). He configured the tool to generate five responses to each subsequent question, then selected the best answer, deleting any questions GPT-2 came up with as part of its response.
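That generate-five-then-choose workflow maps directly onto the sampling options most GPT-2 implementations expose. Here is a hedged sketch, again assuming the Hugging Face transformers library rather than whatever tooling Standage actually used, with one of the article’s questions as the example prompt.

# Sketch of the interview method: sample five candidate answers per question,
# then let a human editor pick the most coherent one.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

question = "Q: How do you see the economic outlook for 2020?\nA:"
candidates = generator(
    question,
    max_new_tokens=60,       # keep each answer short
    do_sample=True,          # sample instead of always taking the likeliest word
    temperature=0.9,
    num_return_sequences=5,  # five responses per question, as described above
)

for i, candidate in enumerate(candidates, 1):
    answer = candidate["generated_text"][len(question):].strip()
    print(f"--- candidate {i} ---\n{answer}\n")
# A human then selects the best of the five, as Standage did.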

AI’s View of the World—and Itself
The questions Standage asked ranged from tech-related (the future of AI, technological unemployment, misuse of technology) to political and economic (re-election of Donald Trump, the US-China trade war). Here are a couple of his questions and GPT-2’s answers:

Q: Do fake news stories, generated using AI, pose a threat to democracy? Are we likely to see this tactic being used in the 2020 American presidential elections?
A: Yes, we’ll probably see them. It’s just a matter of when. Fake news stories are generally generated by political campaigns, and have the potential to have a huge impact on the outcome of an election. This is because they spread misinformation, often with no basis in fact.

Q: How do you see the economic outlook for 2020?
A: I think it is very unclear. The outlook for the UK economy is uncertain, but the American economy is very strong. I think we are going to have a lot of turbulence in the world economy.

While some of its answers seemed insightful—when asked about the future of AI, GPT-2 advised that we treat the technology like a tool and “put as much effort into developing [it] as necessary, rather than worrying that it’s going to harm us and destroy our lives”—many of the responses were quite vague. One answer, for example, mentions that there will “probably be major changes in China.” Are we talking political changes? Social? Economic? All of the above?

However, while nothing the AI wrote was particularly illuminating or new, it’s impressive that it was able to expound on the correct topic for each question, and in fully coherent English.

GPT-2 named itself—AI—as the most important disruptive technology to watch in 2020. This is telling; it’s coming from a synthesis of millions of pages of published articles. If GPT-2 thinks AI is more important than, say, gene editing or quantum computing, it could be right. Or it could simply be that AI is at peak hype, and as such more is being written about it than about other technologies.

Equally intriguing was GPT-2’s response when asked whether Donald Trump will win a second term: “I think he will not win a second term. I think he will be defeated in the general election.” Some deeper insight there would be great, but hey—we’ll take it.

Predicting Predictions
Since an AI can read and synthesize vast data sets much faster than we can, it’s being used to predict all kinds of things, from virus outbreaks to crime. But asking it to philosophize on the future based on the (Reddit-curated) past is new, and if you think about it, a pretty fascinating undertaking.

As GPT-2 and tools like it continually improve, we’ll likely see them making more—and better—predictions of the future. In the meantime, let’s hope that the new data these models are trained on—news of what’s happening this week, this month, this year—add to an already-present sense of optimism.

When asked if it had any advice for readers, GPT-2 replied, “The big projects that you think are impossible today are actually possible in the near future.”

Image Credit: Alexas_Fotos from Pixabay


#436488 Tech’s Biggest Leaps From the Last 10 ...

As we enter our third decade in the 21st century, it seems appropriate to reflect on the ways technology developed and note the breakthroughs that were achieved in the last 10 years.

The 2010s saw IBM’s Watson win a game of Jeopardy, ushering in mainstream awareness of machine learning, along with DeepMind’s AlphaGo becoming the world’s Go champion. It was the decade that industrial tools like drones, 3D printers, genetic sequencing, and virtual reality (VR) all became consumer products. And it was a decade in which some alarming trends related to surveillance, targeted misinformation, and deepfakes came online.

For better or worse, the past decade was a breathtaking era in human history in which the idea of exponential growth in information technologies powered by computation became a mainstream concept.

As I did last year, when I covered 2018 alone, I’ve asked a collection of experts across the Singularity University faculty to help frame the biggest breakthroughs and moments that gave shape to the past 10 years. I asked them what, in their opinion, was the most important breakthrough in their respective fields over the past decade.

My own answer to this question, focused in the space of augmented and virtual reality, would be the stunning announcement in March of 2014 that Facebook acquired Oculus VR for $2 billion. Although VR technology had been around for a while, it was at this precise moment that VR arrived as a consumer technology platform. Facebook, largely fueled by the singular interest of CEO Mark Zuckerberg, has funded the development of this industry, keeping alive the hope that consumer VR can become a sustainable business. In the meantime, VR has continued to grow in sophistication and usefulness, though it has yet to truly take off as a mainstream concept. That will hopefully be a development for the 2020s.

Below is a decade in review across the technology areas that are giving shape to our modern world, as described by the SU community of experts.

Digital Biology
Dr. Tiffany Vora | Faculty Director and Vice Chair, Digital Biology and Medicine, Singularity University

In my mind, this decade of astounding breakthroughs in the life sciences and medicine rests on the achievement of the $1,000 human genome in 2016. More-than-exponentially falling costs of DNA sequencing have driven advances in medicine, agriculture, ecology, genome editing, synthetic biology, the battle against climate change, and our fundamental understanding of life and its breathtaking connections. The “digital” revolution in DNA constituted an important model for harnessing other types of biological information, from personalized bio data to massive datasets spanning populations and species.

Crucially, by aggressively driving down the cost of such analyses, researchers and entrepreneurs democratized access to the source code of life—with attendant financial, cultural, and ethical consequences. Exciting, but take heed: Veritas Genetics spearheaded a $600 genome in 2019, only to have to shutter its US operations after its funding got tangled up in the trade war with China. Stay tuned through the early 2020s to see the pricing of DNA sequencing fall even further … and to experience the many ways that cheaper, faster harvesting of biological data will enrich your daily life.

Cryptocurrency
Alex Gladstein | Chief Strategy Officer, Human Rights Foundation

The past decade has seen Bitcoin go from just an idea on an obscure online message board to a global financial network carrying more than 100 billion dollars in value. And we’re just getting started. One recent defining moment in the cryptocurrency space has been a stunning trend underway in Venezuela, where today, the daily dollar-denominated value of Bitcoin traded now far exceeds the daily dollar-denominated value traded on the Caracas Stock Exchange. It’s just one country, but it’s a significant country, and a paradigm shift.

Governments and corporations are following Bitcoin’s success too, and are looking to launch their own digital currencies. China will launch its “DC/EP” project in the coming months, and Facebook is trying to kickstart its Libra project. There are technical and regulatory uncertainties for both, but one thing is for certain: the era of digital currency has arrived.

Business Strategy and Entrepreneurship
Pascal Finette | Chair, Entrepreneurship and Open Innovation, Singularity University

For me, without a doubt, the most interesting and quite possibly ground-shifting development in the fields of entrepreneurship and corporate innovation in the last ten years is the rapid maturing of customer-driven product development frameworks such as Lean Startup, and its subsequent adoption by corporates for their own innovation purposes.

Tools and frameworks like the Business Model Canvas, agile (software) development and the aforementioned Lean Startup methodology fundamentally shifted the way we think and go about building products, services, and companies, with many of these tools bursting onto the startup scene in the late 2000s and early 2010s.

As these tools matured, they were eagerly adopted not only by startups around the world but also by incumbent companies looking to increase their own innovation velocity and success.

Energy
Ramez Naam | Co-Chair, Energy and Environment, Singularity University

The 2010s were the decade that saw clean electricity, energy storage, and electric vehicles break through price and performance barriers around the world. Solar, wind, batteries, and EVs started this decade as technologies that had to be subsidized. That was the first phase of their existence. Now they’re entering their third, most disruptive phase, where shifting to clean energy and mobility is cheaper than continuing to use existing coal, gas, or oil infrastructure.

Consider that at the start of 2010, there was no place on earth where building new solar or wind was cheaper than building new coal or gas power generation. By 2015, in some of the sunniest and windiest places on earth, solar and wind had entered their second phase, where they were cost-competitive for new power. And then, in 2018 and 2019, we started to see the edge of the third phase, as building new solar and wind, in some parts of the world, was cheaper than operating existing coal or gas power plants.

Food Technology
Liz Specht, PhD | Associate Director of Science & Technology, The Good Food Institute

The arrival of mainstream plant-based meat is easily the food tech advance of the decade. Meat analogs have, of course, been around forever. But only in the last decade have companies like Beyond Meat and Impossible Foods decided to cut animals out of the process and build no-compromise meat directly from plants.

Plant-based meat is already transforming the fast-food industry. For example, the introduction of the Impossible Whopper led Burger King to their most profitable quarter in many years. But the global food industry as a whole is shifting as well. Tyson, JBS, Nestle, Cargill, and many others are all embracing plant-based meat.

Augmented and Virtual Reality
Jody Medich | CEO, Superhuman-x

The breakthrough moment for augmented and virtual reality came in 2013 when Palmer Luckey took apart an Android smartphone and added optical lenses to make the first version of the Oculus Rift. Prior to that moment, we struggled with miniaturizing the components needed to develop low-latency head-worn devices. But thanks to the smartphone race started in 2007 with the iPhone, we finally had a suite of sensors, chips, displays, and computing power small enough to put on the head.

What will the next 10 years bring? Look for AR/VR to explode in a big way. We are right on the cusp of that tipping point when the tech is finally “good enough” for our linear expectations. Given all it can do today, we can’t even picture what’s possible. Just as today we can’t function without our phones, by 2029 we’ll feel lost without some AR/VR product. It will be the way we interact with computing, smart objects, and AI. Tim Cook, Apple CEO, predicts it will replace all of today’s computing devices. I can’t wait.

Philosophy of Technology
Alix Rübsaam | Faculty Fellow, Singularity University, Philosophy of Technology/Ethics of AI

The last decade has seen a significant shift in our general attitude towards the algorithms that we now know dictate much of our surroundings. Looking back at the beginning of the decade, it seems we were blissfully unaware of how the data we freely and willingly surrendered would feed the algorithms that would come to shape every aspect of our daily lives: the news we consume, the products we purchase, the opinions we hold, etc.

If I were to isolate a single publication that contributed greatly to the shift in public discourse on algorithms, it would have to be Cathy O’Neil’s Weapons of Math Destruction from 2016. It remains a comprehensive, readable, and highly informative insight into how algorithms dictate our finances, our jobs, where we go to school, or if we can get health insurance. Its publication represents a pivotal moment when the general public started to question whether we should be OK with outsourcing decision making to these opaque systems.

The ubiquity of ethical guidelines for AI and algorithms published just in the last year (perhaps most comprehensively by the AI Now Institute) fully demonstrates the shift in public opinion of this decade.

Data Science
Ola Kowalewski | Faculty Fellow, Singularity University, Data Innovation

In the last decade we entered the era of internet and smartphone ubiquity. The number of internet users doubled, with nearly 60 percent of the global population now connected online and over 35 percent owning a smartphone. With billions of people in a state of constant connectedness, and therefore constant surveillance, the companies that built the tech infrastructure and information pipelines have come to dominate the global economy. This shift from tech companies being the underdogs to arguably the world’s major powers sets the landscape we enter for the next decade.

Global Grand Challenges
Darlene Damm | Vice Chair, Faculty, Global Grand Challenges, Singularity University

The biggest breakthrough over the last decade in social impact and technology is that the social impact sector switched from seeing technology as something problematic to avoid, to one of the most effective ways to create social change. We now see people using exponential technologies to solve all sorts of social challenges in areas ranging from disaster response to hunger to shelter.

The world’s leading social organizations, such as UNICEF and the World Food Programme, have launched their own venture funds and accelerators, and the United Nations recently declared that digitization is revolutionizing global development.

Digital Biology
Raymond McCauley | Chair, Digital Biology, Singularity University, Co-Founder & Chief Architect, BioCurious; Principal, Exponential Biosciences

CRISPR is bringing about a revolution in genetic engineering. It’s obvious, and it’s huge. What may not be so obvious is the widespread adoption of genetic testing. And this may have an even longer-lasting effect. It’s used to screen newborns, to solve medical mysteries, and to catch serial killers. Thanks to holiday ads from 23andMe and Ancestry.com, it’s everywhere. Testing your DNA is now a common over-the-counter product. People are using it to set their diet, to pick drugs, and even for dating (or at least picking healthy mates).

And we’re just in the early stages. Further down the line, doing large-scale studies on more people, with more data, will lead to the use of polygenic risk scores to help us rank our genetic potential for everything from getting cancer to being a genius. Can you imagine what it would be like for parents to pick new babies, GATTACA-style, to get the smartest kids? You don’t have to; it’s already happening.

Artificial Intelligence
Neil Jacobstein | Chair, Artificial Intelligence and Robotics, Singularity University

The convergence of exponentially improved computing power, the deep learning algorithm, and access to massive data resulted in a series of AI breakthroughs over the past decade. These included vastly improved accuracy in identifying images, making self-driving cars practical, beating several world champions in Go, and identifying gender, smoking status, and age from retinal fundus photographs.

Combined, these breakthroughs convinced researchers and investors that after 50+ years of research and development, AI was ready for prime-time applications. Now, virtually every field of human endeavor is being revolutionized by machine learning. We still have a long way to go to achieve human-level intelligence and beyond, but the pace of worldwide improvement is blistering.

Hod Lipson | Professor of Engineering and Data Science, Columbia University

The biggest moment in AI in the past decade (and in its entire history, in my humble opinion) was midnight, Pacific time, September 30, 2012: the moment when machines finally opened their eyes. It was the moment when deep learning took off, breaking stagnant decades of machine blindness, when AI couldn’t reliably tell even a cat from a dog. That seemingly trivial accomplishment—a task any one-year-old child can do—has had a ripple effect on AI applications from driverless cars to health diagnostics. And this is just the beginning of what is sure to be a Cambrian explosion of AI.

Neuroscience
Divya Chander | Chair, Neuroscience, Singularity University

If the 2000s were the decade of brain mapping, then the 2010s were the decade of brain writing. Optogenetics, a technique for precisely mapping and controlling neurons and neural circuits using genetically-directed light, saw incredible growth in the 2010s.

Also in the last 10 years, neuromodulation, or the ability to rewire the brain using both invasive and non-invasive interfaces and energy, has exploded in use and form. For instance, the BrainGate consortium showed us how electrode arrays implanted in the motor cortex could let paralyzed people direct a robotic arm with their thoughts. These technologies, alone or in combination with robotics, exoskeletons, and flexible, implantable electronics, also make possible a future of human augmentation.

Image Credit: Jorge Guillen from Pixabay


#436484 If Machines Want to Make Art, Will ...

Assuming that the emergence of consciousness in artificial minds is possible, those minds will feel the urge to create art. But will we be able to understand it? To answer this question, we need to consider two subquestions: when does the machine become an author of an artwork? And how can we form an understanding of the art that it makes?

Empathy, we argue, is the force behind our capacity to understand works of art. Think of what happens when you are confronted with an artwork. We maintain that, to understand the piece, you use your own conscious experience to ask what could possibly motivate you to make such an artwork yourself—and then you use that first-person perspective to try to come to a plausible explanation that allows you to relate to the artwork. Your interpretation of the work will be personal and could differ significantly from the artist’s own reasons, but if we share sufficient experiences and cultural references, it might be a plausible one, even for the artist. This is why we can relate so differently to a work of art after learning that it is a forgery or imitation: the artist’s intent to deceive or imitate is very different from the attempt to express something original. Gathering contextual information before jumping to conclusions about other people’s actions—in art, as in life—can enable us to relate better to their intentions.

But the artist and you share something far more important than cultural references: you share a similar kind of body and, with it, a similar kind of embodied perspective. Our subjective human experience stems, among many other things, from being born and slowly educated within a society of fellow humans, from fighting the inevitability of our own death, from cherishing memories, from the lonely curiosity of our own mind, from the omnipresence of the needs and quirks of our biological body, and from the way it dictates the space- and time-scales we can grasp. All conscious machines will have embodied experiences of their own, but in bodies that will be entirely alien to us.

We are able to empathize with nonhuman characters or intelligent machines in human-made fiction because they have been conceived by other human beings from the only subjective perspective accessible to us: “What would it be like for a human to behave as x?” In order to understand machinic art as such—and assuming that we stand a chance of even recognizing it in the first place—we would need a way to conceive a first-person experience of what it is like to be that machine. That is something we cannot do even for beings that are much closer to us. It might very well happen that we understand some actions or artifacts created by machines of their own volition as art, but in doing so we will inevitably anthropomorphize the machine’s intentions. Art made by a machine can be meaningfully interpreted in a way that is plausible only from the perspective of that machine, and any coherent anthropomorphized interpretation will be implausibly alien from the machine perspective. As such, it will be a misinterpretation of the artwork.

But what if we grant the machine privileged access to our ways of reasoning, to the peculiarities of our perception apparatus, to endless examples of human culture? Wouldn’t that enable the machine to make art that a human could understand? Our answer is yes, but this would also make the artworks human—not authentically machinic. All examples so far of “art made by machines” are actually just straightforward examples of human art made with computers, with the artists being the computer programmers. It might seem like a strange claim: how can the programmers be the authors of the artwork if, most of the time, they can’t control—or even anticipate—the actual materializations of the artwork? It turns out that this is a long-standing artistic practice.

Suppose that your local orchestra is playing Beethoven’s Symphony No 7 (1812). Even though Beethoven will not be directly responsible for any of the sounds produced there, you would still say that you are listening to Beethoven. Your experience might depend considerably on the interpretation of the performers, the acoustics of the room, the behavior of fellow audience members or your state of mind. Those and other aspects are the result of choices made by specific individuals or of accidents happening to them. But the author of the music? Ludwig van Beethoven. Let’s say that, as a somewhat odd choice for the program, John Cage’s Imaginary Landscape No 4 (March No 2) (1951) is also played, with 24 performers controlling 12 radios according to a musical score. In this case, the responsibility for the sounds being heard should be attributed to unsuspecting radio hosts, or even to electromagnetic fields. Yet, the shaping of sounds over time—the composition—should be credited to Cage. Each performance of this piece will vary immensely in its sonic materialization, but it will always be a performance of Imaginary Landscape No 4.

Why should we change these principles when artists use computers if, in these respects at least, computer art does not bring anything new to the table? The (human) artists might not be in direct control of the final materializations, or even be able to predict them, but, despite that, they are the authors of the work. Various materializations of the same idea—in this case formalized as an algorithm—are instantiations of the same work manifesting different contextual conditions. In fact, a common use of computation in the arts is the production of variations of a process, and artists make extensive use of systems that are sensitive to initial conditions, external inputs, or pseudo-randomness to deliberately avoid repetition of outputs. Having a computer execute a procedure to build an artwork, even if using pseudo-random processes or machine-learning algorithms, is no different from throwing dice to arrange a piece of music, or pursuing innumerable variations of the same formula. After all, the idea of machines that make art has an artistic tradition that long predates the current trend of artworks made by artificial intelligence.
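The point about pseudo-randomness yielding various materializations of the same idea is easy to make concrete. In the toy sketch below (purely illustrative, not tied to any artist’s actual system), the procedure plays the role of the score, and each random seed stands in for the performers, the radios, or the dice: the work stays the same, while the instantiations differ.

# Toy generative "artwork": one fixed procedure, many materializations.
import random

def artwork(seed, steps=12):
    """A fixed compositional rule: a random walk whose turns depend on the seed."""
    rng = random.Random(seed)
    x, y, path = 0, 0, []
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

# Same work (the procedure), different instantiations (the outputs).
for seed in (1, 2, 3):
    print(f"seed {seed}: {artwork(seed)}")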

Machinic art is a term that we believe should be reserved for art made by an artificial mind’s own volition, not for that based on (or directed towards) an anthropocentric view of art. From a human point of view, machinic artworks will still be procedural, algorithmic, and computational. They will be generative, because they will be autonomous from a human artist. And they might be interactive, with humans or other systems. But they will not be the result of a human deferring decisions to a machine, because the first of those—the decision to make art—needs to be the result of a machine’s volition, intentions, and decisions. Only then will we no longer have human art made with computers, but proper machinic art.

The problem is not whether machines will or will not develop a sense of self that leads to an eagerness to create art. The problem is that if—or when—they do, they will have such a different Umwelt that we will be completely unable to relate to it from our own subjective, embodied perspective. Machinic art will always lie beyond our ability to understand it because the boundaries of our comprehension—in art, as in life—are those of the human experience.

This article was originally published at Aeon and has been republished under Creative Commons.

Image Credit: Rene Böhmer / Unsplash
