As artificial intelligence advances, we’re encountering the same old questions. How much of what we consider to be fundamentally human can be reduced to an algorithm? Can we create something sufficiently advanced that people can no longer distinguish between the two? This, after all, is the idea behind the Turing Test, which has yet to be passed.
At first glance, you might think music is beyond the realm of algorithms. Birds can sing, and people can compose symphonies. Music is evocative; it makes us feel. Very often, we form intense personal and emotional attachments to music because it reminds us of our shared humanity. We are told that creative jobs are the least likely to be automated. Creativity seems fundamentally human.
But I think above all, we view it as reductionist sacrilege: to dissect beautiful things. “If you try to strangle a skylark / to cut it up, see how it works / you will stop its heart from beating / you will stop its mouth from singing.” A human musician wrote that; a machine might be able to string words together that are happy or sad; it might even be able to conjure up a decent metaphor from the depths of some neural network—but could it understand humanity enough to produce art that speaks to humans?
Then, of course, there’s the other side of the debate. Music, after all, has a deeply mathematical structure; you can train a machine to produce harmonies. “In the teachings of Pythagoras and his followers, music was inseparable from numbers, which were thought to be the key to the whole spiritual and physical universe,” according to Grout in A History of Western Music. You might argue that the process of musical composition cannot be reduced to a simple algorithm, yet musicians have often done so. Mozart, with his “Dice Music,” used dice rolls to decide how to order musical fragments: creativity through an 18th-century random number generator. Algorithmic music goes back a very long way, with the first papers on the subject dating from the 1960s.
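Mozart’s dice game really is simple enough to sketch in a few lines of Python. The fragment labels below are hypothetical stand-ins for his numbered measures (he supplied a lookup table of pre-written bars; the structure here is the same, the contents invented):

```python
import random

# Hypothetical stand-in for Mozart's lookup table: for each of the
# 16 bars of the minuet, the sum of two dice (2-12) selects one of
# his pre-written musical fragments.
measure_table = {
    bar: {roll: f"measure_{bar}_{roll}" for roll in range(2, 13)}
    for bar in range(1, 17)
}

def compose_minuet(seed=None):
    """Roll two dice for each bar and look up the matching fragment."""
    rng = random.Random(seed)
    minuet = []
    for bar in range(1, 17):
        roll = rng.randint(1, 6) + rng.randint(1, 6)  # two six-sided dice
        minuet.append(measure_table[bar][roll])
    return minuet

piece = compose_minuet(seed=42)
print(piece[:3])
```

With 11 possible fragments for each of 16 bars, the table yields on the order of 11^16 distinct minuets, which is why the game felt inexhaustible to 18th-century parlors.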
Then there’s the techno-enthusiast side of the argument. iTunes has 26 million songs, easily more than a century of music. A human could never listen to and learn from them all, but a machine could. It could also memorize every note of Beethoven. Music can be converted into MIDI files, a nice, chewable data format: simple enough that even a character-by-character neural net running on your home computer can generate music from it. (Seriously, even I could get this thing working.)
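The core trick is that a melody becomes a string of characters, and then any character-level sequence model can chew on it. A real char-RNN is a few hundred lines; as a toy stand-in with the same interface, here’s an order-1 Markov chain over a made-up text encoding of MIDI note numbers (both the encoding and the training melody are invented for illustration):

```python
import random
from collections import defaultdict

# Toy text encoding: each MIDI note number (0-127) becomes one
# character, so a melody turns into a string a character-level
# model can digest.
def notes_to_text(notes):
    return "".join(chr(n) for n in notes)

def text_to_notes(text):
    return [ord(c) for c in text]

# A character-level Markov chain: a crude stand-in for a char-RNN,
# but it works the same way at the interface -- read characters,
# learn what tends to follow what, then sample.
def train(text):
    model = defaultdict(list)
    for a, b in zip(text, text[1:]):
        model[a].append(b)
    return model

def generate(model, start, length, seed=None):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break  # dead end: no observed successor
        out.append(rng.choice(choices))
    return "".join(out)

# A made-up training melody (MIDI note numbers, a C-major noodle).
melody = [60, 62, 64, 65, 64, 62, 60, 64, 67, 65, 64, 62, 60]
corpus = notes_to_text(melody)
model = train(corpus)
new_notes = text_to_notes(generate(model, corpus[0], 8, seed=0))
print(new_notes)
```

The Markov chain only remembers one note of context, which is exactly why the results meander; the char-RNNs the article links to exist to stretch that memory across whole phrases.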
Indeed, generating music in the style of Bach has long been a test for AI, and you can see neural networks gradually learn to imitate classical composers while trying to avoid overfitting. When an algorithm overfits, it essentially starts copying the existing music rather than drawing inspiration from it to create something new: a tightrope the best human artists learn to walk. Creativity doesn’t spring from nowhere; even maverick musical geniuses have their influences.
Does a machine have to be truly ‘creative’ to produce something that someone would find valuable? To what extent would listeners’ attitudes change if they thought they were hearing a human vs. an AI composition? This all suggests a musical Turing Test. Of course, it already exists. In fact, it’s run out of Dartmouth, the school that hosted that first, seminal AI summer conference. This year, the contest is bigger than ever: alongside the PoetiX, LimeriX and LyriX competitions for poetry and lyrics, there’s a DigiKidLit competition for children’s literature (although you may have reservations about exposing your children to neural-net generated content… it can get a bit surreal).
There’s also a pair of musical competitions, including one for original compositions in different genres. Key genres and styles are represented by Charlie Parker for jazz and the Bach chorales for classical music. There’s also a free composition, and a contest where a human and an AI try to improvise together—the AI must respond to a human spontaneously, in real time, and in a musically pleasing way. Quite a challenge! In all cases, if any of the generated work is indistinguishable from the work of human performers, the neural net has passed the Turing Test.
Did they? Here’s part of 2017’s winning sonnet from Charese Smiley and Hiroko Bretz:
The large cabin was in total darkness.
Come marching up the eastern hill afar.
When is the clock on the stairs dangerous?
Everything seemed so near and yet so far.
Behind the wall silence alone replied.
Was, then, even the staircase occupied?
Generating the rhymes is easy enough, and the sentence structure only a little trickier, but what’s impressive about this sonnet is that it sticks to a single topic and reads as a more coherent whole. I’d guess they used associated “lexical fields” of similar words to help generate something coherent. In a similar way, most of the more famous examples of AI-generated music still involve some amount of human control, even if it’s editorial; a human will build a song around an AI-generated riff, or select the most convincing Bach chorale from among many different samples.
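That guess is easy to prototype: keep a lexical field of words tied to one topic and pick line endings that rhyme within it. Everything below is invented for illustration (the word list, and the crude suffix-matching rhyme test standing in for a proper phoneme-based one):

```python
def crude_rhyme(a, b, n=2):
    """Very crude rhyme test: same last n letters.
    Real systems compare phonemes, not spelling."""
    return a != b and a[-n:] == b[-n:]

# An invented lexical field for a single topic: a dark old house,
# like the one haunting the winning sonnet.
lexical_field = [
    "darkness", "stillness", "stair", "despair",
    "afar", "far", "wall", "hall",
]

def rhyming_pairs(words):
    """All pairs in the field that pass the rhyme test."""
    return [
        (a, b)
        for i, a in enumerate(words)
        for b in words[i + 1:]
        if crude_rhyme(a, b)
    ]

pairs = rhyming_pairs(lexical_field)
print(pairs)
```

Because every candidate line ending is drawn from the same field, the rhyme scheme and the topic stay coupled for free, which would explain the sonnet’s unusual coherence.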
We are seeing strides forward in the ability of AI to generate human voices and human likenesses. As the latter example shows, in the fake news era people have focused on the dangers of this tech, but might it also be possible to create a virtual performer, trained on a dataset of their original music? Did you ever want to hear another Beatles album, or jam with Miles Davis? Of course, these things are impossible—but could we create a similar experience that people would genuinely value? Even, to the untrained eye, something indistinguishable from the real thing?
And if it did measure up to the real thing, what would this mean? Jaron Lanier is a fascinating technology writer, a critic of strong AI, and a believer in the power of virtual reality to change the world and provide truly meaningful experiences. He’s also a composer and a musical aficionado. He pointed out in a recent interview that translation algorithms, by reducing the amount of work translators are commissioned to do, have, in some sense, profited from stolen expertise. They were trained on huge datasets purloined from human linguists and translators. If you can train an AI on someone’s creative output and it produces new music, who “owns” it?
Although companies that offer AI music tools are starting to proliferate, and some groups will argue that the musical Turing test has been passed already, AI-generated music is hardly racing to the top of the pop charts just yet. Even as the line between human-composed and AI-generated music starts to blur, there’s still a gulf between the average human and musical genius. In the next few years, we’ll see how far the current techniques can take us. It may be the case that there’s something in the skylark’s song that can’t be generated by machines. But maybe not, and then this song might need an extra verse.
Image Credit: d1sk / Shutterstock.com
A Brain-Boosting Prosthesis Moves From Rats to Humans
Robbie Gonzalez | WIRED
“Today, their proof-of-concept prosthetic lives outside a patient’s head and connects to the brain via wires. But in the future, Hampson hopes, surgeons could implant a similar apparatus entirely within a person’s skull, like a neural pacemaker. It could augment all manner of brain functions—not just in victims of dementia and brain injury, but healthy individuals, as well.”
Here’s How the US Needs to Prepare for the Age of Artificial Intelligence
Will Knight | MIT Technology Review
“The Trump administration has abandoned this vision and has no intention of devising its own AI plan, say those working there. They say there is no need for an AI moonshot, and that minimizing government interference is the best way to make sure the technology flourishes… That looks like a huge mistake. If it essentially ignores such a technological transformation, the US might never make the most of an opportunity to reboot its economy and kick-start both wage growth and job creation. Failure to plan could also cause the birthplace of AI to lose ground to international rivals.”
Underwater GPS Inspired by Shrimp Eyes
Jeremy Hsu | IEEE Spectrum
“A few years ago, U.S. and Australian researchers developed a special camera inspired by the eyes of mantis shrimp that can see the polarization patterns of light waves, which resemble those in a rope being waved up and down. That means the bio-inspired camera can detect how light polarization patterns change once the light enters the water and gets deflected or scattered.”
POLITICS & TECHNOLOGY
‘The Business of War’: Google Employees Protest Work for the Pentagon
Scott Shane and Daisuke Wakabayashi | The New York Times
“Thousands of Google employees, including dozens of senior engineers, have signed a letter protesting the company’s involvement in a Pentagon program that uses artificial intelligence to interpret video imagery and could be used to improve the targeting of drone strikes.
The letter, which is circulating inside Google and has garnered more than 3,100 signatures, reflects a culture clash between Silicon Valley and the federal government that is likely to intensify as cutting-edge artificial intelligence is increasingly employed for military purposes. ‘We believe that Google should not be in the business of war,’ says the letter, addressed to Sundar Pichai, the company’s chief executive. It asks that Google pull out of Project Maven, a Pentagon pilot program, and announce a policy that it will not ‘ever build warfare technology.’ (Read the text of the letter.)”
MIT’s New Headset Reads the ‘Words in Your Head’
Brian Heater | TechCrunch
“A team at MIT has been working on just such a device, though the hardware design, admittedly, doesn’t go too far toward removing that whole self-consciousness bit from the equation. AlterEgo is a head-mounted—or, more properly, jaw-mounted—device that’s capable of reading neuromuscular signals through built-in electrodes. The hardware, as MIT puts it, is capable of reading ‘words in your head.’”
Image Credit: christitzeimaging.com / Shutterstock.com
The artificial intelligence (AI) community has a clear message for researchers in South Korea: Don't make killer robots.
In an interview at Singularity University’s Exponential Medicine in San Diego, Neil Jacobstein shared some groundbreaking developments in artificial intelligence for healthcare.
Jacobstein is Singularity University’s faculty chair in AI and robotics, a distinguished visiting scholar at Stanford University’s MediaX Program, and has served as an AI technical consultant on research and development projects for organizations like DARPA, Deloitte, NASA, Boeing, and many more.
According to Jacobstein, 2017 was an exciting year for AI, not only due to how the technology matured, but also thanks to new applications and successes in several health domains.
Among the examples cited in his interview, Jacobstein referenced a 2017 breakthrough at Stanford University where an AI system was used for skin cancer identification. To train the system, the team showed a convolutional neural network images of 129,000 skin lesions. The system was able to differentiate between images displaying malignant melanomas and benign skin lesions. When tested against 21 board-certified dermatologists, the system made comparable diagnostic calls.
Pattern recognition and image detection are just two examples of successful uses of AI in healthcare and medicine—the list goes on.
“We’re seeing AI and machine learning systems performing at narrow tasks remarkably well, and getting breakthrough results both in AI for problem-solving and AI with medicine,” Jacobstein said.
He continued, “We are not seeing super-human terminator systems. But we are seeing more members of the AI community paying attention to managing the downside risk of AI responsibly.”
Watch the full interview to learn more about how AI is advancing in healthcare, medicine, and beyond, and what Jacobstein thinks is coming next.
Image Credit: GrAI / Shutterstock.com