#432487 Can We Make a Musical Turing Test?

As artificial intelligence advances, we’re encountering the same old questions. How much of what we consider to be fundamentally human can be reduced to an algorithm? Can we create something sufficiently advanced that people can no longer distinguish between the two? This, after all, is the idea behind the Turing Test, which has yet to be passed.

At first glance, you might think music is beyond the realm of algorithms. Birds can sing, and people can compose symphonies. Music is evocative; it makes us feel. Very often, our intense personal and emotional attachments to music are because it reminds us of our shared humanity. We are told that creative jobs are the least likely to be automated. Creativity seems fundamentally human.

But above all, I think we view it as reductionist sacrilege to dissect beautiful things. “If you try to strangle a skylark / to cut it up, see how it works / you will stop its heart from beating / you will stop its mouth from singing.” A human musician wrote that. A machine might be able to string words together that are happy or sad; it might even conjure up a decent metaphor from the depths of some neural network. But could it understand humanity well enough to produce art that speaks to humans?

Then, of course, there’s the other side of the debate. Music, after all, has a deeply mathematical structure; you can train a machine to produce harmonies. “In the teachings of Pythagoras and his followers, music was inseparable from numbers, which were thought to be the key to the whole spiritual and physical universe,” according to Grout in A History of Western Music. You might argue that the process of musical composition cannot be reduced to a simple algorithm, yet musicians have often done exactly that. Mozart, with his “Dice Music,” used the roll of a die to decide how to order musical fragments: creativity through an 18th-century random number generator. Algorithmic music goes back a very long way, with the first papers on the subject dating from the 1960s.
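The mechanism is simple enough to sketch in a few lines of Python. Here is a minimal, hypothetical version (the fragments are placeholders, not Mozart’s actual measures):

```python
import random

# A minimal sketch of "Dice Music": the composer guarantees that every
# fragment fits its slot; the dice supply the novelty. The fragments here
# are hypothetical placeholders, not Mozart's actual measures.
fragments = [
    ["C4 E4 G4", "D4 F4 A4"],   # candidate measures for slot 1
    ["E4 G4 B4", "F4 A4 C5"],   # candidate measures for slot 2
    ["G4 B4 D5", "C4 G4 C5"],   # candidate measures for slot 3
]

def compose():
    """Roll for each slot and stitch the chosen measures together."""
    return " | ".join(random.choice(candidates) for candidates in fragments)

print(compose())  # a different, but always well-formed, piece each run
```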

Then there’s the techno-enthusiast side of the argument. iTunes has 26 million songs, easily more than a century of music. A human could never listen to and learn from them all, but a machine could. It could also memorize every note of Beethoven. Music can be converted into MIDI files, a nice, chewable data format; even a character-by-character neural net you can run on your own computer can use it to generate music. (Seriously, even I could get this thing working.)
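To make that concrete, here is a minimal sketch, assuming a hypothetical file song.mid and the mido library, of how a melody becomes character-level training data:

```python
import mido  # third-party MIDI library: pip install mido

# A minimal sketch, assuming a hypothetical file "song.mid": flatten the
# note-on events of a MIDI file into a plain string of pitch numbers, so a
# character-by-character model can treat a melody the way it treats text.
def midi_to_tokens(path):
    tokens = []
    for msg in mido.MidiFile(path):  # yields messages in playback order
        if msg.type == "note_on" and msg.velocity > 0:
            tokens.append(str(msg.note))  # MIDI pitch number, e.g. 60 is C4
    return " ".join(tokens)

print(midi_to_tokens("song.mid"))  # e.g. "60 62 64 65 ..." for a char-RNN
```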

Indeed, generating music in the style of Bach has long been a test for AI, and you can watch neural networks gradually learn to imitate classical composers while trying to avoid overfitting. When an algorithm overfits, it essentially starts copying the existing music rather than drawing inspiration from it to create something new: a tightrope the best human artists learn to walk. Creativity doesn’t spring from nowhere; even maverick musical geniuses have their influences.
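The tightrope is easiest to see with the music stripped away. A self-contained toy: fit polynomials of increasing degree to noisy samples of a sine wave, and watch the held-out error fall and then climb again once the model starts memorizing noise instead of structure.

```python
import numpy as np

# Overfitting in miniature: a model flexible enough to memorize its
# training data stops generalizing to anything it hasn't seen.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)  # noisy "training set"
x_val = np.linspace(0, 1, 200)
y_val = np.sin(2 * np.pi * x_val)                        # clean held-out truth

for degree in (1, 3, 9, 15):
    p = np.polynomial.Polynomial.fit(x, y, degree)       # well-conditioned fit
    val_error = np.mean((p(x_val) - y_val) ** 2)
    print(f"degree {degree:2d}: held-out error {val_error:.3f}")
```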

Does a machine have to be truly ‘creative’ to produce something that someone would find valuable? To what extent would listeners’ attitudes change if they thought they were hearing a human vs. an AI composition? This all suggests a musical Turing Test. Of course, it already exists. In fact, it’s run out of Dartmouth, the school that hosted that first, seminal AI summer conference. This year, the contest is bigger than ever: alongside the PoetiX, LimeriX and LyriX competitions for poetry and lyrics, there’s a DigiKidLit competition for children’s literature (although you may have reservations about exposing your children to neural-net generated content… it can get a bit surreal).

There’s also a pair of musical competitions, including one for original compositions in different genres. Key genres and styles are represented by Charlie Parker for Jazz and the Bach chorales for classical music. There’s also a free composition, and a contest where a human and an AI try to improvise together—the AI must respond to a human spontaneously, in real time, and in a musically pleasing way. Quite a challenge! In all cases, if any of the generated work is indistinguishable from human performers, the neural net has passed the Turing Test.

Did they? Here’s part of 2017’s winning sonnet from Charese Smiley and Hiroko Bretz:

The large cabin was in total darkness.
Come marching up the eastern hill afar.
When is the clock on the stairs dangerous?
Everything seemed so near and yet so far.
Behind the wall silence alone replied.
Was, then, even the staircase occupied?
Generating the rhymes is easy enough, the sentence structure a little trickier, but what’s impressive about this sonnet is that it sticks to a single topic and appears to be a more coherent whole. I’d guess they used associated “lexical fields” of similar words to help generate something coherent. In a similar way, most of the more famous examples of AI-generated music still involve some amount of human control, even if it’s editorial; a human will build a song around an AI-generated riff, or select the most convincing Bach chorale from amidst many different samples.
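That guess is cheap to prototype. A minimal sketch, with made-up toy vectors standing in for real learned embeddings: score candidate words by similarity to a topic anchor and keep only the on-topic ones.

```python
import math

# A minimal sketch of the "lexical fields" guess. These 3-d vectors are
# made-up toy numbers; a real system would use learned word embeddings
# (word2vec, GloVe, etc.) with hundreds of dimensions.
embeddings = {
    "darkness":  [0.9, 0.1, 0.0],
    "staircase": [0.8, 0.2, 0.1],
    "afar":      [0.7, 0.3, 0.0],
    "banana":    [0.0, 0.1, 0.9],  # scans fine, but topically hopeless
}

def cosine(a, b):
    """Similarity of two vectors: 1.0 means same direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

topic = embeddings["darkness"]  # anchor the poem's lexical field
for word, vec in embeddings.items():
    print(f"{word:10s} similarity to topic: {cosine(topic, vec):.2f}")
# Keeping only high-scoring candidates steers every line toward one topic.
```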

We are seeing strides forward in the ability of AI to generate human voices and human likenesses. As the latter example shows, in the fake news era people have focused on the dangers of this tech, but might it also be possible to create a virtual performer, trained on a dataset of their original music? Did you ever want to hear another Beatles album, or jam with Miles Davis? Of course, these things are impossible—but could we create a similar experience that people would genuinely value? Even, to the untrained ear, something indistinguishable from the real thing?

And if it did measure up to the real thing, what would this mean? Jaron Lanier is a fascinating technology writer, a critic of strong AI, and a believer in the power of virtual reality to change the world and provide truly meaningful experiences. He’s also a composer and a musical aficionado. He pointed out in a recent interview that translation algorithms, by reducing the amount of work translators are commissioned to do, have, in some sense, profited from stolen expertise. They were trained on huge datasets purloined from human linguists and translators. If you can train an AI on someone’s creative output and it produces new music, who “owns” it?

Although companies that offer AI music tools are starting to proliferate, and some groups will argue that the musical Turing test has been passed already, AI-generated music is hardly racing to the top of the pop charts just yet. Even as the line between human-composed and AI-generated music starts to blur, there’s still a gulf between the average human and musical genius. In the next few years, we’ll see how far the current techniques can take us. It may be the case that there’s something in the skylark’s song that can’t be generated by machines. But maybe not, and then this song might need an extra verse.

Image Credit: d1sk / Shutterstock.com

Posted in Human Robots

#432324 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
China Wants to Shape the Global Future of Artificial Intelligence
Will Knight | MIT Technology Review
“China’s booming AI industry and massive government investment in the technology have raised fears in the US and elsewhere that the nation will overtake international rivals in a fundamentally important technology. In truth, it may be possible for both the US and the Chinese economies to benefit from AI. But there may be more rivalry when it comes to influencing the spread of the technology worldwide. ‘I think this is the first technology area where China has a real chance to set the rules of the game,’ says Ding.”

SPACE
Astronaut’s Gene Expression No Longer Same as His Identical Twin, NASA Finds
Susan Scutti | CNN
“Preliminary results from NASA’s Twins Study reveal that 7% of astronaut Scott Kelly’s genetic expression—how his genes function within cells—did not return to baseline after his return to Earth two years ago. The study looks at what happened to Kelly before, during and after he spent one year aboard the International Space Station through an extensive comparison with his identical twin, Mark, who remained on Earth.”

3D PRINTING
This Cheap 3D-Printed Home Is a Start for the 1 Billion Who Lack Shelter
Tamara Warren | The Verge
“ICON has developed a method for printing a single-story 650-square-foot house out of cement in only 12 to 24 hours, a fraction of the time it takes for new construction. If all goes according to plan, a community made up of about 100 homes will be constructed for residents in El Salvador next year. The company has partnered with New Story, a nonprofit that is vested in international housing solutions. ‘We have been building homes for communities in Haiti, El Salvador, and Bolivia,’ Alexandria Lafci, co-founder of New Story, tells The Verge.”

SCIENCE
Our Microbiomes Are Making Scientists Question What It Means to Be Human
Rebecca Flowers | Motherboard
“Studies in genetics and Watson and Crick’s discovery of DNA gave more credence to the idea of individuality. But as scientists learn more about the microbiome, the idea of humans as a singular organism is being reconsidered: ‘There is now overwhelming evidence that normal development as well as the maintenance of the organism depend on the microorganisms…that we harbor,’ they state (others have taken this position, too).”

CULTURE
Stephen Hawking, Who Awed Both Scientists and the Public, Dies
Joe Palca | NPR
“Hawking was probably the best-known scientist in the world. He was a theoretical physicist whose early work on black holes transformed how scientists think about the nature of the universe. But his fame wasn’t just a result of his research. Hawking, who had a debilitating neurological disease that made it impossible for him to move his limbs or speak, was also a popular public figure and best-selling author. There was even a biopic about his life, The Theory of Everything, that won an Oscar for the actor, Eddie Redmayne, who portrayed Hawking.”

Image Credit: NASA/JPL-Caltech/STScI

Posted in Human Robots

#432303 What If the AI Revolution Is Neither ...

Why does everyone assume that the AI revolution will either lead to a fiery apocalypse or a glorious utopia, and not something in between? Of course, part of this is down to the fact that you get more attention by saying “The end is nigh!” or “Utopia is coming!”

But part of it is down to how humans think about change, especially unprecedented change. Millenarianism doesn’t have anything to do with being a “millennial,” being born in the 90s and remembering Buffy the Vampire Slayer. It is a way of thinking about the future that involves a deeply ingrained sense of destiny. A definition might be: “Millenarianism is the expectation that the world as it is will be destroyed and replaced with a perfect world, that a redeemer will come to cast down the evil and raise up the righteous.”

Millenarian beliefs, then, intimately link together the ideas of destruction and creation. They involve the idea of a huge, apocalyptic, seismic shift that will destroy the fabric of the old world and create something entirely new. Similar belief systems exist in many of the world’s major religions, and also the unspoken religion of some atheists and agnostics, which is a belief in technology.

Look at some futurist beliefs around the technological Singularity. In Ray Kurzweil’s vision, the Singularity is the establishment of paradise. Everyone is rendered immortal by biotechnology that can cure our ills; our brains can be uploaded to the cloud; inequality and suffering wash away under the wave of these technologies. The “destruction of the world” is replaced by a Silicon Valley buzzword favorite: disruption. And, as with many millenarian beliefs, your mileage varies on whether this destruction paves the way for a new utopia—or simply ends the world.

There are good reasons to be skeptical and interrogative towards this way of thinking. The most compelling reason is probably that millenarian beliefs seem to be a default mode of how humans think about change; just look at how many variants of this belief have cropped up all over the world.

These beliefs are present in aspects of Christian theology, although they only really became mainstream in their modern form in the 19th and 20th centuries. Consider the idea of the Tribulation—many years of hardship and suffering—that precedes the Rapture, when the righteous will be raised up and the evil punished. After this destruction, the world will be made anew, or humans will ascend to paradise.

Despite being dogmatically atheist, Marxism shares many of the same beliefs. It rests on a deterministic view of history that builds to a crescendo. Just as Rapture-believers look for signs that prophecies are beginning to be fulfilled, Marxists look for evidence that we’re in the late stages of capitalism. They believe that, inevitably, society will degrade and degenerate to a breaking point—just as some millenarian Christians do.

In Marxism, this is when the exploitation of the working class by the rich becomes unsustainable, and the working class bands together and overthrows the oppressors. The “tribulation” is replaced by a “revolution.” Sometimes revolutionary figures, like Lenin, or Marx himself, are heralded as messiahs who accelerate the onset of the Millennium; and their rhetoric involves utterly smashing the old system such that a new world can be built. Of course, there is judgment, when the righteous workers take what’s theirs and the evil bourgeoisie are destroyed.

Even Norse mythology has an element of this, as James Hughes points out in his essay in Nick Bostrom’s book Global Catastrophic Risks. Ragnarok involves men and gods being defeated in a final, apocalyptic battle—but because that was a little bleak, the myth adds that a new earth will arise where the survivors will live in harmony.

Judgement day is a cultural trope, too. Take the ancient Egyptians and their beliefs around the afterlife; the Lord of the underworld, Osiris, weighs the mortal’s heart against a feather. “Should the heart of the deceased prove to be heavy with wrongdoing, it would be eaten by a demon, and the hope of an afterlife vanished.”

Perhaps in the Singularity, something similar goes on. As our technology and hence our power improve, a final reckoning approaches: our hearts, as humans, will be weighed against a feather. If they prove too heavy with wrongdoing—with misguided stupidity, with arrogance and hubris, with evil—then we will fail the test, and we will destroy ourselves. But if we pass, and emerge from the Singularity and all of its threats and promises unscathed, then we will have paradise. And, like the other belief systems, there’s no room for non-believers; all of society is going to be radically altered, whether you want it to be or not, whether it benefits you or leaves you behind. A technological rapture.

It almost seems like every major development provokes this response. Nuclear weapons did, too. Either this would prove the final straw and we’d destroy ourselves, or the nuclear energy could be harnessed to build a better world. People talked at the dawn of the nuclear age about electricity that was “too cheap to meter.” The scientists who worked on the bomb often thought that with such destructive power in human hands, we’d be forced to cooperate and work together as a species.

When we see the same response over and over again to different circumstances, cropping up in different areas, whether it’s science, religion, or politics, we need to consider human biases. We like millenarian beliefs; and so when the idea of artificial intelligence outstripping human intelligence emerges, these beliefs spring up around it.

We don’t love facts. We don’t love information. We aren’t as rational as we’d like to think. We are creatures of narrative. Physicists observe the world and weave their observations into narrative theories, stories about little billiard balls whizzing around and hitting each other, or space and time that bend and curve and expand. Historians try to make sense of an endless stream of events. We rely on stories: stories that make sense of the past, justify the present, and prepare us for the future.

And as stories go, the millenarian narrative is a brilliant and compelling one. It can lead you towards social change, as in the case of the Communists, or the Buddhist uprisings in China. It can justify your present-day suffering, if you’re in the tribulation. It gives you hope that your life is important and has meaning. It gives you a sense that things are evolving in a specific direction, according to rules—not just randomly sprawling outwards in a chaotic way. It promises that the righteous will be saved and the wrongdoers will be punished, even if there is suffering along the way. And, ultimately, a lot of the time, the millenarian narrative promises paradise.

We need to be wary of the millenarian narrative when we’re considering technological developments and the Singularity and existential risks in general. Maybe this time is different, but we’ve cried wolf many times before. There is a more likely, less appealing story. Something along the lines of: there are many possibilities, none of them are inevitable, and lots of the outcomes are less extreme than you might think—or they might take far longer than you think to arrive. On the surface, it’s not satisfying. It’s so much easier to think of things as either signaling the end of the world or the dawn of a utopia—or possibly both at once. It’s a narrative we can get behind, a good story, and maybe, a nice dream.

But dig a little below the surface, and you’ll find that the millenarian beliefs aren’t always the most promising ones, because they remove human agency from the equation. If you think that, say, the malicious use of algorithms, or the control of superintelligent AI, are serious and urgent problems that are worth solving, you can’t be wedded to a belief system that insists utopia or dystopia are inevitable. You have to believe in the shades of grey—and in your own ability to influence where we might end up. As we move into an uncertain technological future, we need to be aware of the power—and the limitations—of dreams.

Image Credit: Photobank gallery / Shutterstock.com


Posted in Human Robots

#432236 Why Hasn’t AI Mastered Language ...

In the myth about the Tower of Babel, people conspired to build a city and tower that would reach heaven. Their creator observed, “And now nothing will be restrained from them, which they have imagined to do.” According to the myth, God thwarted this effort by creating diverse languages so that they could no longer collaborate.

In our modern times, we’re experiencing a state of unprecedented connectivity thanks to technology. However, we’re still living under the shadow of the Tower of Babel. Language remains a barrier in business and marketing. Even though technological devices can quickly and easily connect, humans from different parts of the world often can’t.

Translation agencies step in, making presentations, contracts, outsourcing instructions, and advertisements comprehensible to all intended recipients. Some agencies also offer “localization” expertise. For instance, if a company is marketing in Quebec, the advertisements need to be in Québécois French, not European French. Risk-averse companies may be reluctant to invest in these translations. Consequently, these ventures haven’t achieved full market penetration.

Global markets are waiting, but AI-powered language translation isn’t ready yet, despite recent advancements in natural language processing and sentiment analysis. AI still has difficulty processing requests in a single language, even before the added complications of translation. In November 2016, Google added a neural network to its translation tool. However, some of its translations are still socially and grammatically odd. I spoke to technologists and a language professor to find out why.

“To Google’s credit, they made a pretty massive improvement that appeared almost overnight. You know, I don’t use it as much. I will say this. Language is hard,” said Michael Housman, chief data science officer at RapportBoost.AI and faculty member of Singularity University.

He explained that the ideal scenario for machine learning and artificial intelligence is something with fixed rules and a clear-cut measure of success or failure. He named chess as an obvious example, and noted machines were able to beat the best human Go player. This happened faster than anyone anticipated because of the game’s very clear rules and limited set of moves.

Housman elaborated, “Language is almost the opposite of that. There aren’t as clearly-cut and defined rules. The conversation can go in an infinite number of different directions. And then of course, you need labeled data. You need to tell the machine to do it right or wrong.”

Housman noted that it’s inherently difficult to assign these informative labels. “Two translators won’t even agree on whether it was translated properly or not,” he said. “Language is kind of the wild west, in terms of data.”

Google’s technology is now able to consider the entirety of a sentence, as opposed to merely translating individual words. Still, the glitches linger. I asked Dr. Jorge Majfud, Associate Professor of Spanish, Latin American Literature, and International Studies at Jacksonville University, to explain why consistently accurate language translation has thus far eluded AI.

He replied, “The problem is that considering the ‘entire’ sentence is still not enough. The same way the meaning of a word depends on the rest of the sentence (more in English than in Spanish), the meaning of a sentence depends on the rest of the paragraph and the rest of the text, as the meaning of a text depends on a larger context called culture, speaker intentions, etc.”

He noted that sarcasm and irony only make sense within this widened context. Similarly, idioms can be problematic for automated translations.

“Google translation is a good tool if you use it as a tool, that is, not to substitute human learning or understanding,” he said, before offering examples of mistranslations that could occur.

“Months ago, I went to buy a drill at Home Depot and I read a sign under a machine: ‘Saw machine.’ Right below it, the Spanish translation: ‘La máquina vió,’ which means, ‘The machine did see it.’ Saw, not as a noun but as a verb in the preterit form,” he explained.

Dr. Majfud warned, “We should be aware of the fragility of their ‘interpretation.’ Because to translate is basically to interpret, not just an idea but a feeling. Human feelings and ideas that only humans can understand—and sometimes not even we, humans, understand other humans.”

He noted that cultures, gender, and even age can pose barriers to this understanding and also contended that an over-reliance on technology is leading to our cultural and political decline. Dr. Majfud mentioned that Argentinean writer Julio Cortázar used to refer to dictionaries as “cemeteries.” He suggested that automatic translators could be called “zombies.”

Erik Cambria is an academic AI researcher and assistant professor at Nanyang Technological University in Singapore. He mostly focuses on natural language processing, which is at the core of AI-powered language translation. Like Dr. Majfud, he sees the complexity and associated risks. “There are so many things that we unconsciously do when we read a piece of text,” he told me. Reading comprehension requires multiple interrelated tasks, which haven’t been accounted for in past attempts to automate translation.

Cambria continued, “The biggest issue with machine translation today is that we tend to go from the syntactic form of a sentence in the input language to the syntactic form of that sentence in the target language. That’s not what we humans do. We first decode the meaning of the sentence in the input language and then we encode that meaning into the target language.”
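Cambria’s distinction can be made concrete with a toy. The sketch below is entirely hypothetical (a tiny invented lexicon and a crude disambiguation rule, not how any production system works): word-by-word substitution reproduces the “saw machine” error from Dr. Majfud’s story, while a meaning-first pass decides each word’s role before encoding it.

```python
# A toy contrast between form-to-form and meaning-first translation.
# The lexicon and the signage heuristic are invented for illustration;
# real systems learn these mappings from data.
lexicon = {
    ("saw", "VERB"): "vio",      # past tense of "to see" (the common sense)
    ("saw", "NOUN"): "sierra",   # the cutting tool
    ("machine", "NOUN"): "máquina",
}

def word_by_word(tokens):
    """Form-to-form: substitute the first lexicon entry matching each word."""
    return " ".join(
        next(v for (w, _), v in lexicon.items() if w == tok) for tok in tokens
    )

def meaning_first(tokens):
    """Meaning-first: decide each word's role before encoding it."""
    # Crude rule: words on a product sign are acting as nouns.
    return " ".join(lexicon[(tok, "NOUN")] for tok in tokens)

sign = ["saw", "machine"]
print(word_by_word(sign))   # "vio máquina": the Home Depot error
print(meaning_first(sign))  # "sierra máquina": closer to the intended sense
```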

Additionally, there are cultural risks involved with these translations. Dr. Ramesh Srinivasan, Director of UCLA’s Digital Cultures Lab, said that new technological tools sometimes reflect underlying biases.

“There tend to be two parameters that shape how we design ‘intelligent systems.’ One is the values and you might say biases of those that create the systems. And the second is the world if you will that they learn from,” he told me. “If you build AI systems that reflect the biases of their creators and of the world more largely, you get some, occasionally, spectacular failures.”

Dr. Srinivasan said translation tools should be transparent about their capabilities and limitations. He said, “You know, the idea that a single system can take languages that I believe are very diverse semantically and syntactically from one another and claim to unite them or universalize them, or essentially make them sort of a singular entity, it’s a misnomer, right?”

Mary Cochran, co-founder of Launching Labs Marketing, sees the commercial upside. She mentioned that listings in online marketplaces such as Amazon could potentially be auto-translated and optimized for buyers in other countries.

She said, “I believe that we’re just at the tip of the iceberg, so to speak, with what AI can do with marketing. And with better translation, and more globalization around the world, AI can’t help but lead to exploding markets.”

Image Credit: igor kisselev / Shutterstock.com

Posted in Human Robots

#432051 What Roboticists Are Learning From Early ...

You might not have heard of Hanson Robotics, but if you’re reading this, you’ve probably seen their work. They were the company behind Sophia, the lifelike humanoid avatar that’s made dozens of high-profile media appearances. Before that, they were the company behind that strange-looking robot that seemed a bit like Asimo with Albert Einstein’s head—or maybe you saw BINA48, who was interviewed for the New York Times in 2010 and featured in Jon Ronson’s books. For the sci-fi aficionados amongst you, they even made a replica of legendary author Philip K. Dick, best remembered for having books with titles like Do Androids Dream of Electric Sheep? turned into films with titles like Blade Runner.

Hanson Robotics, in other words, with their proprietary brand of lifelike humanoid robots, have been playing the same game for a while. Sometimes it can be a frustrating game to watch. Anyone who gives the robot the slightest bit of thought will realize that this is essentially a chatbot, with all the limitations this implies. Indeed, even in that New York Times interview with BINA48, author Amy Harmon describes it as a frustrating experience—with “rare (but invariably thrilling) moments of coherence.” This sensation will be familiar to anyone who’s conversed with a chatbot that has a few clever responses.

The glossy surface belies the lack of real intelligence underneath; it seems, at first glance, like a much more advanced machine than it is. Peeling back that surface layer—at least for a Hanson robot—means you’re peeling back Frubber. This proprietary substance—short for “Flesh Rubber,” which is slightly nightmarish—is surprisingly complicated. Up to thirty motors are required just to control the face; they manipulate liquid cells in order to make the skin soft, malleable, and capable of a range of different emotional expressions.

A quick combinatorial glance at the 30+ motors suggests that there are millions of possible combinations; researchers identify 62 that they consider “human-like” in Sophia, although not everyone agrees with this assessment. Arguably, the technical expertise that went into reconstructing the range of human facial expressions far exceeds the more simplistic chat engine the robots use, although it’s the latter that lets the robot inflate the punters’ expectations with a few pre-programmed questions in an interview.
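If anything, “millions” undersells it. A quick back-of-the-envelope count, assuming hypothetically that each motor can hold only a handful of distinct positions:

```python
# Back-of-the-envelope count, assuming (hypothetically) 30 independent
# motors that can each hold k distinct positions.
motors = 30
for k in (2, 3, 5):
    print(f"{k} positions per motor: {k ** motors:.3e} combinations")
# Even two positions per motor gives 2**30, over a billion configurations,
# of which only a tiny subset reads as a recognizably human expression.
```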

Hanson Robotics’ belief is that, ultimately, a lot of how humans will eventually relate to robots is going to depend on their faces and voices, as well as on what they’re saying. “The perception of identity is so intimately bound up with the perception of the human form,” says David Hanson, company founder.

Yet anyone attempting to design a robot that won’t terrify people has to contend with the uncanny valley—that strange blend of unease and revulsion people feel when something looks almost, but not quite, human. Between cartoonish humanoids and genuine humans lies what has often been a no-go zone in robotic aesthetics.

The uncanny valley concept originated with roboticist Masahiro Mori, who argued that roboticists should avoid trying to replicate humans exactly. Since anything that wasn’t perfect, but merely very good, would elicit an eerie feeling in humans, shirking the challenge entirely was the only way to avoid the uncanny valley. It’s probably a task made more difficult by endless streams of articles about AI taking over the world that inexplicably conflate AI with killer humanoid Terminators—which aren’t particularly likely to exist (although maybe it’s best not to push robots around too much).

The idea behind this realm of psychological horror is fairly simple, cognitively speaking.

We know how to categorize things that are unambiguously human or non-human. This is true even if they’re designed to interact with us. Consider the popularity of Aibo, Jibo, and other robots that make no attempt to resemble humans. Something that resembles a human, but isn’t quite right, is bound to evoke a fear response, in the same way slightly distorted music or slightly rearranged furniture in your home will. The creature simply doesn’t fit.

You may well reject the idea of the uncanny valley entirely. David Hanson, naturally, is not a fan. In the paper Upending the Uncanny Valley, he argues that great art forms have often resembled humans, but the ultimate goal for humanoid roboticists is probably to create robots we can relate to as something closer to humans than works of art.

Meanwhile, Hanson and other scientists produce competing experiments to either demonstrate that the uncanny valley is overhyped, or to confirm it exists and probe its edges.

The classic experiment involves gradually morphing a cartoon face into a human face, via some robotic-seeming intermediaries—yet it’s in movement that the real horror of the almost-human often lies. Hanson has argued that incorporating cartoonish features may help—and, sometimes, that the uncanny valley is a generational thing which will melt away when new generations grow used to the quirks of robots. Although Hanson might dispute the severity of this effect, it’s clearly what he’s trying to avoid with each new iteration.

Hiroshi Ishiguro is the latest of the roboticists to have dived headlong into the valley.

Building on the work of pioneers like Hanson, those who study human-robot interaction are pushing at the boundaries of robotics—but also of social science. It’s usually difficult to simulate what you don’t understand, and there’s still an awful lot we don’t understand about how we interpret the constant streams of non-verbal information that flow when you interact with people in the flesh.

Ishiguro took this imitation of human forms to extreme levels. Not only did he monitor and log the physical movements people made on videotapes, but some of his robots are based on replicas of people; the Repliee series began with a ‘replicant’ of his daughter. This involved making a rubber replica—a silicone cast—of her entire body. Future experiments were focused on creating Geminoid, a replica of Ishiguro himself.

As Ishiguro aged, he realized that it would be more effective to resemble his replica through cosmetic surgery rather than by continually creating new casts of his face, each with more lines than the last. “I decided not to get old anymore,” Ishiguro said.

We love to throw around abstract concepts and ideas: humans being replaced by machines, cared for by machines, getting intimate with machines, or even merging themselves with machines. You can take an idea like that, hold it in your hand, and examine it—dispassionately, if not without interest. But there’s a gulf between thinking about it and living in a world where human-robot interaction is not a field of academic research, but a day-to-day reality.

As the scientists studying human-robot interaction develop their robots, their replicas, and their experiments, they are making some of the first forays into that world. We might all be living there someday. Understanding ourselves—decrypting the origins of empathy and love—may be the greatest challenge we face. That is, if you want to avoid the valley.

Image Credit: Anton Gvozdikov / Shutterstock.com

Posted in Human Robots