Tag Archives: human emotions

#439023 In ‘Klara and the Sun,’ We Glimpse ...

In a store in the center of an unnamed city, humanoid robots are displayed alongside housewares and magazines. They watch the fast-moving world outside the window, anxiously awaiting the arrival of customers who might buy them and take them home. Among them is Klara, a particularly astute robot who loves the sun and wants to learn as much as possible about humans and the world they live in.

So begins Kazuo Ishiguro’s new novel Klara and the Sun, published earlier this month. The book, told from Klara’s perspective, portrays an eerie future society in which intelligent machines and other advanced technologies have been integrated into daily life, but not everyone is happy about it.

Technological unemployment, the progress of artificial intelligence, inequality, the safety and ethics of gene editing, increasing loneliness and isolation—all of which we’re grappling with today—show up in Ishiguro’s world. It’s like he hit a fast-forward button, mirroring back to us how things might play out if we don’t approach these technologies with caution and foresight.

The wealthy genetically edit or “lift” their children to set them up for success, while the poor have to make do with the regular old brains and bodies bequeathed them by evolution. Lifted and unlifted kids generally don’t mix, and this is just one of many sinister delineations between a new breed of haves and have-nots.

There’s anger about robots’ steady infiltration into everyday life, and questions about how similar their rights should be to those of humans. “First they take the jobs. Then they take the seats at the theater?” one woman fumes.

References to “changes” and “substitutions” allude to an economy where automation has eliminated millions of jobs. While “post-employed” people squat in abandoned buildings and fringe communities arm themselves in preparation for conflict, those whose livelihoods haven’t been destroyed can afford to have live-in housekeepers and buy Artificial Friends (or AFs) for their lonely children.

“The old traditional model that we still live with now—where most of us can get some kind of paid work in exchange for our services or the goods we make—has broken down,” Ishiguro said in a podcast discussion of the novel. “We’re not talking just about the difference between rich and poor getting bigger. We’re talking about a gap appearing between people who participate in society in an obvious way and people who do not.”

He has a point; as much as techno-optimists claim that the economic changes brought by automation and AI will give us all more free time, letting us work less and devote ourselves to our passion projects, how would that actually play out? What would millions of “post-employed” people receiving basic income actually do with their time and energy?

In the novel, we don’t get much of a glimpse of this side of the equation, but we do see how the wealthy live. After a long wait, just as the store manager seems ready to give up on selling her, Klara is chosen by a 14-year-old girl named Josie, the daughter of a woman who wears “high-rank clothes” and lives in a large, sunny home outside the city. Cheerful and kind, Josie suffers from an unspecified illness that periodically flares up and leaves her confined to her bed for days at a time.

Her life seems somewhat bleak, the need for an AF clear. In this future world, the children of the wealthy no longer go to school together, instead studying alone at home on their digital devices. “Interaction meetings” are set up for them to learn to socialize, their parents carefully eavesdropping from the next room and trying not to intervene when there’s conflict or hurt feelings.

Klara does her best to be a friend, aide, and confidante to Josie while continuing to learn about the world around her and decode the mysteries of human behavior. We surmise that she was programmed with a basic ability to understand emotions, which evolves along with her other types of intelligence. “I believe I have many feelings. The more I observe, the more feelings become available to me,” she explains to one character.

Ishiguro does an excellent job of representing Klara’s mind: a blend of pre-determined programming, observation, and continuous learning. Her narration has qualities both robotic and human; we can tell when something has been programmed in—she “Gives Privacy” to the humans around her when that’s appropriate, for example—and when she’s figured something out for herself.

But the author maintains some mystery around Klara’s inner emotional life. “Does she actually understand human emotions, or is she just observing human emotions and simulating them within herself?” he said. “I suppose the question comes back to, what are our emotions as human beings? What do they amount to?”

Klara is particularly attuned to human loneliness, since she was essentially made to help prevent it. It is, in her view, people’s biggest fear, and something they’ll go to great lengths to avoid, yet can never fully escape. “Perhaps all humans are lonely,” she says.

Warding off loneliness through technology isn’t a futuristic idea; it’s something we’ve been doing for a long time, with the technologies at hand growing more and more sophisticated. Products like AFs already exist. There’s XiaoIce, a chatbot that uses “sentiment analysis” to keep its 660 million users engaged, and Azuma Hikari, a character-based AI designed to “bring comfort” to users whose lives lack emotional connection with other humans.

The mere existence of these tools might seem sinister were it not for their widespread adoption; when millions of people use AIs to fill a void in their lives, it raises deeper questions about our ability to connect with each other, and about whether technology is building that ability up or tearing it down.

This isn’t the only big question the novel tackles. An overarching theme is one we’ve been increasingly contemplating as computers start to acquire more complex capabilities, like the beginnings of creativity or emotional awareness: What is it that truly makes us human?

“Do you believe in the human heart?” one character asks. “I don’t mean simply the organ, obviously. I’m speaking in the poetic sense. The human heart. Do you think there is such a thing? Something that makes each of us special and individual?”

The alternative, at least in the story, is that people don’t have a unique essence; rather, we’re all a blend of traits and personalities that can be reduced to strings of code. Our understanding of the brain is still elementary, but at some level, doesn’t all human experience boil down to the firing of billions of neurons between our ears? Will we one day—in a future beyond that painted by Ishiguro, but certainly foreshadowed by it—be able to “decode” our humanity to the point that there’s nothing mysterious left about it? “A human heart is bound to be complex,” Klara says. “But it must be limited.”

Whether or not you agree, Klara and the Sun is worth the read. It’s both a marvelous, engaging story about what it means to love and be human, and a prescient warning to approach technological change with caution and nuance. We’re already living in a world where AI keeps us company, influences our behavior, and is wreaking various forms of havoc. Ishiguro’s novel is a snapshot of one of our possible futures, told through the eyes of a robot who keeps you rooting for her to the end.

Image Credit: Marion Wellmann from Pixabay


#437276 Cars Will Soon Be Able to Sense and ...

Imagine you’re on your daily commute to work, driving along a crowded highway while trying to resist looking at your phone. You’re already a little stressed out because you didn’t sleep well, woke up late, and have an important meeting in a couple of hours; you just don’t feel like your best self.

Suddenly another car cuts you off, coming way too close to your front bumper as it changes lanes. Your already-simmering emotions leap into overdrive, and you lay on the horn and shout curses no one can hear.

Except someone—or, rather, something—can hear: your car. Hearing your angry words, aggressive tone, and raised voice, and seeing your furrowed brow, the onboard computer goes into “soothe” mode, as it’s been programmed to do when it detects that you’re angry. It plays relaxing music at just the right volume, releases a puff of light lavender-scented essential oil, and maybe even recites a few meditative quotes to calm you down.

What do you think—creepy? Helpful? Awesome? Weird? Would you actually calm down, or get even more angry that a car is telling you what to do?

Scenarios like this (maybe without the lavender oil part) may not be imaginary for much longer, especially if companies working to integrate emotion-reading artificial intelligence into new cars have their way. And it wouldn’t just be a matter of your car soothing you when you’re upset—depending on what sort of regulations are enacted, the car’s sensors, camera, and microphone could collect all kinds of data about you and sell it to third parties.

Computers and Feelings
Just as AI systems can be trained to tell the difference between a picture of a dog and one of a cat, they can learn to differentiate between an angry tone of voice or facial expression and a happy one. In fact, there’s a whole branch of machine intelligence devoted to creating systems that can recognize and react to human emotions; it’s called affective computing.

Emotion-reading AIs learn what different emotions look and sound like from large sets of labeled data: “smile = happy,” “tears = sad,” “shouting = angry,” and so on. The most sophisticated systems can likely even pick up on the micro-expressions that flash across our faces before we consciously have a chance to control them, a phenomenon detailed by Daniel Goleman in his groundbreaking book Emotional Intelligence.
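To make that concrete, here’s a minimal sketch of the learn-then-react loop, using scikit-learn on a handful of invented text utterances. (Real systems like Affectiva’s are trained on millions of labeled face videos, and the emotion-to-action table below is purely hypothetical.)

```python
# Toy affective-computing sketch: learn emotion labels from labeled
# examples, then look up a hypothetical in-car response.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labeled training data: the "shouting = angry" idea, applied to text.
utterances = [
    "what a beautiful morning",          # happy
    "I love this song",                  # happy
    "get out of my lane, you idiot",     # angry
    "why is everyone driving so slowly", # angry
    "I feel so tired and low today",     # sad
    "nothing is going right this week",  # sad
]
labels = ["happy", "happy", "angry", "angry", "sad", "sad"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, labels)

# Hypothetical emotion-to-action table for "soothe mode."
actions = {
    "angry": "play relaxing music, release lavender scent",
    "sad": "suggest a favorite playlist",
    "happy": "do nothing",
}

heard = "you idiot, get out of the way"
emotion = model.predict([heard])[0]
print(f"detected: {emotion} -> action: {actions[emotion]}")
```

Even this toy version shows the basic shape: a supervised classifier maps signals to emotion labels, and everything downstream is just a lookup table of responses.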

Affective computing company Affectiva, a spinoff from MIT Media Lab, says its algorithms are trained on 5,313,751 face videos (videos of people’s faces as they do an activity, have a conversation, or react to stimuli) representing about 2 billion facial frames. Fascinatingly, Affectiva claims its software can even account for cultural differences in emotional expression (for example, it’s more normalized in Western cultures to be very emotionally expressive, whereas Asian cultures tend to favor stoicism and politeness), as well as gender differences.

But Why?
As reported in Motherboard, companies like Affectiva, Cerence, Xperi, and Eyeris have plans in the works to partner with automakers and install emotion-reading AI systems in new cars. Regulations passed last year in Europe and a bill just introduced this month in the US Senate are helping make the idea of “driver monitoring” less weird, mainly by emphasizing the safety benefits of preemptive warning systems for tired or distracted drivers (remember that part in the beginning about sneaking glances at your phone? Yeah, that).

Drowsiness and distraction can’t really be called emotions, though—so why are they being lumped under an umbrella that has a lot of other implications, including what many may consider an eerily Big Brother-esque violation of privacy?

Our emotions, in fact, are among the most private things about us, since we are the only ones who know their true nature. We’ve developed the ability to hide and disguise our emotions, and this can be a useful skill at work, in relationships, and in scenarios that require negotiation or putting on a game face.

And I don’t know about you, but I’ve had more than one good cry in my car. It’s kind of the perfect place for it: private, secluded, soundproof.

Putting systems into cars that can recognize and collect data about our emotions under the guise of preventing accidents caused by the mental state of distraction or the physical state of drowsiness, then, seems a bit like a bait and switch.

A Highway to Privacy Invasion?
European regulations will help keep driver data from being used for any purpose other than ensuring a safer ride. But the US is lagging behind on the privacy front, with car companies largely free from any enforceable laws that would keep them from using driver data as they please.

Affectiva lists the following as use cases for occupant monitoring in cars: personalizing content recommendations, providing alternate route recommendations, adapting environmental conditions like lighting and heating, and understanding user frustration with virtual assistants and designing those assistants to be emotion-aware so that they’re less frustrating.

Our phones already do the first two (though, granted, we’re not supposed to look at them while we drive—but most cars now let you use Bluetooth to display your phone’s content on the dashboard), and the third is simply a matter of reaching a hand out to turn a dial or press a button. The last seems like a solution for a problem that wouldn’t exist without said… solution.

Despite how unnecessary and unsettling it may seem, though, emotion-reading AI isn’t going away, in cars or other products and services where it might provide value.

Besides automotive AI, Affectiva also makes software for clients in the advertising space. With consent, the built-in camera on users’ laptops records them while they watch ads, gauging their emotional responses to determine what kind of marketing is most likely to engage them and how likely they are to buy a given product. Emotion-recognition tech is also being used or considered for use in mental health applications, call centers, fraud monitoring, and education, among other fields.

In a 2015 TED talk, Affectiva co-founder Rana El-Kaliouby told her audience that we’re living in a world increasingly devoid of emotion, and her goal was to bring emotions back into our digital experiences. Soon they’ll be in our cars, too; whether the benefits will outweigh the costs remains to be seen.

Image Credit: Free-Photos from Pixabay


#436184 Why People Demanded Privacy to Confide ...

This is part four of a six-part series on the history of natural language processing.

Between 1964 and 1966, Joseph Weizenbaum, a German American computer scientist at MIT’s artificial intelligence lab, developed the first-ever chatbot.

While there were already some rudimentary digital language generators in existence—programs that could spit out somewhat coherent lines of text—Weizenbaum’s program was the first designed explicitly for interactions with humans. The user could type in some statement or set of statements in their normal language, press enter, and receive a response from the machine. As Weizenbaum explained, his program made “certain kinds of natural-language conversation between man and computer possible.”

He named the program Eliza after Eliza Doolittle, the working-class hero of George Bernard Shaw’s Pygmalion who learns how to talk with an upper-class accent. The new Eliza was written for the 36-bit IBM 7094, an early transistorized mainframe computer, in MAD-SLIP, a version of the MAD programming language extended with a list-processing package of Weizenbaum’s own design.

Because computer time was a valuable resource, Eliza could only be run via a time-sharing system; the user interacted with the program remotely via an electric typewriter and printer. When the user typed in a sentence and pressed enter, a message was sent to the mainframe computer. Eliza scanned the message for the presence of a keyword and used it in a new sentence to form a response that was sent back, printed out, and read by the user.

To encourage ongoing dialogue, Weizenbaum designed Eliza to simulate the conversational style of a Rogerian psychotherapist. The program would take something the user said and reflect it back with a question, as can be seen in the example below. (Notice how the program takes keywords like “boyfriend” and “depressed” and re-uses them.)

Human: Men are all alike.

Computer: IN WHAT WAY?

Human: They're always bugging us about something or other.

Computer: CAN YOU THINK OF A SPECIFIC EXAMPLE?

Human: Well, my boyfriend made me come here.

Computer: YOUR BOYFRIEND MADE YOU COME HERE

Human: He says I'm depressed much of the time.

Computer: I AM SORRY TO HEAR YOU ARE DEPRESSED

Human: It's true. I am unhappy.

Weizenbaum chose this mode of dialogue for Eliza because it gave the impression that the computer understood what was being said without having to offer anything new to the conversation. It created the illusion of comprehension and engagement in a mere 200 lines of code.
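To give a sense of just how little machinery this required, here is a minimal sketch of the keyword-and-reflection trick in Python. Weizenbaum’s original was written in MAD-SLIP, and its script of rules was far richer; the handful of patterns below are invented for illustration.

```python
import random
import re

# Swap pronouns so "my boyfriend made me" reflects back as
# "your boyfriend made you".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# A few invented keyword rules in the spirit of Eliza's script;
# "{0}" is filled with the reflected remainder of the user's sentence.
RULES = [
    (r"i'?m (.*)", ["I am sorry to hear you are {0}",
                    "How long have you been {0}?"]),
    (r"my (.*)", ["Your {0}", "Tell me more about your {0}"]),
    (r"(.*) all alike", ["In what way?"]),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(sentence):
    cleaned = sentence.lower().rstrip(".!?")
    for pattern, replies in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return random.choice(replies).format(reflect(match.group(1))).upper()
    return "PLEASE GO ON"  # stock reply when no keyword matches

print(respond("Men are all alike."))              # IN WHAT WAY?
print(respond("My boyfriend made me come here.")) # e.g. YOUR BOYFRIEND MADE YOU COME HERE
```

A longer script of ranked keywords and stock phrases fills out the rest; at no point does anything resembling comprehension enter into it.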

To test Eliza’s capacity to engage an interlocutor, Weizenbaum invited students and colleagues into his office and let them chat with the machine while he looked on. He noticed, with some concern, that during their brief interactions with Eliza, many users began forming emotional attachments to the algorithm. They would open up to the machine and confess problems they were facing in their lives and relationships.


Even more surprising was that this sense of intimacy persisted even after Weizenbaum described how the machine worked and explained that it didn’t really understand anything that was being said. Weizenbaum was most troubled when his secretary, who had watched him build the program from scratch over many months, insisted that he leave the room so she could talk to Eliza in private.

For Weizenbaum, this experiment with Eliza made him question an idea that Alan Turing had proposed in 1950 about machine intelligence. In his paper, entitled “Computing Machinery and Intelligence,” Turing suggested that if a computer could conduct a convincingly human conversation in text, one could assume it was intelligent—an idea that became the basis of the famous Turing Test.

But Eliza demonstrated that convincing communication between a human and a machine could take place even if comprehension only flowed from one side: The simulation of intelligence, rather than intelligence itself, was enough to fool people. Weizenbaum called this the Eliza effect, and believed it was a type of “delusional thinking” that humanity would collectively suffer from in the digital age. This insight was a profound shock for Weizenbaum, and one that came to define his intellectual trajectory over the next decade.


In 1976, he published Computer Power and Human Reason: From Judgment to Calculation, which offered a long meditation on why people are willing to believe that a simple machine might be able to understand their complex human emotions.

In this book, he argues that the Eliza effect signifies a broader pathology afflicting “modern man.” In a world conquered by science, technology, and capitalism, people had grown accustomed to viewing themselves as isolated cogs in a large and uncaring machine. In such a diminished social world, Weizenbaum reasoned, people had grown so desperate for connection that they put aside their reason and judgment in order to believe that a program could care about their problems.

Weizenbaum spent the rest of his life developing this humanistic critique of artificial intelligence and digital technology. His mission was to remind people that their machines were not as smart as they were often said to be. And that even though it sometimes appeared as though they could talk, they were never really listening.

This is the fourth installment of a six-part series on the history of natural language processing. Last week’s post described Andrey Markov and Claude Shannon’s painstaking efforts to create statistical models of language for text generation. Come back next Monday for part five, “In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Conversation.”

You can also check out our prior series on the untold history of AI.


#429656 The evolution of robot-human interaction

Experience two generations of robots developed to interact with and respond to human emotions.


#429587 Zoe, the emotional talking head

Zoe is a digital avatar with uncannily humanlike emotional expressions! She’s envisaged as the face of a personal assistant, or one day you’ll be able to create your own similar digital avatar to personalize your online and …
