An AI Debated Its Own Potential for Good ...
Artificial intelligence is going to overhaul the way we live and work. But will the changes it brings be for the better? As the technology slowly develops (let’s remember that right now, we’re still very much in the narrow AI space and nowhere near an artificial general intelligence), whether it will end up doing us more harm than good is a question on everyone’s mind.
What kind of response might we get if we posed this question to an AI itself?
Last week at the Cambridge Union in England, IBM did just that. Its Project Debater (an AI that narrowly lost a debate to human debating champion Harish Natarajan in February) gave the opening arguments in a debate about the promise and peril of artificial intelligence.
Critical thinking, linking different lines of thought, and anticipating counter-arguments are all valuable debating skills that humans can practice and refine. While these skills are tougher for an AI to get good at since they often require deeper contextual understanding, AI does have a major edge over humans in absorbing and analyzing information. In the February debate, Project Debater used IBM’s cloud computing infrastructure to read hundreds of millions of documents and extract relevant details to construct an argument.
This time around, Debater looked through 1,100 arguments for or against AI, submitted to IBM by the public through a dedicated website during the week before the debate. Of the 1,100 submissions, the AI classified 570 as anti-AI, meaning they held that the technology will bring humanity more harm than good; it found 511 to be pro-AI and judged the remaining 19 irrelevant to the topic at hand.
Debater grouped the arguments into five themes; the technology’s ability to take over dangerous or monotonous jobs was a pro-AI theme, and on the flip side was its potential to perpetuate the biases of its creators. “AI companies still have too little expertise on how to properly assess datasets and filter out bias,” the tall black box that houses Project Debater said. “AI will take human bias and will fixate it for generations.”
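IBM hasn’t published the exact models behind these steps, but the two operations described above, sorting each submission by stance and then grouping the arguments into themes, correspond to standard text-classification and clustering techniques. Here is a minimal Python sketch of that pattern using scikit-learn; the seed examples and submissions below are invented stand-ins, not IBM’s pipeline or data.

```python
# A toy version of the two steps described above: classify each public
# submission as pro- or anti-AI, then group submissions into themes.
# This is NOT IBM's pipeline; the texts and labels below are invented.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled seed arguments for training a stance classifier.
seed_texts = [
    "AI will free people from dangerous and monotonous jobs",
    "AI will entrench the biases of the people who build it",
    "AI can spot disease earlier than human doctors",
    "AI-driven automation will destroy more jobs than it creates",
]
seed_labels = ["pro", "anti", "pro", "anti"]

# Hypothetical public submissions (Debater processed 1,100 of these).
submissions = [
    "AI tutors could give every child personal attention",
    "AI will take human bias and fixate it for generations",
    "Machines should handle dangerous jobs so humans don't have to",
]

vectorizer = TfidfVectorizer(stop_words="english")
X_seed = vectorizer.fit_transform(seed_texts)
X_new = vectorizer.transform(submissions)

# Step 1: stance classification (pro-AI vs. anti-AI).
stances = LogisticRegression().fit(X_seed, seed_labels).predict(X_new)

# Step 2: theme grouping via clustering. Debater surfaced five themes;
# two clusters suffice for this toy input.
themes = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_new)

for text, stance, theme in zip(submissions, stances, themes):
    print(f"[{stance}] theme {theme}: {text}")
```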
After Project Debater kicked off the debate by giving opening arguments for both sides, two teams of people took over, elaborating on its points and coming up with their own counter-arguments.
In the end, an audience poll came down narrowly on the pro-AI side: 51.2 percent of voters were convinced that AI can help us more than it can hurt us.
The software’s natural language processing was able to identify racist, obscene, or otherwise inappropriate comments and weed them out as irrelevant to the debate. But it also repeated the same arguments multiple times, and it misclassified one statement about bias as pro-AI rather than anti-AI.
IBM has been working on Project Debater for over six years, and though it aims to iron out small glitches like these, the system’s goal isn’t to ultimately outwit and defeat humans. On the contrary, the AI is meant to support our decision-making by taking in and processing huge amounts of information in a nuanced way, more quickly than we ever could.
IBM engineer Noam Slonim envisions Project Debater’s tech being used, for example, by a government seeking citizens’ feedback about a new policy. “This technology can help to establish an interesting and effective communication channel between the decision maker and the people that are going to be impacted by the decision,” he said.
As for the question of whether AI will do more good or harm, perhaps Sylvie Delacroix put it best. A professor of law and ethics at the University of Birmingham who argued on the pro-AI side of the debate, she pointed out that the impact AI will have depends on the way we design it, saying “AI is only as good as the data it has been fed.”
She’s right; rather than asking what sort of impact AI will have on humanity, we should start by asking what sort of impact we want it to have. The people working on AI—not AIs themselves—are ultimately responsible for how much good or harm will be done.
Image Credit: IBM Project Debater at Cambridge Union Society, photo courtesy of IBM Research
AI and the Future of Work: The Economic ...
This week at MIT, academics and industry officials compared notes, studies, and predictions about AI and the future of work. During the discussions, an insurance company executive shared details about one AI program that rolled out at his firm earlier this year. A chatbot the company introduced, the executive said, now handles 150,000 calls per month.
Later in the day, a panelist—David Fanning, founder of PBS’s Frontline—remarked that this statistic is emblematic of broader fears he saw when reporting a new Frontline documentary about AI. “People are scared,” Fanning said of the public’s AI anxiety.
Fanning was part of a daylong symposium about AI’s economic consequences—good, bad, and otherwise—convened by MIT’s Task Force on the Work of the Future.
“Dig into every industry, and you’ll find AI changing the nature of work,” said Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). She cited recent McKinsey research that found 45 percent of the work people are paid to do today can be automated with currently available technologies. Those activities, McKinsey found, represent some US $2 trillion in wages.
However, the threat of automation—whether by AI or other technologies—isn’t as new as technologists on America’s coasts seem to believe, said panelist Fred Goff, CEO of Jobcase, Inc.
“If you live in Detroit or Toledo, where I come from, technology has been displacing jobs for the last half-century,” Goff said. “I don’t think that most people in this country have the increased anxiety that the coasts do, because they’ve been living this.”
Goff added that the challenge AI poses for the workforce is not, as he put it, “getting coal miners to code.” Rather, he said, as AI automates some jobs, it will also open opportunities for “reskilling” that may have nothing to do with AI or automation. He touted trade schools—teaching skills like welding, plumbing, and electrical work—and certification programs for sales industry software packages like Salesforce.
On the other hand, a documentarian who reported another recent program on AI—Krishna Andavolu, senior correspondent for Vice Media—said “reskilling” may not be an easy answer.
“People in rooms like this … don’t realize that a lot of people don’t want to work that much,” Andavolu said. “They’re not driven by passion for their career, they’re driven by passion for life. We’re telling a lot of these workers that they need to reskill. But to a lot of people that sounds like, ‘I’ve got to work twice as hard for what I have now.’ That sounds scary. We underestimate that at our peril.”
Part of the problem with “reskilling,” Andavolu said, is that some high-growth industries involve caregiving for seniors and in medical facilities—roles which are traditionally considered “feminized” careers. Destigmatizing these jobs, and increasing the pay to match the salaries of displaced jobs like long-haul truck drivers, is another challenge.
Daron Acemoglu, MIT Institute Professor of Economics, faulted the comparatively slim funding of academic research into AI.
“There is nothing preordained about the progress of technology,” he said. Computers, the Internet, antibiotics, and sensors all grew out of government and academic research programs. What he called the “blue-sky thinking” of non-corporate AI research can also develop applications that are not purely focused on maximizing profits.
American companies, Acemoglu said, get tax breaks for capital R&D—but not for developing new technologies for their employees. “We turn around and [tell companies], ‘Use your technologies to empower workers,’” he said. “But why should they do that? Hiring workers is expensive in many ways. And we’re subsidizing capital.”
Said Sarita Gupta, director of the Ford Foundation’s Future of Work(ers) Program, “Low and middle income workers have for over 30 years been experiencing stagnant and declining pay, shrinking benefits, and less power on the job. Now technology is brilliant at enabling scale. But the question we sit with is—how do we make sure that we’re not scaling these longstanding problems?”
Andrew McAfee, co-director of MIT’s Initiative on the Digital Economy, said AI may not reduce the number of jobs available in the workplace today. But the quality of those jobs is another story. He cited the Dutch economist Jan Tinbergen who decades ago said that “Inequality is a race between technology and education.”
McAfee said, ultimately, the time to solve the economic problems AI poses for workers in the United States is when the U.S. economy is doing well—like right now.
“We do have the wind at our backs,” said Elisabeth Reynolds, executive director of MIT’s Task Force on the Work of the Future.
“We have some breathing room right now,” McAfee agreed. “Economic growth has been pretty good. Unemployment is pretty low. Interest rates are very, very low. We might not have that war chest in the future.”
What Is the Uncanny Valley?
Have you ever encountered a lifelike humanoid robot or a realistic computer-generated face that seems a bit off or unsettling, though you can’t quite explain why?
Take for instance AVA, one of the “digital humans” created by New Zealand tech startup Soul Machines as an on-screen avatar for Autodesk. Watching a lifelike digital being such as AVA can be both fascinating and disconcerting. AVA expresses empathy through her demeanor and movements: slightly raised brows, a tilt of the head, a nod.
By meticulously rendering every lash and line in its avatars, Soul Machines aimed to create a digital human that is virtually indistinguishable from a real one. But to many, rather than looking natural, AVA actually looks creepy. There’s something about it being almost human but not quite that can make people uneasy.
Like AVA, many other ultra-realistic avatars, androids, and animated characters appear stuck in a disturbing in-between world: They are so lifelike and yet they are not “right.” This zone of strangeness is known as the uncanny valley.
Uncanny Valley: Definition and History
The uncanny valley is a concept first introduced in the 1970s by Masahiro Mori, then a professor at the Tokyo Institute of Technology. The term describes Mori’s observation that as robots appear more humanlike, they become more appealing—but only up to a certain point. Upon reaching the uncanny valley, our affinity descends into a feeling of strangeness, a sense of unease, and a tendency to be scared or freaked out.
Image: Masahiro Mori
The uncanny valley as depicted in Masahiro Mori’s original graph: As a robot’s human likeness [horizontal axis] increases, our affinity towards the robot [vertical axis] increases too, but only up to a certain point. For some lifelike robots, our response to them plunges, and they appear repulsive or creepy. That’s the uncanny valley.
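Mori’s graph was a conceptual sketch rather than a plot of measured data, but its shape is easy to reproduce. The short Python snippet below draws an illustrative version; the formula is our own invention, chosen only to mimic the rise, plunge, and recovery Mori described, not taken from any dataset.

```python
# Draw an illustrative uncanny valley curve. The formula is invented to
# mimic the shape of Mori's graph: affinity rises with human likeness,
# plunges into a valley near full likeness, then recovers.
import matplotlib.pyplot as plt
import numpy as np

likeness = np.linspace(0, 1, 500)  # horizontal axis: human likeness
# A rising line minus a narrow Gaussian dip centered near 85% likeness.
affinity = likeness - 1.6 * np.exp(-((likeness - 0.85) ** 2) / 0.005)

plt.plot(likeness, affinity)
plt.axhline(0, color="gray", linewidth=0.5)
plt.xlabel("Human likeness")
plt.ylabel("Affinity")
plt.title("Illustrative uncanny valley (not Mori's data)")
plt.annotate("uncanny valley", xy=(0.85, affinity.min()),
             xytext=(0.35, -0.5), arrowprops={"arrowstyle": "->"})
plt.show()
```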
In his seminal essay for Japanese journal Energy, Mori wrote:
I have noticed that, in climbing toward the goal of making robots appear human, our affinity for them increases until we come to a valley, which I call the uncanny valley.
Later in the essay, Mori describes the uncanny valley by using an example—the first prosthetic hands:
One might say that the prosthetic hand has achieved a degree of resemblance to the human form, perhaps on a par with false teeth. However, when we realize the hand, which at first sight looked real, is in fact artificial, we experience an eerie sensation. For example, we could be startled during a handshake by its limp boneless grip together with its texture and coldness. When this happens, we lose our sense of affinity, and the hand becomes uncanny.
In an interview with IEEE Spectrum, Mori explained how he came up with the idea for the uncanny valley:
“Since I was a child, I have never liked looking at wax figures. They looked somewhat creepy to me. At that time, electronic prosthetic hands were being developed, and they triggered in me the same kind of sensation. These experiences had made me start thinking about robots in general, which led me to write that essay. The uncanny valley was my intuition. It was one of my ideas.”
Uncanny Valley Examples
To better illustrate how the uncanny valley works, here are some examples of the phenomenon. Prepare to be freaked out.
1. Telenoid
Photo: Hiroshi Ishiguro/Osaka University/ATR
Taking the top spot in the “creepiest” rankings of IEEE Spectrum’s Robots Guide, Telenoid is a robotic communication device designed by Japanese roboticist Hiroshi Ishiguro. Its bald head, lifeless face, and lack of limbs make it seem more alien than human.
2. Diego-san
Photo: Andrew Oh/Javier Movellan/Calit2
Engineers and roboticists at the University of California San Diego’s Machine Perception Lab developed this robot baby to help parents better communicate with their infants. At 1.2 meters (4 feet) tall and weighing 30 kilograms (66 pounds), Diego-san is a big baby—bigger than an average 1-year-old child.
“Even though the facial expression is sophisticated and intuitive in this infant robot, I still perceive a false smile when I’m expecting the baby to appear happy,” says Angela Tinwell, a senior lecturer at the University of Bolton in the U.K. and author of The Uncanny Valley in Games and Animation. “This, along with a lack of detail in the eyes and forehead, can make the baby appear vacant and creepy, so I would want to avoid those ‘dead eyes’ rather than interacting with Diego-san.”
3. Geminoid HI
Photo: Osaka University/ATR/Kokoro
Another one of Ishiguro’s creations, Geminoid HI is his android replica. He even took hair from his own scalp to put onto his robot twin. Ishiguro says he created Geminoid HI to better understand what it means to be human.
4. Sophia
Photo: Mikhail Tereshchenko/TASS/Getty Images
Designed by David Hanson of Hanson Robotics, Sophia is one of the most famous humanoid robots. Like Soul Machines’ AVA, Sophia displays a range of emotional expressions and is equipped with natural language processing capabilities.
5. Anthropomorphized felines
The uncanny valley doesn’t only happen with robots that adopt a human form. The 2019 live-action versions of the animated film The Lion King and the musical Cats brought the uncanny valley to the forefront of pop culture. To some fans, the photorealistic computer animations of talking lions and singing cats that mimic human movements were just creepy.
Are you feeling that eerie sensation yet?
Uncanny Valley: Science or Pseudoscience?
Despite our continued fascination with the uncanny valley, its validity as a scientific concept is highly debated. The uncanny valley wasn’t actually proposed as a scientific concept, yet has often been criticized in that light.
Mori himself said in his IEEE Spectrum interview that he didn’t explore the concept from a rigorous scientific perspective but as more of a guideline for robot designers:
Pointing out the existence of the uncanny valley was more of a piece of advice from me to people who design robots rather than a scientific statement.
Karl MacDorman, an associate professor of human-computer interaction at Indiana University who has long studied the uncanny valley, interprets the classic graph not as expressing Mori’s theory but as a heuristic for learning the concept and organizing observations.
“I believe his theory is instead expressed by his examples, which show that a mismatch in the human likeness of appearance and touch or appearance and motion can elicit a feeling of eeriness,” MacDorman says. “In my own experiments, I have consistently reproduced this effect within and across sense modalities. For example, a mismatch in the human realism of the features of a face heightens eeriness; a robot with a human voice or a human with a robotic voice is eerie.”
How to Avoid the Uncanny Valley
Unless you intend to create creepy characters or evoke a feeling of unease, you can follow certain design principles to avoid the uncanny valley. “The effect can be reduced by not creating robots or computer-animated characters that combine features on different sides of a boundary—for example, human and nonhuman, living and nonliving, or real and artificial,” MacDorman says.
To make a robot or avatar more realistic and move it beyond the valley, Tinwell says to ensure that a character’s facial expressions match its emotive tones of speech, and that its body movements are responsive and reflect its hypothetical emotional state. Special attention must also be paid to facial elements such as the forehead, eyes, and mouth, which depict the complexities of emotion and thought. “The mouth must be modeled and animated correctly so the character doesn’t appear aggressive or portray a ‘false smile’ when they should be genuinely happy,” she says.
For Christoph Bartneck, an associate professor at the University of Canterbury in New Zealand, the goal is not to avoid the uncanny valley, but to avoid bad character animations or behaviors, stressing the importance of matching the appearance of a robot with its ability. “We’re trained to spot even the slightest divergence from ‘normal’ human movements or behavior,” he says. “Hence, we often fail in creating highly realistic, humanlike characters.”
But he warns that the uncanny valley appears to be more of an uncanny cliff. “We find the likability to increase and then crash once robots become humanlike,” he says. “But we have never observed them ever coming out of the valley. You fall off and that’s it.”