Tag Archives: build
#436202 Trump CTO Addresses AI, Facial ...
Michael Kratsios, the Chief Technology Officer of the United States, took the stage at Stanford University last week to field questions from Stanford’s Eileen Donahoe and attendees at the 2019 Fall Conference of the Institute for Human-Centered Artificial Intelligence (HAI).
Kratsios, the fourth to hold the U.S. CTO position since its creation by President Barack Obama in 2009, was confirmed in August as President Donald Trump’s first CTO. Before joining the Trump administration, he was chief of staff at investment firm Thiel Capital and chief financial officer of hedge fund Clarium Capital. Donahoe is Executive Director of Stanford’s Global Digital Policy Incubator and served as the first U.S. Ambassador to the United Nations Human Rights Council during the Obama Administration.
The conversation jumped around, hitting on both accomplishments and controversies. Kratsios touted the administration’s success in fixing policy around the use of drones, its memorandum on STEM education, and an increase in funding for basic research in AI—though the magnitude of that increase wasn’t specified. He pointed out that the Trump administration’s AI policy has been a continuation of the policies of the Obama administration, and will continue to build on that foundation. As proof of this, he pointed to Trump’s signing of the American AI Initiative earlier this year. That executive order, Kratsios said, was intended to bring various government agencies together to coordinate their AI efforts and to push the idea that AI is a tool for the American worker. The AI Initiative, he noted, also took into consideration that AI will cause job displacement, and asked private companies to pledge to retrain workers.
The administration, he said, is also looking to remove barriers to AI innovation. In service of that goal, the government will, in the next month or so, release a regulatory guidance memo instructing government agencies about “how they should think about AI technologies,” said Kratsios.
U.S. vs China in AI
A few of the exchanges between Kratsios and Donahoe hit on current hot topics, starting with the tension between the U.S. and China.
Donahoe:
“You talk a lot about the unique U.S. ecosystem. In which aspect of AI is the U.S. dominant, and where is China challenging us in dominance?”
Kratsios:
“They are challenging us on machine vision. They have more data to work with, given that they have surveillance data.”
Donahoe:
“To what extent would you say the quantity of data collected and available will be a determining factor in AI dominance?”
Kratsios:
“It makes a big difference in the short term. But we do research on how we get over these data humps. There is a future where you don’t need as much data; a lot of federal grants are going to [research in] how you can train models using less data.”
Donahoe turned the conversation to a different tension—that between innovation and values.
Donahoe:
“A lot of conversation yesterday was about the tension between innovation and values, and how do you hold those things together and lead in both realms.”
Kratsios:
“We recognized that the U.S. hadn’t signed on to principles around developing AI. In May, we signed [the Organization for Economic Cooperation and Development Principles on Artificial Intelligence], coming together with other Western democracies to say that these are values that we hold dear.
[Meanwhile,] we have adversaries around the world using AI to surveil people, to suppress human rights. That is why American leadership is so critical: We want to come out with the next great product. And we want our values to underpin the use cases.”
A member of the audience pushed further:
“Maintaining U.S. leadership in AI might have costs in terms of individuals and society. What costs should individuals and society bear to maintain leadership?”
Kratsios:
“I don’t view the world that way. Our companies big and small do not hesitate to talk about the values that underpin their technology. [That is] markedly different from the way our adversaries think. The alternatives are so dire [that we] need to push efforts to bake the values that we hold dear into this technology.”
Facial recognition
And then the conversation turned to the use of AI for facial recognition, an application which (at least for police and other government agencies) was recently banned in San Francisco.
Donahoe:
“Some private sector companies have called for government regulation of facial recognition, and there already are some instances of local governments regulating it. Do you expect federal regulation of facial recognition anytime soon? If not, what ought the parameters be?”
Kratsios:
“A patchwork of regulation of technology is not beneficial for the country. We want to avoid that. Facial recognition has important roles—for example, finding lost or displaced children. There are use cases, but they need to be underpinned by values.”
A member of the audience followed up on that topic, referring to some data presented earlier at the HAI conference on bias in AI:
“Frequently the example of finding missing children is given as the example of why we should not restrict use of facial recognition. But we saw Joy Buolamwini’s presentation on bias in data. I would like to hear your thoughts about how government thinks we should use facial recognition, knowing about this bias.”
Kratsios:
“Fairness, accountability, and robustness are things we want to bake into any technology—not just facial recognition—as we build rules governing use cases.”
Immigration and innovation
A member of the audience brought up the issue of immigration:
“One major pillar of innovation is immigration. Does your office advocate for it?”
Kratsios:
“Our office pushes for the best and brightest people from around the world to come to work here and study here. There are a few efforts we have made to move towards a more merit-based immigration system, without congressional action. [For example, in] the H-1B visa system, you go through two lotteries. We switched the order of them in order to get more people with advanced degrees through.”
The government’s tech infrastructure
Donahoe brought the conversation around to the tech infrastructure of the government itself:
“We talk about the shiny object, AI, but the 80 percent is the unsexy stuff, at federal and state levels. We don’t have a modern digital infrastructure to enable all the services—like a research cloud. How do we create this digital infrastructure?”
Kratsios:
“I couldn’t agree more; the least partisan issue in Washington is about modernizing IT infrastructure. We spend like $85 billion a year on IT at the federal level; we can certainly do a better job of using those dollars.”
#436184 Why People Demanded Privacy to Confide ...
This is part four of a six-part series on the history of natural language processing.
Between 1964 and 1966, Joseph Weizenbaum, a German American computer scientist at MIT’s artificial intelligence lab, developed the first-ever chatbot.
While there were already some rudimentary digital language generators in existence—programs that could spit out somewhat coherent lines of text—Weizenbaum’s program was the first designed explicitly for interactions with humans. The user could type in some statement or set of statements in their normal language, press enter, and receive a response from the machine. As Weizenbaum explained, his program made “certain kinds of natural-language conversation between man and computer possible.”
He named the program Eliza after Eliza Doolittle, the working-class hero of George Bernard Shaw’s Pygmalion who learns how to talk with an upper-class accent. The new Eliza was written for the 36-bit IBM 7094, an early transistorized mainframe computer, in a programming language that Weizenbaum developed called MAD-SLIP.
Because computer time was a valuable resource, Eliza could only be run via a time-sharing system; the user interacted with the program remotely via an electric typewriter and printer. When the user typed in a sentence and pressed enter, a message was sent to the mainframe computer. Eliza scanned the message for the presence of a keyword and used it in a new sentence to form a response that was sent back, printed out, and read by the user.
To encourage ongoing dialogue, Weizenbaum designed Eliza to simulate the type of conversational style used by a Rogerian psychotherapist. The program would take something the user said and reflect it back with a question, as can be seen in the example below. (Notice how the program takes keywords like “boyfriend” and “depressed” and re-uses them.)
Human: Men are all alike.
Computer: IN WHAT WAY?
Human: They're always bugging us about something or other.
Computer: CAN YOU THINK OF A SPECIFIC EXAMPLE?
Human: Well, my boyfriend made me come here.
Computer: YOUR BOYFRIEND MADE YOU COME HERE
Human: He says I'm depressed much of the time.
Computer: I AM SORRY TO HEAR YOU ARE DEPRESSED
Human: It's true. I am unhappy.
Weizenbaum chose this mode of dialogue for Eliza because it gave the impression that the computer understood what was being said without having to offer anything new to the conversation. It created the illusion of comprehension and engagement in a mere 200 lines of code.
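For a sense of how little machinery this takes, here is a minimal, hypothetical Python sketch of the same keyword-and-reflection trick. It is not Weizenbaum’s program; the real Eliza used a far richer keyword-ranking script written in MAD-SLIP, and every rule and canned response below is an illustrative assumption.

import random
import re

# Toy rules in the spirit of the Rogerian script: find a keyword and
# reflect it back as a question. (Illustrative only, not Eliza's actual script.)
RULES = [
    (re.compile(r"\bi am ([^.!?]*)", re.I),
     ["WHY DO YOU SAY YOU ARE {0}?", "HOW LONG HAVE YOU BEEN {0}?"]),
    (re.compile(r"\bmy (\w+)", re.I),
     ["TELL ME MORE ABOUT YOUR {0}.", "YOUR {0}?"]),
]
DEFAULTS = ["IN WHAT WAY?", "PLEASE GO ON.", "CAN YOU THINK OF A SPECIFIC EXAMPLE?"]

def respond(sentence):
    """Scan the input for a keyword and reuse it in a canned response."""
    for pattern, templates in RULES:
        match = pattern.search(sentence)
        if match:
            reflected = [group.upper() for group in match.groups()]
            return random.choice(templates).format(*reflected)
    return random.choice(DEFAULTS)  # no keyword found: fall back to a stock prompt

print(respond("Well, my boyfriend made me come here."))  # e.g. TELL ME MORE ABOUT YOUR BOYFRIEND.
print(respond("I am unhappy."))                          # e.g. WHY DO YOU SAY YOU ARE UNHAPPY?
print(respond("Men are all alike."))                     # no rule matches, so e.g. IN WHAT WAY?

Even this toy version shows why the effect is cheap to produce: the program never models meaning; it only mirrors the user’s own words back.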
To test Eliza’s capacity to engage an interlocutor, Weizenbaum invited students and colleagues into his office and let them chat with the machine while he looked on. He noticed, with some concern, that during their brief interactions with Eliza, many users began forming emotional attachments to the algorithm. They would open up to the machine and confess problems they were facing in their lives and relationships.
Even more surprising was that this sense of intimacy persisted even after Weizenbaum described how the machine worked and explained that it didn’t really understand anything that was being said. Weizenbaum was most troubled when his secretary, who had watched him build the program from scratch over many months, insisted that he leave the room so she could talk to Eliza in private.
This experiment with Eliza led Weizenbaum to question an idea that Alan Turing had proposed in 1950 about machine intelligence. In his paper, entitled “Computing Machinery and Intelligence,” Turing suggested that if a computer could conduct a convincingly human conversation in text, one could assume it was intelligent—an idea that became the basis of the famous Turing Test.
But Eliza demonstrated that convincing communication between a human and a machine could take place even if comprehension only flowed from one side: The simulation of intelligence, rather than intelligence itself, was enough to fool people. Weizenbaum called this the Eliza effect, and believed it was a type of “delusional thinking” that humanity would collectively suffer from in the digital age. This insight was a profound shock for Weizenbaum, and one that came to define his intellectual trajectory over the next decade.
In 1976, he published Computer Power and Human Reason: From Judgment to Calculation, which offered a long meditation on why people are willing to believe that a simple machine might be able to understand their complex human emotions.
In this book, he argues that the Eliza effect signifies a broader pathology afflicting “modern man.” In a world conquered by science, technology, and capitalism, people had grown accustomed to viewing themselves as isolated cogs in a large and uncaring machine. In such a diminished social world, Weizenbaum reasoned, people had grown so desperate for connection that they put aside their reason and judgment in order to believe that a program could care about their problems.
Weizenbaum spent the rest of his life developing this humanistic critique of artificial intelligence and digital technology. His mission was to remind people that their machines were not as smart as they were often said to be. And that even though it sometimes appeared as though they could talk, they were never really listening.
This is the fourth installment of a six-part series on the history of natural language processing. Last week’s post described Andrey Markov and Claude Shannon’s painstaking efforts to create statistical models of language for text generation. Come back next Monday for part five, “In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Conversation.”
You can also check out our prior series on the untold history of AI.
#436167 Is it Time for Tech to Stop Moving Fast ...
On Monday, I attended the 2019 Fall Conference of Stanford’s Institute for Human-Centered Artificial Intelligence (HAI). That same night I watched the Season 6 opener of the HBO TV show Silicon Valley. The debates featured in both centered on the responsibility of tech companies for the societal effects of the technologies they produce. The two events have jumbled together in my mind, perhaps because I was in a bit of a brain fog, thanks to the nasty combination of a head cold and the smoke that descended on Silicon Valley from the Northern California wildfires. But perhaps that mixture turned out to be a good thing.
What is clear, in spite of the smoke, is that this issue is something a lot of people are talking about, inside and outside of Silicon Valley (witness the viral video of Rep. Alexandria Ocasio-Cortez (D-NY) grilling Facebook CEO Mark Zuckerberg).
So, to add to that conversation, here’s my HBO Silicon Valley/Stanford HAI conference mashup.
Silicon Valley’s fictional CEO Richard Hendricks, in the opening scene of the episode, tells Congress that Facebook, Google, and Amazon only care about exploiting personal data for profit. He states:
“These companies are kings, and they rule over kingdoms far larger than any nation in history.”
Meanwhile Marietje Schaake, former member of the European Parliament and a fellow at HAI, told the conference audience of 900:
“There is a lot of power in the hands of few actors—Facebook decides who is a news source, Microsoft will run the defense department’s cloud…. I believe we need a deeper debate about which tasks need to stay in the hands of the public.”
Eric Schmidt, former CEO and executive chairman of Google, agreed. He says:
“It is important that we debate now the ethics of what we are doing, and the impact of the technology that we are building.”
Stanford Associate Professor Ge Wang, also speaking at the HAI conference, pointed out:
“‘Doing no harm’ is a vital goal, and it is not easy. But it is different from a proactive goal, to ‘do good.’”
Had Silicon Valley’s Hendricks been there, he would have agreed. He said in the episode:
“Just because it’s successful, doesn’t mean it’s good. Hiroshima was a successful implementation.”
The speakers at the HAI conference discussed the implications of moving fast and breaking things, of putting untested and unregulated technology into the world now that we know that things like public trust and even democracy can be broken.
Google’s Schmidt told the HAI audience:
“I don’t think that everything that is possible should be put into the wild in society; we should answer the question, collectively, how much risk are we willing to take.”
And Silicon Valley denizens, real and fictional, no longer think it’s OK to just say sorry afterwards. Says Schmidt:
“When you ask Facebook about various scandals, how can they still say ‘We are very sorry; we have a lot of learning to do.’ This kind of naiveté stands out of proportion to the power tech companies have. With great power should come great responsibility, or at least modesty.”
Schaake argued:
“We need more guarantees, institutions, and policies than stated good intentions. It’s about more than promises.”
Fictional CEO Hendricks thinks saying sorry is a cop-out as well. In the episode, a developer admits that his app collected user data in spite of Hendricks assuring Congress that his company doesn’t do that:
“You didn’t know at the time,” the developer says. “Don’t beat yourself up about it. But in the future, stop saying it. Or don’t; I don’t care. Maybe it will be like Google saying ‘Don’t be evil,’ or Facebook saying ‘I’m sorry, we’ll do better.’”
Hendricks doesn’t buy it:
“This stops now. I’m the boss, and this is over.”
(Well, he is fictional.)
How can government, the tech world, and the general public address this in a more comprehensive way? Out in the real world, the “what to do” discussion at Stanford HAI surrounded regulation—how much, what kind, and when.
Says the European Parliament’s Schaake:
“An often-heard argument is that government should refrain from regulating tech because [regulation] will stifle innovation. [That argument] implies that innovation is more important than democracy or the rule of law. Our problems don’t stem from overregulation, but underregulation of technologies.”
But when should that regulation happen? Stanford provost emeritus John Etchemendy, speaking from the audience at the HAI conference, said:
“I’ve been an advocate of not trying to regulate before you understand it. San Francisco’s banning of the use of facial recognition is not a good example of regulation; there are uses of facial recognition that we should allow. We want regulations that are just right, that prevent the bad things and allow the good things. So we are going to get it wrong either way: if we regulate too soon or hold off, we will get some things wrong.”
Schaake would opt for regulating sooner rather than later. She says that she often hears the argument that it is too early to regulate artificial intelligence—as well as the argument that it is too late to regulate ad-based political advertising, or online privacy. Neither, to her, makes sense. She told the HAI attendees:
“We need more guarantees than stated good intentions.”
U.S. Chief Technology Officer Michael Kratsios would go with later rather than sooner. (And, yes, the country has a CTO. President Barack Obama created the position in 2009; Kratsios is the fourth to hold the office and the first under President Donald Trump. He was confirmed in August.) Also speaking at the HAI conference, Kratsios argued:
“I don’t think we should be running to regulate anything. We are a leader [in technology] not because we had great regulations, but because we have taken a free-market approach. We have done great in driving innovation in technologies that are born free, like the Internet. Technologies born in captivity, like autonomous vehicles, lag behind.”
In the fictional world of HBO’s Silicon Valley, startup founder Hendricks has a solution—a technical one of course: the decentralized Internet. He tells Congress:
“The way we win is by creating a new, decentralized Internet, one where the behavior of companies like this will be impossible, forever. Where it is the users, not the kings, who have sovereign control over their data. I will help you build an Internet that is of the people, by the people, and for the people.”
(This is not a fictional concept, though it is a long way from wide use. Also called the decentralized Web, the concept takes the content on today’s Web and fragments it, and then replicates and scatters those fragments to hosts around the world, increasing privacy and reducing the ability of governments to restrict access.)
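For the technically curious, the fragment-and-scatter idea in that parenthetical can be sketched in a few lines of Python. Everything here, including the chunk size, the hashing scheme, the host names, and the replication factor, is an illustrative assumption rather than the protocol of any particular decentralized-web project.

import hashlib

def fragment(content, chunk_size=256 * 1024):
    """Split content into fragments, each addressed by a hash of its own bytes."""
    chunks = [content[i:i + chunk_size] for i in range(0, len(content), chunk_size)]
    return {hashlib.sha256(c).hexdigest(): c for c in chunks}

def scatter(fragment_ids, hosts, replicas=3):
    """Replicate each fragment to a few hosts chosen deterministically from
    its address, a toy stand-in for a distributed hash table."""
    placement = {}
    for fid in fragment_ids:
        ranked = sorted(hosts, key=lambda h: hashlib.sha256((fid + h).encode()).hexdigest())
        placement[fid] = ranked[:replicas]
    return placement

store = fragment(b"some web page content " * 100000)
print(scatter(store.keys(), ["host-%d" % n for n in range(10)]))

Because each fragment is named by the hash of its own contents, any host can verify the data it serves, and no single server holds, or can quietly alter, the whole document.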
If neither regulation nor technology manages to make the world safe from the unforeseen effects of new technologies, there is one more hope, according to Schaake: the millennials and subsequent generations.
Tech companies can no longer pursue growth at all costs, not if they want to keep attracting the talent they need, says Schaake. She noted that “the young generation looks at the environment, at homelessness on the streets,” and they expect their companies to tackle those and other issues and make the world a better place.
#436140 Let’s Build Robots That Are as Smart ...
Let’s face it: Robots are dumb. At best they are idiot savants, capable of doing one thing really well. In general, even those robots require specialized environments in which to do their one thing really well. This is why autonomous cars or robots for home health care are so difficult to build. They’ll need to react to an uncountable number of situations, and they’ll need a generalized understanding of the world in order to navigate them all.
Babies as young as two months already understand that an unsupported object will fall, while five-month-old babies know materials like sand and water will pour from a container rather than plop out as a single chunk. Robots lack these understandings, which hinders them as they try to navigate the world without a prescribed task and movement.
But we could see robots with a generalized understanding of the world (and the processing power required to wield it) thanks to the video-game industry. Researchers are bringing physics engines—the software that provides real-time physical interactions in complex video-game worlds—to robotics. The goal is to give robots the kind of understanding that lets them learn about the world in the same way babies do.
Giving robots a baby’s sense of physics helps them navigate the real world and can even save on computing power, according to Lochlainn Wilson, the CEO of SE4, a Japanese company building robots that could operate on Mars. SE4 plans to avoid the problems of latency caused by distance from Earth to Mars by building robots that can operate independently for a few hours before receiving more instructions from Earth.
Wilson says that his company uses simple physics engines such as PhysX to help build more-independent robots. He adds that if you can tie a physics engine to a coprocessor on the robot, the real-time basic physics intuitions won’t take compute cycles away from the robot’s primary processor, which will often be focused on a more complicated task.
Wilson’s firm occasionally still turns to a traditional graphics engine, such as Unity or the Unreal Engine, to handle the demands of a robot’s movement. In certain cases, however, such as a robot accounting for friction or understanding force, you really need a robust physics engine, Wilson says, not a graphics engine that simply simulates a virtual environment. For his projects, he often turns to the open-source Bullet Physics engine built by Erwin Coumans, who is now an employee at Google.
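To make that distinction concrete, here is a rough, hypothetical sketch of the kind of physical common sense a physics engine supplies almost for free, using the open-source Bullet engine’s Python bindings (pybullet). The scene, the masses, and the time step are illustrative assumptions, not SE4’s or anyone else’s production setup.

import pybullet as p
import pybullet_data

p.connect(p.DIRECT)  # headless simulation, no GUI
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.8)
p.loadURDF("plane.urdf")  # ground plane shipped with pybullet_data

# An unsupported 1 kg box, 10 cm on a side, released 1 meter above the ground.
box_shape = p.createCollisionShape(p.GEOM_BOX, halfExtents=[0.05, 0.05, 0.05])
box = p.createMultiBody(baseMass=1.0,
                        baseCollisionShapeIndex=box_shape,
                        basePosition=[0, 0, 1.0])

for _ in range(240):  # one simulated second at the default 240 Hz time step
    p.stepSimulation()

position, _ = p.getBasePositionAndOrientation(box)
print(position)  # the box has fallen and come to rest on the plane

p.disconnect()

A robot that can run a query like this against a fast engine, ideally on a coprocessor as Wilson suggests, gets a baby-level prediction (the unsupported box will fall) without hand-coded rules and without burdening its main processor.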
Bullet is a popular physics-engine option, but it isn’t the only one out there. Nvidia Corp., for example, has realized that its gaming and physics engines are well placed to handle the computing demands required by robots. In a lab in Seattle, Nvidia is working with teams from the University of Washington to build kitchen robots, fully articulated robot hands, and more, all equipped with Nvidia’s tech.
When I visited the lab, I watched a robot arm move boxes of food from counters to cabinets. That’s fairly straightforward, but that same robot arm could avoid my body if I got in its way, and it could adapt if I moved a box of food or dropped it onto the floor.
The robot could also understand that less pressure is needed to grasp something like a cardboard box of Cheez-It crackers versus something more durable like an aluminum can of tomato soup.
Nvidia’s silicon has already helped advance the fields of artificial intelligence and computer vision by making it possible to process multiple decisions in parallel. It’s possible that the company’s new focus on virtual worlds will help advance the field of robotics and teach robots to think like babies.
This article appears in the November 2019 print issue as “Robots as Smart as Babies.”