Tag Archives: media
For Centuries, People Dreamed of a ...
This is part six of a six-part series on the history of natural language processing.
In February of this year, OpenAI, one of the foremost artificial intelligence labs in the world, announced that a team of researchers had built a powerful new text generator called the Generative Pre-Trained Transformer 2, or GPT-2 for short. The researchers used an unsupervised learning algorithm to train their system on a huge corpus of text, and it acquired a broad set of natural language processing (NLP) capabilities, including reading comprehension, machine translation, and the ability to generate long strings of coherent text.
But as is often the case with NLP technology, the tool held both great promise and great peril. Researchers and policy makers at the lab were concerned that their system, if widely released, could be exploited by bad actors and misappropriated for “malicious purposes.”
The people of OpenAI, which defines its mission as “discovering and enacting the path to safe artificial general intelligence,” were concerned that GPT-2 could be used to flood the Internet with fake text, thereby degrading an already fragile information ecosystem. For this reason, OpenAI decided that it would not release the full version of GPT-2 to the public or other researchers.
GPT-2 is an example of a technique in NLP called language modeling, whereby the computational system internalizes a statistical blueprint of a text so it’s able to mimic it. Just like the predictive text on your phone—which selects words based on words you’ve used before—GPT-2 can look at a string of text and then predict what the next word is likely to be based on the probabilities inherent in that text.
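To make the idea concrete, here is a toy sketch in Python of what next-word prediction looks like at its simplest: tally which words follow which in a sample text, then return the likeliest continuation. The sample sentence is invented for illustration; GPT-2 works over vastly more data and uses a neural network rather than a lookup table.

```python
from collections import Counter, defaultdict

# A made-up sample text; a real language model is trained on far more.
text = "the cat sat on the mat and the cat slept on the mat"
words = text.split()

# Tally how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the sample text."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # likeliest continuation of "the" in this sample
```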
GPT-2 can be seen as a descendant of the statistical language modeling that the Russian mathematician A. A. Markov developed in the early 20th century (covered in part three of this series).
What’s different with GPT-2, though, is the scale of the textual data modeled by the system. Whereas Markov analyzed a string of 20,000 letters to create a rudimentary model that could predict the likelihood of the next letter of a text being a consonant or a vowel, GPT-2 used 8 million web pages, gathered from links shared on Reddit, to predict what the next word might be within that entire dataset.
And whereas Markov manually trained his model by counting only two parameters—vowels and consonants—GPT-2 used cutting-edge machine learning algorithms to do linguistic analysis with over 1.5 billion parameters, burning through huge amounts of computational power in the process.
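Markov’s approach can be sketched in a few lines as well, under the simplifying assumptions described above: classify each letter as a vowel or a consonant, count the transitions between the two categories, and treat the resulting frequencies as probabilities. The placeholder string below merely stands in for the 20,000 letters of Pushkin’s Eugene Onegin that Markov tallied by hand.

```python
# Classify each letter as vowel (V) or consonant (C) and count the four
# possible transitions, the way Markov tallied Pushkin's text by hand.
sample = "for centuries people dreamed of a machine"  # placeholder text
vowels = set("aeiou")

transitions = {("C", "C"): 0, ("C", "V"): 0, ("V", "C"): 0, ("V", "V"): 0}
letters = [c for c in sample.lower() if c.isalpha()]
for a, b in zip(letters, letters[1:]):
    key = ("V" if a in vowels else "C", "V" if b in vowels else "C")
    transitions[key] += 1

# Estimated probability that the letter after a consonant is a vowel.
after_consonant = transitions[("C", "V")] + transitions[("C", "C")]
print(transitions[("C", "V")] / after_consonant)
```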
The results were impressive. In their blog post, OpenAI reported that GPT-2 could generate synthetic text in response to prompts, mimicking whatever style of text it was shown. If you prompt the system with a line of William Blake’s poetry, it can generate a line back in the Romantic poet’s style. If you prompt the system with a cake recipe, you get a newly invented recipe in response.
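Readers can try this kind of prompting themselves with the smaller GPT-2 models OpenAI did release. The sketch below assumes the Hugging Face transformers library (and PyTorch) is installed; it is one convenient, community-maintained way to load a released checkpoint, not the tooling OpenAI used for its own demos.

```python
# A minimal sketch of prompting a publicly released GPT-2 checkpoint.
# Assumes: pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # small released model
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Tyger Tyger, burning bright,"  # a line of Blake as the prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; the model predicts one next token at a time.
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```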
Perhaps the most compelling feature of GPT-2 is that it can answer questions accurately. For example, when OpenAI researchers asked the system, “Who wrote the book The Origin of Species?”—it responded: “Charles Darwin.” Though it answers correctly only some of the time, the feature does seem to be a limited realization of Gottfried Leibniz’s dream of a language-generating machine that could answer any and all human questions (described in part two of this series).
After observing the power of the new system in practice, OpenAI elected not to release the fully trained model. In the lead up to its release in February, there had been heightened awareness about “deepfakes”—synthetic images and videos, generated via machine learning techniques, in which people do and say things they haven’t really done and said. Researchers at OpenAI worried that GPT-2 could be used to essentially create deepfake text, making it harder for people to trust textual information online.
Responses to this decision varied. On one hand, OpenAI’s caution prompted an overblown reaction in the media, with articles about the “dangerous” technology feeding into the Frankenstein narrative that often surrounds developments in AI.
Others took issue with OpenAI’s self-promotion, with some even suggesting that the lab purposefully exaggerated GPT-2’s power in order to create hype, all while contravening a norm in the AI research community, where labs routinely share data, code, and pre-trained models. As machine learning researcher Zachary Lipton tweeted, “Perhaps what's *most remarkable* about the @OpenAI controversy is how *unremarkable* the technology is. Despite their outsize attention & budget, the research itself is perfectly ordinary—right in the main branch of deep learning NLP research.”
OpenAI stood by its decision to release only a limited version of GPT-2, but has since released larger models for other researchers and the public to experiment with. As yet, there has been no reported case of a widely distributed fake news article generated by the system. But there have been a number of interesting spin-off projects, including GPT-2 poetry and a webpage where you can prompt the system with questions yourself.
There’s even a Reddit group populated entirely with text produced by GPT-2-powered bots. Mimicking humans on Reddit, the bots have long conversations about a variety of topics, including conspiracy theories and Star Wars movies.
This bot-powered conversation may signify the new condition of life online, where language is increasingly created by a combination of human and non-human agents, and where maintaining the distinction between human and non-human, despite our best efforts, is increasingly difficult.
The idea of using rules, mechanisms, and algorithms to generate language has inspired people in many different cultures throughout history. But it’s in the online world that this powerful form of wordcraft may really find its natural milieu—in an environment where the identity of speakers becomes more ambiguous, and perhaps, less relevant. It remains to be seen what the consequences will be for language, communication, and our sense of human identity, which is so bound up with our ability to speak in natural language.
This is the sixth installment of a six-part series on the history of natural language processing. Last week’s post explained how an innocent Microsoft chatbot turned instantly racist on Twitter.
You can also check out our prior series on the untold history of AI.
AI and the Future of Work: The Economic ...
This week at MIT, academics and industry officials compared notes, studies, and predictions about AI and the future of work. During the discussions, an insurance company executive shared details about an AI program his firm rolled out earlier this year: a chatbot that, he said, now handles 150,000 calls per month.
Later in the day, a panelist—David Fanning, founder of PBS’s Frontline—remarked that this statistic is emblematic of broader fears he saw when reporting a new Frontline documentary about AI. “People are scared,” Fanning said of the public’s AI anxiety.
Fanning was part of a daylong symposium about AI’s economic consequences—good, bad, and otherwise—convened by MIT’s Task Force on the Work of the Future.
“Dig into every industry, and you’ll find AI changing the nature of work,” said Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). She cited recent McKinsey research that found 45 percent of the work people are paid to do today can be automated with currently available technologies. Those activities, McKinsey found, represent some US $2 trillion in wages.
However, the threat of automation—whether by AI or other technologies—isn’t as new as technologists on America’s coasts seem to believe, said panelist Fred Goff, CEO of Jobcase, Inc.
“If you live in Detroit or Toledo, where I come from, technology has been displacing jobs for the last half-century,” Goff said. “I don’t think that most people in this country have the increased anxiety that the coasts do, because they’ve been living this.”
Goff added that the challenge AI poses for the workforce is not, as he put it, “getting coal miners to code.” Rather, he said, as AI automates some jobs, it will also open opportunities for “reskilling” that may have nothing to do with AI or automation. He touted trade schools—teaching skills like welding, plumbing, and electrical work—and certification programs for sales industry software packages like Salesforce.
On the other hand, a documentarian who reported another recent program on AI—Krishna Andavolu, senior correspondent for Vice Media—said “reskilling” may not be an easy answer.
“People in rooms like this … don’t realize that a lot of people don’t want to work that much,” Andavolu said. “They’re not driven by passion for their career, they’re driven by passion for life. We’re telling a lot of these workers that they need to reskill. But to a lot of people that sounds like, ‘I’ve got to work twice as hard for what I have now.’ That sounds scary. We underestimate that at our peril.”
Part of the problem with “reskilling,” Andavolu said, is that some high-growth industries involve caregiving for seniors and in medical facilities—roles traditionally considered “feminized” careers. Destigmatizing these jobs, and raising their pay to match the salaries of displaced jobs like long-haul trucking, is another challenge.
Daron Acemoglu, MIT Institute Professor of Economics, faulted the comparatively slim funding of academic research into AI.
“There is nothing preordained about the progress of technology,” he said. Computers, the Internet, antibiotics, and sensors all grew out of government and academic research programs. What he called the “blue-sky thinking” of non-corporate AI research can also develop applications that are not purely focused on maximizing profits.
American companies, Acemoglu said, get tax breaks for capital R&D—but not for developing new technologies for their employees. “We turn around and [tell companies], ‘Use your technologies to empower workers,’” he said. “But why should they do that? Hiring workers is expensive in many ways. And we’re subsidizing capital.”
Said Sarita Gupta, director of the Ford Foundation’s Future of Work(ers) Program, “Low and middle income workers have for over 30 years been experiencing stagnant and declining pay, shrinking benefits, and less power on the job. Now technology is brilliant at enabling scale. But the question we sit with is—how do we make sure that we’re not scaling these longstanding problems?”
Andrew McAfee, co-director of MIT’s Initiative on the Digital Economy, said AI may not reduce the number of jobs available in the workplace today. But the quality of those jobs is another story. He cited the Dutch economist Jan Tinbergen, who decades ago said that “Inequality is a race between technology and education.”
McAfee said, ultimately, the time to solve the economic problems AI poses for workers in the United States is when the U.S. economy is doing well—like right now.
“We do have the wind at our backs,” said Elisabeth Reynolds, executive director of MIT’s Task Force on the Work of the Future.
“We have some breathing room right now,” McAfee agreed. “Economic growth has been pretty good. Unemployment is pretty low. Interest rates are very, very low. We might not have that war chest in the future.”
A Q&A with Cruise’s head of AI, ...
In 2016, Cruise, an autonomous vehicle startup acquired by General Motors, had about 50 employees. At the beginning of 2019, the headcount at its San Francisco headquarters—mostly software engineers, mostly working on projects connected to machine learning and artificial intelligence—hit around 1,000. Now that number is up to 1,500, and by the end of this year it’s expected to reach about 2,000, sprawling into a recently purchased building that had housed Dropbox. And that’s not counting the 200 or so tech workers that Cruise is aiming to install in a Seattle, Wash., satellite development center and a handful of others in Phoenix, Ariz., and Pasadena, Calif.
Cruise’s recent hires aren’t all engineers—it takes more than engineering talent to manage operations. And there are hundreds of so-called safety drivers who are required to sit in the 180 or so autonomous test vehicles whenever they roam the San Francisco streets. But that’s still a lot of AI experts to be hiring in a time of AI engineer shortages.
Hussein Mehanna, head of AI/ML at Cruise, says the company’s hiring efforts are on track, thanks to the pull that the autonomous vehicle challenge exerts on AI experts in other fields. Mehanna himself joined Cruise in May from Google, where he was director of engineering at Google Cloud AI. Mehanna had been there about a year and a half, a relatively quick career stop after a short stint at Snap following four years working in machine learning at Facebook.
Mehanna has been immersed in AI and machine learning research since his graduate studies in speech recognition and natural language processing at the University of Cambridge. I sat down with Mehanna to talk about his career, the challenges of recruiting AI experts and autonomous vehicle development in general—and some of the challenges specific to San Francisco. We were joined by Michael Thomas, Cruise’s manager of AI/ML recruiting, who had also spent time recruiting AI engineers at Google and then Facebook.
IEEE Spectrum: When you were at Cambridge, did you think AI was going to take off like a rocket?
Mehanna: Did I imagine that AI was going to be as dominant and prevailing and sometimes hyped as it is now? No. I do recall in 2003 that my supervisor and I were wondering if neural networks could help at all in speech recognition. I remember my supervisor saying if anyone could figure out how to use a neural net for speech he would give them a grant immediately. So he was on the right path. Now neural networks have dominated vision, speech, and language [processing]. But that boom started in 2012.
“In the early days, Facebook wasn’t that open to PhDs, it actually had a negative sentiment about researchers, and then Facebook shifted”
I didn’t [expect it], but I certainly aimed for it when [I was at] Microsoft, where I deliberately pushed my career towards machine learning instead of big data, which was more popular at the time. And [I aimed for it] when I joined Facebook.
In the early days, Facebook wasn’t that open to PhDs, or researchers. It actually had a negative sentiment about researchers. And then Facebook shifted to becoming one of the key places where PhD students wanted to do internships or join after they graduated. It was a mindset shift, they were [once] at a point in time where they thought what was needed for success wasn’t research, but now it’s different.
There was definitely an element of risk [in taking a machine learning career path], but I was very lucky, things developed very fast.
IEEE Spectrum: Is it getting harder or easier to find AI engineers to hire, given the reported shortages?
Mehanna: There is a mismatch [between job openings and qualified engineers], though it is hard to quantify it with numbers. There is good news as well: I see a lot more students diving deep into machine learning and data in their [undergraduate] computer science studies, so it’s not as bleak as it seems. But there is massive demand in the market.
Here at Cruise, demand for AI talent is just growing and growing. It might be saturating or slowing down at other kinds of companies, though, [which] are leveraging more traditional applications—ad prediction, recommendations—that have been out there in the market for a while. These are more mature, better understood problems.
I believe autonomous vehicle technology is the most difficult AI problem out there. The magnitude of the challenge is 1,000 times greater than that of other problems. These problems aren’t as well understood yet, and they require far deeper technology. And the quality at which these systems are expected to operate is through the roof.
The autonomous vehicle problem is the engineering challenge of our generation. There’s a lot of code to write, and if we think we are going to hire armies of people to write it line by line, it’s not going to work. Machine learning can accelerate the process of generating the code, but that doesn’t mean we aren’t going to have engineers; we actually need a lot more engineers.
Sometimes people worry that AI is taking jobs. It is taking some developer jobs, but it is actually generating other developer jobs as well, protecting developers from the mundane and helping them build software faster and faster.
IEEE Spectrum: Are you concerned that the demand for AI in industry is drawing out the people in academia who are needed to educate future engineers, that is, the “eating the seed corn” problem?
Mehanna: There are some negative examples in the industry, but that’s not our style. We are looking for collaborations with professors, we want to cultivate a very deep and respectful relationship with universities.
And there’s another angle to this: Universities require a thriving industry for them to thrive. It is going to be extremely beneficial for academia to have this flourishing industry in AI, because it attracts more students to academia. I think we are doing them a fantastic favor by building these career opportunities. This is not the same as in my early days, [when] people told me “don’t go to AI; go to networking, work in the mobile industry; mobile is flourishing.”
IEEE Spectrum: Where are you looking as you try to find a thousand or so engineers to hire this year?
Thomas: We look for people who want to use machine learning to solve problems. They can be in many different industries—in the financial markets, in social media, in advertising. The autonomous vehicle industry is in its infancy. You can compare it to mobile in the early days: When the iPhone first came out, everyone was looking for developers with mobile experience, but you weren’t going to find them unless you went straight to Apple, [so you had to hire other kinds of engineers]. This is the same type of thing: it is so new that you aren’t going to find experts in this area, because we are all still learning.
“You don’t have to be an autonomous vehicle expert to flourish in this world. It’s not too late to move…now would be a great time for AI experts working on other problems to shift their attention to autonomous vehicles.”
Mehanna: Because autonomous vehicle technology is the new frontier for AI experts, [the number of] people with both AI and autonomous vehicle experience is quite limited. So we are acquiring AI experts wherever they are, and helping them grow into the autonomous vehicle area. You don’t have to be an autonomous vehicle expert to flourish in this world. It’s not too late to move; even though there is a lot of great tech developed, there’s even more innovation ahead, so now would be a great time for AI experts working on other problems or applications to shift their attention to autonomous vehicles.
It feels like the Internet in 1980. It’s about to happen, but there are endless applications [to be developed over] the next few decades. Even if we can get a car to drive safely, there is the question of how can we tune the ride comfort, and then applying it all to different cities, different vehicles, different driving situations, and who knows to what other applications.
I can see how I can spend a lifetime career trying to solve this problem.
IEEE Spectrum: Why are you doing most of your development in San Francisco?
Mehanna: I think the best talent in the world is in Silicon Valley, and solving the autonomous vehicle problem is going to require the best of the best. It’s not just the engineering talent that is here, but [also] the entrepreneurial spirit. Solving the problem just as a technology is not going to be successful; you need to solve the product and the technology together. And the entrepreneurial spirit is one of the key reasons Cruise secured $7.5 billion in funding [besides GM, the company has a number of outside investors, including Honda, Softbank, and T. Rowe Price]. That [funding] is another reason Cruise is ahead of many others, because this problem requires deep resources.
“If you can do an autonomous vehicle in San Francisco you can do it almost anywhere.”
[And then there is the driving environment.] When I speak to my peers in the industry, they have a lot of respect for us, because the problems to solve in San Francisco technically are an order of magnitude harder. It is a tight environment, with a lot of pedestrians, and driving patterns that, let’s put it this way, are not necessarily the best in the nation. Which means we are seeing more problems ahead of our competitors, which gets us to better [software]. I think if you can do an autonomous vehicle in San Francisco you can do it almost anywhere.
A version of this post appears in the September 2019 print magazine as “AI Engineers: The Autonomous-Vehicle Industry Wants You.”
New AI Systems Are Here to Personalize ...
The narratives about automation and its impact on jobs range from urgent to hopeful and everything in between. Regardless of where you land, it’s hard to argue against the idea that technologies like AI and robotics will change our economy and the nature of work in the coming years.
A recent World Economic Forum report noted that some estimates show automation could displace 75 million jobs by 2022, while at the same time creating 133 million new roles. While these estimates predict a net positive for the number of new jobs in the coming decade, displaced workers will need to learn new skills to adapt to the changes. If employees can’t be retrained quickly for jobs in the changing economy, society is likely to face some degree of turmoil.
According to Bryan Talebi, CEO and founder of AI education startup Ahura AI, the same technologies erasing and creating jobs can help workers bridge the gap between the two.
Ahura is developing a product to capture biometric data from adult learners who are using computers to complete online education programs. The goal is to feed this data to an AI system that can modify and adapt their program to optimize for the most effective teaching method.
While the prospect of a computer recording and scrutinizing a learner’s behavioral data will surely generate unease in a society growing more aware of, and uncomfortable with, digital surveillance, some people may look past such discomfort if they experience improved learning outcomes. Users of the system would, in theory, have their own personalized instruction shaped specifically for their unique learning style.
And according to Talebi, their systems are showing some promise.
“Based on our early tests, our technology allows people to learn three to five times faster than traditional education,” Talebi told me.
Currently, Ahura’s system uses the video camera and microphone that come standard on the laptops, tablets, and mobile devices most students are using for their learning programs.
With the computer’s camera, Ahura can capture facial movements and micro-expressions, measure eye movements, and track a “fidget score” (a measure of how much a student moves while learning). The microphone tracks voice sentiment, and the AI leverages natural language processing to review the learner’s word usage.
From this collection of data Ahura can, according to Talebi, identify the optimal way to deliver content to each individual.
For some users that might mean a video tutorial is the best style of learning, while others may benefit more from some form of experiential or text-based delivery.
“The goal is to alter the format of the content in real time to optimize for attention and retention of the information,” said Talebi. One of Ahura’s main goals is to reduce the frequency with which students switch from their learning program to distractions like social media.
“We can now predict with a 60 percent confidence interval ten seconds before someone switches over to Facebook or Instagram. There’s a lot of work to do to get that up to a 95 percent level, so I don’t want to overstate things, but that’s a promising indication that we can work to cut down on the amount of context-switching by our students,” Talebi said.
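Ahura has not published how this prediction works, but the general shape of the problem is a familiar one in machine learning: turn a short window of biometric signals into numeric features and train a classifier to estimate the probability of an imminent context switch. The sketch below is purely hypothetical; the feature names, toy data, and choice of logistic regression are assumptions made for illustration, not Ahura’s actual pipeline.

```python
# A purely hypothetical sketch: given a few seconds of biometric features,
# estimate the probability that the learner switches to social media in the
# next ten seconds. Feature names, data, and model are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [fidget_score, gaze_on_screen_fraction, voice_sentiment]
X = np.array([
    [0.9, 0.40, -0.2],  # restless, looking away, negative tone
    [0.2, 0.95,  0.3],  # calm, focused, positive tone
    [0.7, 0.50, -0.1],
    [0.1, 0.90,  0.4],
])
y = np.array([1, 0, 1, 0])  # 1 = switched to social media within 10 seconds

model = LogisticRegression().fit(X, y)

# Probability of an imminent context switch for a new observation window.
print(model.predict_proba([[0.8, 0.45, -0.3]])[0, 1])
```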
Talebi repeatedly mentioned his ambition to leverage the same design principles used by Facebook, Twitter, and others to increase the time users spend on those platforms, but instead use them to design more compelling and even addictive education programs that can compete for attention with social media.
But the notion that Ahura’s system could one day be used to create compelling or addictive education necessarily presses against a set of justified fears surrounding data privacy. Growing anxiety surrounding the potential to misuse user data for social manipulation is widespread.
“Of course there is a real danger, especially because we are collecting so much data about our users which is specifically connected to how they consume content. And because we are looking so closely at the ways people interact with content, it’s incredibly important that this technology never be used for propaganda or to sell things to people,” Talebi tried to assure me.
Unsurprisingly (and worryingly), using this AI system to sell products to people is exactly where some investors’ ambitions immediately turn once they learn about the company’s capabilities, according to Talebi. During our discussion Talebi regularly cited the now infamous example of Cambridge Analytica, the political consulting firm hired by the Trump campaign to run a psychographically targeted persuasion campaign on the US population during the most recent presidential election.
“It’s important that we don’t use this technology in those ways. We’re aware that things can go sideways, so we’re hoping to put up guardrails to ensure our system is helping and not harming society,” Talebi said.
Talebi will surely need to take real action on such a claim, but he says the company is in the process of identifying a structure for an ethics review board—one that carries significant influence, with voting authority similar to that of the executive team and the regular board.
“Our goal is to build an ethics review board that has teeth, is diverse in both gender and background but also in thought and belief structures. The idea is to have our ethics review panel ensure we’re building things ethically,” he said.
Data privacy appears to be an important issue for Talebi, who occasionally referenced a major competitor in the space based in China. According to a recent article from MIT Tech Review outlining the astonishing growth of AI-powered education platforms in China, data privacy concerns may be less severe there than in the West.
Ahura is currently developing upgrades to an early alpha-stage prototype, but is already capturing data from students from at least one Ivy League school and a variety of other places. Their next step is to roll out a working beta version to over 200,000 users as part of a partnership with an unnamed corporate client who will be measuring the platform’s efficacy against a control group.
Going forward, Ahura hopes to add to its suite of biometric data capture by including things like pupil dilation and facial flushing, heart rate, sleep patterns, or whatever else may give their system an edge in improving learning outcomes.
As information technologies increasingly automate work, it’s likely we’ll also see rapid changes to our labor systems. It’s also looking increasingly likely that those same technologies will be used to improve our ability to give people the right skills when they need them. It may be one way to address the challenges automation is sure to bring.
Image Credit: Gerd Altmann / Pixabay