Tag Archives: times

#436437 Why AI Will Be the Best Tool for ...

Dmitry Kaminskiy speaks as though he were trying to unload everything he knows about the science and economics of longevity—from senolytics research that seeks to stop aging cells from spewing inflammatory proteins and other molecules to the trillion-dollar life extension industry that he and his colleagues are trying to foster—in one sitting.

At the heart of the discussion with Singularity Hub is the idea that artificial intelligence will be the engine that drives breakthroughs in how we approach healthcare and healthy aging—a concept with little traction even just five years ago.

“At that time, it was considered too futuristic that artificial intelligence and data science … might be more accurate compared to any hypothesis of human doctors,” said Kaminskiy, co-founder and managing partner at Deep Knowledge Ventures, an investment firm that is betting big on AI and longevity.

How times have changed. Artificial intelligence in healthcare is attracting more investments and deals than just about any sector of the economy, according to data research firm CB Insights. In this year’s third quarter alone, AI healthcare startups raised nearly $1.6 billion, buoyed by a $550 million mega-round from London-based Babylon Health, which uses AI to collect data from patients, analyze the information, find comparable matches, then make recommendations.

Even without the big bump from Babylon Health, AI healthcare startups raised more than $1 billion last quarter, including two companies focused on longevity therapeutics: Juvenescence and Insilico Medicine.

The latter has risen to prominence for its novel use of reinforcement learning and generative adversarial networks (GANs) to accelerate the drug discovery process. Insilico Medicine recently published a seminal paper that demonstrated how such an AI system could generate a drug candidate in just 46 days. Co-founder and CEO Alex Zhavoronkov said he believes there is no greater goal in healthcare today—or, really, any venture—than extending the healthy years of the human lifespan.
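The article doesn’t detail Insilico’s architecture, but the adversarial idea at the heart of a GAN fits in a few dozen lines. Below is a deliberately toy PyTorch sketch that generates points from a one-dimensional distribution rather than molecules, just to show the generator-versus-discriminator loop; all names and hyperparameters are illustrative assumptions, not Insilico’s system.

```python
# Toy GAN sketch: a generator learns to mimic a 1-D Gaussian while a
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 2.0  # stand-in for real samples

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))               # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()) # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1. Train the discriminator to separate real samples from fakes.
    real, fake = real_data(64), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2. Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should drift toward 2.0
```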

“I don’t think that there is anything more important than that,” he told Singularity Hub, explaining that an unhealthy society is detrimental to a healthy economy. “I think that it’s very, very important to extend healthy, productive lifespan just to fix the economy.”

An Aging Crisis
The surge of interest in longevity is coming at a time when life expectancy in the US is actually dropping, despite the fact that we spend more money on healthcare than any other nation.

A new paper in the Journal of the American Medical Association found that after six decades of gains, life expectancy for Americans has decreased since 2014, particularly among young and middle-aged adults. While some of the causes are societal, such as drug overdoses and suicide, others are health-related.

While average life expectancy in the US is 78, Kaminskiy noted that healthy life expectancy is about ten years less.

To Zhavoronkov’s point about the economy (a topic of great interest to Kaminskiy as well), the US spent $1.1 trillion on chronic diseases in 2016, according to a report from the Milken Institute, with diabetes, cardiovascular conditions, and Alzheimer’s among the most costly conditions for the healthcare system. When the indirect costs of lost economic productivity are included, the total price tag of chronic diseases in the US is $3.7 trillion, nearly 20 percent of GDP.

“So this is the major negative feedback on the national economy and creating a lot of negative social [and] financial issues,” Kaminskiy said.

Investing in Longevity
That has convinced Kaminskiy that an economy focused on extending healthy human lifespans—including the financial instruments and institutions required to support a long-lived population—is the best way forward.

He has co-authored a book on the topic with Margaretta Colangelo, another managing partner at Deep Knowledge Ventures, which has launched a specialized investment fund, Longevity.Capital, focused on the longevity industry. Kaminskiy estimates that there are now about 20 such investment funds dedicated to funding life extension companies.

In November at the inaugural AI for Longevity Summit in London, he and his collaborators also introduced the Longevity AI Consortium, an academic-industry initiative at King’s College London. Eventually, the research center will include an AI Longevity Accelerator program to serve as a bridge between startups and UK investors.

Deep Knowledge Ventures has committed about £7 million ($9 million) over the next three years to the accelerator program and to establishing similar consortiums in other regions of the world, according to Franco Cortese, a partner at Longevity.Capital and director of the Aging Analytics Agency, which has produced a series of reports on longevity.

A Cure for What Ages You
One of the most recent is an overview of Biomarkers for Longevity. A biomarker, in the case of longevity, is a measurable component of health that can indicate a disease state or a more general decline in health associated with aging. Examples range from something as simple as BMI as an indicator of obesity, which is associated with a number of chronic diseases, to sophisticated measurements of telomeres, the protective ends of chromosomes that shorten as we age.
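The simplest biomarker mentioned above, BMI, is just weight divided by height squared, which makes it a convenient toy example of turning a raw measurement into a health flag. A minimal sketch; the cutoff of 30 is the conventional obesity threshold, and the function names are ours:

```python
# BMI = weight (kg) / height (m) squared; 30+ is the conventional
# obesity cutoff. Purely illustrative helper functions.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def flag_obesity(weight_kg: float, height_m: float, cutoff: float = 30.0) -> bool:
    return bmi(weight_kg, height_m) >= cutoff

print(bmi(85, 1.75))           # ~27.8
print(flag_obesity(95, 1.75))  # True (BMI ~31)
```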

While some researchers are working on moonshot therapies to reverse or slow aging—with a few even arguing we could extend human life on the order of centuries—Kaminskiy said he believes understanding biomarkers of aging could make more radical interventions unnecessary.

In this vision of healthcare, people would be able to monitor their health 24-7, with sensors attuned to various biomarkers that could indicate the onset of everything from the flu to diabetes. AI would be instrumental not just in ingesting the billions of data points required to develop such a system, but also in determining what therapies, treatments, or micro-doses of a drug or supplement would be required to maintain homeostasis.

“Consider it like Tesla with many, many detectors, analyzing the behavior of the car in real time, and a cloud computing system monitoring those signals in real time with high frequency,” Kaminskiy explained. “So the same shall be applied for humans.”
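Kaminskiy’s car analogy maps naturally onto a streaming anomaly detector: keep a rolling baseline per biomarker and flag readings that drift from it. A minimal sketch; the biomarker, window size, and thresholds are illustrative assumptions, not any vendor’s system:

```python
# Rolling z-score monitor: flag a reading that deviates sharply from the
# recent baseline for that biomarker.
from collections import deque
import statistics

class BiomarkerMonitor:
    def __init__(self, window: int = 100, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Return True if this reading is anomalous vs. the rolling baseline."""
        if len(self.history) >= 10:
            mean = statistics.fmean(self.history)
            stdev = statistics.stdev(self.history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.z_threshold
        else:
            anomalous = False  # not enough baseline yet
        self.history.append(value)
        return anomalous

glucose = BiomarkerMonitor()
for reading in [92, 95, 90, 94, 91, 93, 96, 92, 90, 94, 95, 170]:
    if glucose.observe(reading):
        print(f"alert: glucose reading {reading} deviates from baseline")
```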

And only sophisticated algorithms, Kaminskiy argued, can make longevity healthcare work at mass scale while remaining tailored to the individual. Precision medicine becomes preventive medicine. Healthcare truly becomes a system to support health rather than a way to fight disease.

Image Credit: Photo by h heyerlein on Unsplash

Posted in Human Robots

#436403 Why Your 5G Phone Connection Could Mean ...

Will getting full bars on your 5G connection mean getting caught out by sudden weather changes?

The question may strike you as hypothetical, nonsensical even, but it is at the core of ongoing disputes between meteorologists and telecommunications companies. Everyone else, including you and me, is caught in the middle, wanting both 5G’s faster connection speeds and precise information about our increasingly unpredictable weather. So why can’t we have both?

Perhaps we can, but because of the way 5G networks function, it may take some special technology—specifically, artificial intelligence.

The Bandwidth Worries
Around the world, the first 5G networks are already being rolled out. The networks use a variety of frequencies to transmit data to and from devices at speeds up to 100 times faster than existing 4G networks.

One of the frequency bands used sits between 24.25 and 24.45 gigahertz (GHz). In a recent FCC auction, telecommunications companies paid a combined $2 billion for the 5G usage rights for this spectrum in the US.

However, meteorologists are concerned that transmissions near the lower end of that range can interfere with their ability to accurately measure water vapor in the atmosphere. Wired reported that Neil Jacobs, acting chief of the National Oceanic and Atmospheric Administration (NOAA), told the US House Subcommittee on the Environment that 5G interference could substantially cut the amount of weather data satellites can gather. As a result, forecast accuracy could drop by as much as 30 percent.

Among the consequences could be less time to prepare for hurricanes, and it may become harder to predict storms’ paths. Due to the interconnectedness of weather patterns, measurement issues in one location can affect other areas too. Lack of accurate atmospheric data from the US could, for example, lead to less accurate forecasts for weather patterns over Europe.

The Numbers Game
Water vapor emits a faint signal at 23.8 GHz. Weather satellites measure these signals, and the data is used to gauge atmospheric humidity levels. Meteorologists have expressed concern that 5G transmissions in the neighboring range can disturb those readings, because it would be nigh on impossible to tell whether a given signal comes from water vapor or from a stray 5G transmission.

Furthermore, 5G disturbances in other frequency bands could make forecasting even more difficult. Rain and snow emit signals at frequencies around 36-37 GHz, the 50.2-50.4 GHz band is used to measure atmospheric temperatures, and 86-92 GHz is used to observe clouds and ice. All of these bands are under consideration for international 5G signals. Some have warned that the wider consequences could set weather forecasting back to the 1980s.

Telecommunications companies and industry groups have argued back, saying that weather sensors aren’t as susceptible to interference as meteorologists fear, and that 5G devices and signals will interfere with weather forecasting far less than organizations like NOAA predict. Since very little scientific research has been carried out to examine either party’s claims, we seem stuck in a ‘wait and see’ situation.

To offset some of the possible effects, the two groups have tried to reach a consensus on a noise buffer between the 5G transmissions and water-vapor signals. It could be likened to limiting the noise from busy roads or loud sound systems to avoid bothering neighboring buildings.

The World Meteorological Organization was looking to establish a buffer of -55 decibel watts (dBW). In Europe, regulators have settled on a -42 dBW buffer for 5G base stations. For comparison, the US Federal Communications Commission has advocated for a -20 dBW buffer, which would, in reality, allow more than 150 times as much noise as the European proposal.
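Because decibel watts are a logarithmic unit, the gap between those proposals is larger than it looks. Here is a quick sanity check of the “150 times” figure; the helper function is a minimal sketch of ours, not from any regulator’s tooling:

```python
# Decibel watts are logarithmic, so the gap between two buffers translates
# to a power ratio via 10^(difference / 10).
def noise_ratio(buffer_a_dbw: float, buffer_b_dbw: float) -> float:
    """How many times more noise power buffer_a allows than buffer_b."""
    return 10 ** ((buffer_a_dbw - buffer_b_dbw) / 10)

print(noise_ratio(-20, -42))  # FCC vs. Europe: ~158x more noise allowed
print(noise_ratio(-42, -55))  # Europe vs. WMO proposal: ~20x
```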

How AI Could Help
Much of the conversation about 5G’s possible influence on future weather predictions is centered around mobile phones. However, the phones are far from the only systems that will be receiving and transmitting signals on 5G. Self-driving cars and the Internet of Things are two other technologies that could soon be heavily reliant on faster wireless signals.

Densely populated areas are likely going to be the biggest emitters of 5G signals, leading to a suggestion to only gather water-vapor data over oceans.

Another option is to develop artificial intelligence (AI) approaches to clean or process weather data. AI is already playing an increasing role in weather forecasting. For example, in 2016 IBM bought The Weather Company for $2 billion, with the goal of combining the two companies’ models and data in IBM’s Watson to create more accurate forecasts; the combined system would also be able to predict increases or drops in business revenues due to weather changes. Monsanto has also been investing in AI for forecasting, in this case to provide agriculture-related weather predictions.
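The article doesn’t spell out how such cleaning would work, but one plausible flavor of the idea is to treat 5G leakage as an additive outlier in the 23.8 GHz channel and mask readings that disagree with their neighbors. A minimal, purely illustrative sketch; real radiometer pipelines are far more involved:

```python
# Flag statistical outliers in a channel using a robust (median/MAD) z-score
# and replace them with NaN so downstream models can ignore them.
import numpy as np

def mask_contaminated(readings: np.ndarray, z_threshold: float = 4.0) -> np.ndarray:
    """Return readings with statistical outliers replaced by NaN."""
    median = np.median(readings)
    mad = np.median(np.abs(readings - median)) or 1e-9  # robust spread
    z = 0.6745 * (readings - median) / mad
    return np.where(np.abs(z) > z_threshold, np.nan, readings)

# Simulated brightness temperatures (K) with one contaminated sample:
channel = np.array([152.1, 151.8, 152.4, 151.9, 178.6, 152.0, 152.2])
print(mask_contaminated(channel))  # the 178.6 K spike becomes NaN
```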

Smartphones may also provide a piece of the weather forecasting puzzle. Studies have shown how data from thousands of smartphones can help increase the accuracy of storm predictions, as well as estimates of a storm’s force.

“Weather stations cost a lot of money,” Cliff Mass, an atmospheric scientist at the University of Washington in Seattle, told Inside Science, adding, “If there are already 20 million smartphones, you might as well take advantage of the observation system that’s already in place.”

Smartphones may not be the solution when it comes to finding new ways of gathering the atmospheric data on water vapor that 5G could disrupt. But they do go to show that some technologies open new doors while others shut them.

Image Credit: Image by Free-Photos from Pixabay

Posted in Human Robots

#436252 After AI, Fashion and Shopping Will ...

AI and broadband are eating retail for breakfast. In the first half of 2019, we’ve seen 19 retailer bankruptcies. And the retail apocalypse is only accelerating.

What’s coming next is astounding. Why drive when you can speak? Revenue from products purchased via voice commands is expected to quadruple from today’s US$2 billion to US$8 billion by 2023.

Virtual reality, augmented reality, and 3D printing are converging with artificial intelligence, drones, and 5G to transform shopping on every dimension. And as a result, shopping is becoming dematerialized, demonetized, democratized, and delocalized… a top-to-bottom transformation of the retail world.

Welcome to Part 1 of our series on the future of retail, a deep-dive into AI and its far-reaching implications.

Let’s dive in.

A Day in the Life of 2029
Welcome to April 21, 2029, a sunny day in Dallas. You’ve got a fundraising luncheon tomorrow, but nothing to wear. The last thing you want to do is spend the day at the mall.

No sweat. Your body image data is still current, as you were scanned only a week ago. Put on your VR headset and have a conversation with your AI. “It’s time to buy a dress for tomorrow’s event” is all you have to say. In a moment, you’re teleported to a virtual clothing store. Zero travel time. No freeway traffic, parking hassles, or angry hordes wielding baby strollers.

Instead, you’ve entered your own personal clothing store. Everything is in your exact size…. And I mean everything. The store has access to nearly every designer and style on the planet. Ask your AI to show you what’s hot in Shanghai, and presto—instant fashion show. Every model strutting down the runway looks exactly like you, only dressed in Shanghai’s latest.

When you’re done selecting an outfit, your AI pays the bill. And as your new clothes are being 3D printed at a warehouse—before speeding your way via drone delivery—a digital version has been added to your personal inventory for use at future virtual events.

The cost? Thanks to an era of no middlemen, less than half of what you pay in stores today. Yet this future is not all that far off…

Digital Assistants
Let’s begin with the basics: the act of turning desire into purchase.

Most of us navigate shopping malls or online marketplaces alone, hoping to stumble across the right item and fit. But if you’re lucky enough to employ a personal assistant, you have the luxury of describing what you want to someone who knows you well enough to buy that exact right thing most of the time.

For the rest of us, enter the digital assistant.

Right now, the four horsemen of the retail apocalypse are waging war for our wallets. Amazon’s Alexa, Google’s Now, Apple’s Siri, and Alibaba’s Tmall Genie are going head-to-head in a battle to become the platform du jour for voice-activated, AI-assisted commerce.

For baby boomers who grew up watching Captain Kirk talk to the Enterprise’s computer on Star Trek, digital assistants seem a little like science fiction. But for millennials, it’s just the next logical step in a world that is auto-magical.

And as those millennials enter their consumer prime, revenue from products purchased via voice-driven commands is projected to leap from today’s US$2 billion to US$8 billion by 2023.

We are already seeing a major change in purchasing habits. On average, consumers using Amazon Echo spent more than standard Amazon Prime customers: US$1,700 versus US$1,300.

And as far as an AI fashion advisor goes, those too are here, courtesy of both Alibaba and Amazon. During its annual Singles’ Day (November 11) shopping festival, Alibaba’s FashionAI concept store uses deep learning to make suggestions based on advice from human fashion experts and store inventory, driving a significant portion of the day’s US$25 billion in sales.

Similarly, Amazon’s shopping algorithm makes personalized clothing recommendations based on user preferences and social media behavior.
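Neither company publishes the details of these recommenders, but the core pattern behind most of them is simple: represent items and a shopper’s taste as vectors, then rank items by similarity. A minimal sketch with invented data; the item names and feature vectors are illustrative assumptions, not Amazon’s or Alibaba’s actual features:

```python
# Rank items by cosine similarity between item vectors and a user-taste
# vector. Real systems learn embeddings from purchase and browsing
# histories at vastly larger scale.
import numpy as np

items = {
    "silk blouse":   np.array([0.9, 0.1, 0.3]),
    "denim jacket":  np.array([0.2, 0.8, 0.5]),
    "evening dress": np.array([0.8, 0.2, 0.9]),
}
user_taste = np.array([0.7, 0.2, 0.8])  # e.g., averaged from past purchases

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(items, key=lambda name: cosine(user_taste, items[name]), reverse=True)
print(ranked)  # best match first
```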

Customer Service
But AI is disrupting more than just personalized fashion and e-commerce. Its next big break will take place in the customer service arena.

According to a recent Zendesk study, good customer service increases the likelihood of a purchase by 42 percent, while bad customer service translates into a 52 percent chance of losing that sale forever. This means more than half of us will stop shopping at a store due to a single disappointing customer service interaction. These are significant financial stakes. They’re also problems perfectly suited for an AI solution.

During the 2018 Google I/O conference, CEO Sundar Pichai demoed Google Duplex, the company’s next-generation digital assistant. Pichai played the audience a series of pre-recorded phone calls made by Google Duplex. The first call made a reservation at a restaurant; the second booked a haircut appointment, amusing the audience with a long “hmmm” mid-call.

In neither case did the person on the other end of the phone have any idea they were talking to an AI. The system’s success speaks to how seamlessly AI can blend into our retail lives and how convenient it will continue to make them. The same technology that can make phone calls for consumers can also answer phones for retailers—a development that’s unfolding in two different ways:

(1) Customer service coaches: First, for organizations interested in keeping humans involved, there’s Beyond Verbal, a Tel Aviv-based startup that has built an AI customer service coach. Simply by analyzing customer voice intonation, the system can tell whether the person on the phone is about to blow a gasket, is genuinely excited, or anything in between.

Based on research involving over 70,000 subjects in more than 30 languages, Beyond Verbal’s app can detect 400 different markers of human moods, attitudes, and personality traits. It has already been integrated into call centers to help human sales agents understand and react to customer emotions, making those calls more pleasant and more profitable.

For example, by analyzing word choice and vocal style, Beyond Verbal’s system can tell what kind of shopper the person on the line actually is. If they’re an early adopter, the AI alerts the sales agent to offer them the latest and greatest. If they’re more conservative, it suggests items more tried-and-true.
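Beyond Verbal’s models are proprietary, but the general shape of voice-based emotion detection is well known: extract prosodic features from the audio, then train a classifier on labeled calls. A rough sketch under those assumptions; the file paths, labels, and feature set are invented for illustration, and it requires librosa and scikit-learn:

```python
# Extract simple prosodic features from call audio and train a classifier
# on labeled examples. Not Beyond Verbal's system; purely illustrative.
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def prosody_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # timbre/intonation proxy
    rms = librosa.feature.rms(y=y)                      # loudness over time
    # Summarize each feature track by its mean and spread over the clip.
    return np.concatenate(
        [mfcc.mean(axis=1), mfcc.std(axis=1), [rms.mean(), rms.std()]]
    )

# Hypothetical labeled clips: 1 = frustrated caller, 0 = calm caller.
X = np.stack([prosody_features(f"calls/clip_{i}.wav") for i in range(200)])
labels = np.load("calls/labels.npy")
model = LogisticRegression(max_iter=1000).fit(X, labels)
```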

(2) Replacing customer service agents: Second, companies like New Zealand’s Soul Machines are working to replace human customer service agents altogether. Powered by IBM’s Watson, Soul Machines builds lifelike customer service avatars designed for empathy, making it one of many companies helping to pioneer the field of emotionally intelligent computing.

With their technology, 40 percent of all customer service interactions are now resolved with a high degree of satisfaction, no human intervention needed. And because the system is built using neural nets, it’s continuously learning from every interaction—meaning that percentage will continue to improve.

The number of these interactions continues to grow as well. Software manufacturer Autodesk now includes a Soul Machine avatar named AVA (Autodesk Virtual Assistant) in all of its new offerings. She lives in a small window on the screen, ready to soothe tempers, troubleshoot problems, and forever banish those long tech support hold times.

For Daimler Financial Services, Soul Machines built an avatar named Sarah, who helps customers with arguably three of modernity’s most annoying tasks: financing, leasing, and insuring a car.

This isn’t just about AI—it’s about AI converging with additional exponentials. Add networks and sensors to the story and it raises the scale of disruption, upping the FQ—the frictionless quotient—in our frictionless shopping adventure.

Final Thoughts
AI makes retail cheaper, faster, and more efficient, touching everything from customer service to product delivery. It also redefines the shopping experience, making it frictionless and—once we allow AI to make purchases for us—ultimately invisible.

Prepare for a future in which shopping is dematerialized, demonetized, democratized, and delocalized—otherwise known as “the end of malls.”

Of course, if you wait a few more years, you’ll be able to take an autonomous flying taxi to Westfield’s Destination 2028—so perhaps today’s converging exponentials are not so much spelling the end of malls but rather the beginning of an experience economy far smarter, more immersive, and whimsically imaginative than today’s shopping centers.

Either way, it’s a top-to-bottom transformation of the retail world.

Over the coming blog series, we will continue our discussion of the future of retail. Stay tuned to learn new implications for your business and how to future-proof your company in an age of smart, ultra-efficient, experiential retail.

Want a copy of my next book? If you’ve enjoyed this blogified snippet of The Future is Faster Than You Think, sign up here to be eligible for an early copy and access up to $800 worth of pre-launch giveaways!

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs whom I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2020 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University — your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Image by Pexels from Pixabay

Posted in Human Robots

#436218 An AI Debated Its Own Potential for Good ...

Artificial intelligence is going to overhaul the way we live and work. But will the changes it brings be for the better? As the technology slowly develops (let’s remember that right now, we’re still very much in the narrow AI space and nowhere near an artificial general intelligence), whether it will end up doing us more harm than good is a question at the top of everyone’s mind.

What kind of response might we get if we posed this question to an AI itself?

Last week at the Cambridge Union in England, IBM did just that. Its Project Debater (an AI that narrowly lost a debate to human debating champion Harish Natarajan in February) gave the opening arguments in a debate about the promise and peril of artificial intelligence.

Critical thinking, linking different lines of thought, and anticipating counter-arguments are all valuable debating skills that humans can practice and refine. While these skills are tougher for an AI to get good at since they often require deeper contextual understanding, AI does have a major edge over humans in absorbing and analyzing information. In the February debate, Project Debater used IBM’s cloud computing infrastructure to read hundreds of millions of documents and extract relevant details to construct an argument.

This time around, Debater looked through 1,100 arguments for or against AI. The arguments were submitted to IBM by the public during the week prior to the debate, through a website set up for that purpose. Of the 1,100 submissions, the AI classified 570 as anti-AI, or of the opinion that the technology will bring more harm to humanity than good. 511 arguments were found to be pro-AI, and the rest were irrelevant to the topic at hand.
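IBM hasn’t published Project Debater’s classification pipeline in this article, so as a stand-in, here is a minimal sketch of the same task: sorting free-text arguments into pro-AI, anti-AI, or irrelevant, using a generic TF-IDF and logistic regression pipeline with invented training examples:

```python
# Toy three-way stance classifier standing in for Debater's (unpublished)
# argument-classification step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "AI will free people from dangerous and monotonous jobs",
    "machine learning can catch diseases earlier than doctors can",
    "AI will entrench the biases of the people who build it",
    "automation will destroy far more livelihoods than it creates",
    "my cat refuses to eat breakfast",
    "the weather in Cambridge was lovely last week",
]
train_labels = ["pro", "pro", "anti", "anti", "irrelevant", "irrelevant"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(train_texts, train_labels)

print(classifier.predict(["robots could take over hazardous factory work"]))
```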

Debater grouped the arguments into five themes; the technology’s ability to take over dangerous or monotonous jobs was a pro-AI theme, and on the flip side was its potential to perpetuate the biases of its creators. “AI companies still have too little expertise on how to properly assess datasets and filter out bias,” the tall black box that houses Project Debater said. “AI will take human bias and will fixate it for generations.”

After Project Debater kicked off the debate by giving opening arguments for both sides, two teams of people took over, elaborating on its points and coming up with their own counter-arguments.

In the end, an audience poll voted in favor of the pro-AI side, but just barely; 51.2 percent of voters felt convinced that AI can help us more than it can hurt us.

The software’s natural language processing was able to identify racist, obscene, or otherwise inappropriate comments and weed them out as being irrelevant to the debate. But it also repeated the same arguments multiple times, and it miscategorized a statement about bias as pro-AI rather than anti-AI.

IBM has been working on Project Debater for over six years, and though the team aims to iron out small glitches like these, the system’s goal isn’t ultimately to outwit and defeat humans. On the contrary, the AI is meant to support our decision-making by taking in and processing huge amounts of information in a nuanced way, more quickly than we ever could.

IBM engineer Noam Slonim envisions Project Debater’s tech being used, for example, by a government seeking citizens’ feedback about a new policy. “This technology can help to establish an interesting and effective communication channel between the decision maker and the people that are going to be impacted by the decision,” he said.

As for the question of whether AI will do more good or harm, perhaps Sylvie Delacroix put it best. A professor of law and ethics at the University of Birmingham who argued on the pro-AI side of the debate, she pointed out that the impact AI will have depends on the way we design it, saying “AI is only as good as the data it has been fed.”

She’s right; rather than asking what sort of impact AI will have on humanity, we should start by asking what sort of impact we want it to have. The people working on AI—not AIs themselves—are ultimately responsible for how much good or harm will be done.

Image Credit: IBM Project Debater at Cambridge Union Society, photo courtesy of IBM Research

Posted in Human Robots

#436188 The Blogger Behind “AI ...

Sure, artificial intelligence is transforming the world’s societies and economies—but can an AI come up with plausible ideas for a Halloween costume?

Janelle Shane has been asking such probing questions since she started her AI Weirdness blog in 2016. She specializes in training neural networks (which underpin most of today’s machine learning techniques) on quirky data sets such as compilations of knitting instructions, ice cream flavors, and names of paint colors. Then she asks the neural net to generate its own contributions to these categories—and hilarity ensues. AI is not likely to disrupt the paint industry with names like “Ronching Blue,” “Dorkwood,” and “Turdly.”
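Shane trains real neural networks on these datasets; as a much simpler stand-in that shows the same “learn the surface statistics, then sample” flavor, here is a character-level Markov chain generator. The paint-name training list is invented for illustration:

```python
# Order-3 character-level Markov chain: learn which character tends to
# follow each 3-character context, then sample new names.
import random
from collections import defaultdict

names = ["Dusty Rose", "Ocean Mist", "Burnt Sienna", "Forest Shadow",
         "Misty Dawn", "Desert Sand", "Stormy Sky", "Dusky Amber"]

ORDER = 3
model = defaultdict(list)
for name in names:
    padded = "^" * ORDER + name + "$"  # ^ pads the start, $ marks the end
    for i in range(len(padded) - ORDER):
        model[padded[i:i + ORDER]].append(padded[i + ORDER])

def sample() -> str:
    state, out = "^" * ORDER, []
    while True:
        ch = random.choice(model[state])
        if ch == "$":
            return "".join(out)
        out.append(ch)
        state = state[1:] + ch

print([sample() for _ in range(5)])  # e.g., "Dusty Dawn", "Forest Sand", ...
```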

Shane’s antics have a serious purpose. She aims to illustrate the real limitations of today’s AI, and to counteract the prevailing narrative that describes AI as well on its way to superintelligence and complete human domination. “The danger of AI is not that it’s too smart,” Shane writes in her new book, “but that it’s not smart enough.”

The book, which came out on Tuesday, is called You Look Like a Thing and I Love You. It takes its odd title from a list of AI-generated pick-up lines, all of which would at least get a person’s attention if shouted, preferably by a robot, in a crowded bar. Shane’s book is shot through with her trademark absurdist humor, but it also contains real explanations of machine learning concepts and techniques. It’s a painless way to take AI 101.

She spoke with IEEE Spectrum about the perils of placing too much trust in AI systems, the strange AI phenomenon of “giraffing,” and her next potential Halloween costume.

Janelle Shane on . . .

The un-delicious origin of her blog
“The narrower the problem, the smarter the AI will seem”
Why overestimating AI is dangerous
Giraffing!
Machine and human creativity

The un-delicious origin of her blog

IEEE Spectrum: You studied electrical engineering as an undergrad, then got a master’s degree in physics. How did that lead to you becoming the comedian of AI?
Janelle Shane: I’ve been interested in machine learning since freshman year of college. During orientation at Michigan State, a professor who worked on evolutionary algorithms gave a talk about his work. It was full of the most interesting anecdotes–some of which I’ve used in my book. He told an anecdote about people setting up a machine learning algorithm to do lens design, and the algorithm did end up designing an optical system that works… except one of the lenses was 50 feet thick, because they didn’t specify that it couldn’t do that.
I started working in his lab on optics, doing ultra-short laser pulse work. I ended up doing a lot more optics than machine learning, but I always found it interesting. One day I came across a list of recipes that someone had generated using a neural net, and I thought it was hilarious and remembered why I thought machine learning was so cool. That was in 2016, ages ago in machine learning land.
Spectrum: So you decided to “establish weirdness as your goal” for your blog. What was the first weird experiment that you blogged about?
Shane: It was generating cookbook recipes. The neural net came up with ingredients like: “Take ¼ pounds of bones or fresh bread.” That recipe started out: “Brown the salmon in oil, add creamed meat to the mixture.” It was making mistakes that showed the thing had no memory at all.
Spectrum: You say in the book that you can learn a lot about AI by giving it a task and watching it flail. What do you learn?
Shane: One thing you learn is how much it relies on surface appearances rather than deep understanding. With the recipes, for example: It got the structure of title, category, ingredients, instructions, yield at the end. But when you look more closely, it has instructions like “Fold the water and roll it into cubes.” So clearly this thing does not understand water, let alone the other things. It’s recognizing certain phrases that tend to occur, but it doesn’t have a concept that these recipes are describing something real. You start to realize how very narrow the algorithms in this world are. They only know exactly what we tell them in our data set.
“The narrower the problem, the smarter the AI will seem”

Spectrum: That makes me think of DeepMind’s AlphaGo, which was universally hailed as a triumph for AI. It can play the game of Go better than any human, but it doesn’t know what Go is. It doesn’t know that it’s playing a game.
Shane: It doesn’t know what a human is, or if it’s playing against a human or another program. That’s also a nice illustration of how well these algorithms do when they have a really narrow and well-defined problem.
The narrower the problem, the smarter the AI will seem. If it’s not just doing something repeatedly but instead has to understand something, coherence goes down. For example, take an algorithm that can generate images of objects. If the algorithm is restricted to birds, it could do a recognizable bird. If this same algorithm is asked to generate images of any animal, if its task is that broad, the bird it generates becomes an unrecognizable brown feathered smear against a green background.
Spectrum: That sounds… disturbing.
Shane: It’s disturbing in a weird amusing way. What’s really disturbing is the humans it generates. It hasn’t seen them enough times to have a good representation, so you end up with an amorphous, usually pale-faced thing with way too many orifices. If you asked it to generate an image of a person eating pizza, you’ll have blocks of pizza texture floating around. But if you give that image to an image-recognition algorithm that was trained on that same data set, it will say, “Oh yes, that’s a person eating pizza.”
Why overestimating AI is dangerous

Spectrum: Do you see it as your role to puncture the AI hype?
Shane: I do see it that way. Not a lot of people are bringing out this side of AI. When I first started posting my results, I’d get people saying, “I don’t understand, this is AI, shouldn’t it be better than this? Why doesn’t it understand?” Many of the impressive examples of AI have a really narrow task, or they’ve been set up to hide how little understanding it has. There’s a motivation, especially among people selling products based on AI, to represent the AI as more competent and understanding than it actually is.
Spectrum: If people overestimate the abilities of AI, what risk does that pose?
Shane: I worry when I see people trusting AI with decisions it can’t handle, like hiring decisions or decisions about moderating content. These are really tough tasks for AI to do well on. There are going to be a lot of glitches. I see people saying, “The computer decided this so it must be unbiased, it must be objective.”

That’s another thing I find myself highlighting in the work I’m doing. If the data includes bias, the algorithm will copy that bias. You can’t tell it not to be biased, because it doesn’t understand what bias is. I think that message is an important one for people to understand.
If there’s bias to be found, the algorithm is going to go after it. It’s like, “Thank goodness, finally a signal that’s reliable.” But for a tough problem like: Look at these resumes and decide who’s best for the job. If its task is to replicate human hiring decisions, it’s going to glom onto gender bias and race bias. There’s an example in the book of a hiring algorithm that Amazon was developing that discriminated against women, because the historical data it was trained on had that gender bias.
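Her point about algorithms latching onto bias is easy to reproduce on synthetic data. In the sketch below, which uses invented and deliberately exaggerated data rather than anything from Amazon’s system, the trained model’s weights show it leaning on the gender feature far more than on the job-relevant one:

```python
# Synthetic demonstration of bias replication: the historical "hired" labels
# depend mostly on gender, so the trained model learns to weight gender far
# more than the genuinely job-relevant feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, size=n).astype(float)  # a protected attribute
skill = rng.normal(size=n)                         # genuinely job-relevant
hired = (0.2 * skill + 2.0 * gender + rng.normal(size=n) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)
print(model.coef_)  # the gender weight dwarfs the skill weight
```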
Spectrum: What are the other downsides of using AI systems that don’t really understand their tasks?
Shane: There is a risk in putting too much trust in AI and not examining its decisions. Another issue is that it can solve the wrong problems, without anyone realizing it. There have been a couple of cases in medicine. For example, there was an algorithm that was trained to recognize things like skin cancer. But instead of recognizing the actual skin condition, it latched onto signals like the markings a surgeon makes on the skin, or a ruler placed there for scale. It was treating those things as a sign of skin cancer. It’s another indication that these algorithms don’t understand what they’re looking at and what the goal really is.
Giraffing

Spectrum: In your blog, you often have neural nets generate names for things—such as ice cream flavors, paint colors, cats, mushrooms, and types of apples. How do you decide on topics?
Shane: Quite often it’s because someone has written in with an idea or a data set. They’ll say something like, “I’m the MIT librarian and I have a whole list of MIT thesis titles.” That one was delightful. Or they’ll say, “We are a high school robotics team, and we know where there’s a list of robotics team names.” It’s fun to peek into a different world. I have to be careful that I’m not making fun of the naming conventions in the field. But there’s a lot of humor simply in the neural net’s complete failure to understand. Puns in particular—it really struggles with puns.
Spectrum: Your blog is quite absurd, but it strikes me that machine learning is often absurd in itself. Can you explain the concept of giraffing?
Shane: This concept was originally introduced by [internet security expert] Melissa Elliott. She proposed this phrase as a way to describe the algorithms’ tendency to see giraffes way more often than would be likely in the real world. She posted a whole bunch of examples, like a photo of an empty field in which an image-recognition algorithm has confidently reported that there are giraffes. Why does it think giraffes are present so often when they’re actually really rare? Because they’re trained on data sets from online. People tend to say, “Hey look, a giraffe!” And then take a photo and share it. They don’t do that so often when they see an empty field with rocks.
There’s also a chatbot that has a delightful quirk. If you show it some photo and ask it how many giraffes are in the picture, it will always answer with some nonzero number. This quirk comes from the way the training data was generated: these were questions asked and answered by humans online. People tended not to ask the question “How many giraffes are there?” when the answer was zero. So you can show it a picture of someone holding a Wii remote. If you ask it how many giraffes are in the picture, it will say two.
Machine and human creativity

Spectrum: AI can be absurd, and maybe also creative. But you make the point that AI art projects are really human-AI collaborations: Collecting the data set, training the algorithm, and curating the output are all artistic acts on the part of the human. Do you see your work as a human-AI art project?
Shane: Yes, I think there is artistic intent in my work; you could call it literary or visual. It’s not so interesting to just take a pre-trained algorithm that’s been trained on utilitarian data, and tell it to generate a bunch of stuff. Even if the algorithm isn’t one that I’ve trained myself, I think about, what is it doing that’s interesting, what kind of story can I tell around it, and what do I want to show people.

Spectrum: For the past three years you’ve been getting neural nets to generate ideas for Halloween costumes. As language models have gotten dramatically better over the past three years, are the costume suggestions getting less absurd?
Shane: Yes. Before I would get a lot more nonsense words. This time I got phrases that were related to real things in the data set. I don’t believe the training data had the words Flying Dutchman or barnacle. But it was able to draw on its knowledge of which words are related to suggest things like sexy barnacle and sexy Flying Dutchman.
Spectrum: This year, I saw on Twitter that someone made the gothy giraffe costume happen. Would you ever dress up for Halloween in a costume that the neural net suggested?
Shane: I think that would be fun. But there would be some challenges. I would love to go as the sexy Flying Dutchman. But my ambition may constrict me to do something more like a list of leg parts.

Posted in Human Robots