Tag Archives: good

#434623 The Great Myth of the AI Skills Gap

One of the most contentious debates in technology is around the question of automation and jobs. At issue is whether advances in automation, specifically with regard to artificial intelligence and robotics, will spell trouble for today’s workers. This debate plays out in the media daily, and passions run deep on both sides of the issue. In the past, however, automation has created jobs and increased real wages.

A widespread concern with the current scenario is that the workers most likely to be displaced by technology lack the skills needed to do the new jobs that same technology will create.

Let’s look at this concern in detail. Those who fear automation will hurt workers start by pointing out that jobs span a wide range, from low-pay, low-skill work to high-pay, high-skill work.

They then point out that technology primarily creates high-paying jobs, like geneticist.

Meanwhile, technology destroys low-wage, low-skill jobs, like those in fast-food restaurants.

Then, those who are worried about this dynamic often pose the question, “Do you really think a fast-food worker is going to become a geneticist?”

They worry that we are about to face a huge amount of systemic, permanent unemployment, as displaced unskilled workers are ill-equipped to do the jobs of tomorrow.

It is important to note that both sides of the debate are in agreement at this point. Unquestionably, technology destroys low-skilled, low-paying jobs while creating high-skilled, high-paying ones.

So, is that the end of the story? As a society are we destined to bifurcate into two groups, those who have training and earn high salaries in the new jobs, and those with less training who see their jobs vanishing to machines? Is this latter group forever locked out of economic plenty because they lack training?

No.

The question, “Can a fast-food worker become a geneticist?” is where the error comes in. Fast-food workers don’t become geneticists. What happens is that a college biology professor becomes a geneticist. Then a high school biology teacher gets the college job. Then a substitute teacher is hired full-time to fill the high school teaching job. And so on, all the way down.

The question is not whether those in the lowest-skilled jobs can do the high-skilled work. Instead the question is, “Can everyone do a job just a little harder than the job they have today?” If so, and I believe very deeply that this is the case, then every time technology creates a new job “at the top,” everyone gets a promotion.

This isn’t just an academic theory—it’s 200 years of economic history in the West. For 200 years, with the exception of the Great Depression, unemployment in the US has stayed between 2 percent and 13 percent. Always. Europe’s range is a bit wider, but not much.

If I took 200 years of unemployment rates and graphed them, then asked you to find where the assembly line took over manufacturing, where steam power rapidly replaced animal power, or where industry adopted electricity at lightning speed, you wouldn’t be able to find those spots. They aren’t even blips in the unemployment record.

You don’t even have to look back as far as the assembly line to see this happening. It has happened non-stop for 200 years. Every fifty years, we lose about half of all jobs, and this has been pretty steady since 1800.
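As a rough, back-of-the-envelope check (my own illustration, not a figure from the article), here is what “half of all jobs every fifty years” implies as an annual rate:

```python
# If half of all jobs disappear every 50 years, what is the implied
# annual rate of job destruction?
annual_survival = 0.5 ** (1 / 50)   # fraction of jobs surviving each year
annual_loss = 1 - annual_survival   # fraction of jobs destroyed each year
print(f"Implied annual job destruction rate: {annual_loss:.2%}")  # about 1.4%
```

Spread across a year, that is a churn rate small enough for the labor market to absorb, which squares with the steady unemployment record described above.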

How is it that for 200 years we have lost half of all jobs every half century, yet this process has never caused mass unemployment? Not only has it not caused unemployment; during that time we have had full employment against a backdrop of rising wages.

How can wages rise while half of all jobs are constantly being destroyed? Simple: new technology always increases worker productivity. It creates new jobs, like web designer and programmer, while destroying low-wage, backbreaking work. When this happens, everyone along the way gets a better job.

Our current situation isn’t any different from the past. The nature of technology has always been to create high-skilled jobs and increase worker productivity. This is good news for everyone.

People often ask me what their children should study to make sure they have a job in the future. I usually say it doesn’t really matter. If I knew everything I know now and went back to the mid-1980s, what could I have taken in high school to make me better prepared for today? There is only one class, and it wasn’t computer science. It was typing. Who would have guessed?

The great skill is being able to learn new things, and luckily, we all have it. In fact, that is our singular ability as a species. What I do in my day-to-day job consists largely of skills I have learned as the years have passed. In my experience, if you ask people at all job levels, “Would you like a slightly more challenging job for a little more money?” almost everyone says yes.

That’s all it has taken for us to collectively get here today, and that’s all we need going forward.

Image Credit: Lightspring / Shutterstock.com

Posted in Human Robots

#434616 What Games Are Humans Still Better at ...

Rapid advances in artificial intelligence (AI) systems are continually crossing items off the list of things humans do better than our computer compatriots.

AI has bested us at board games like chess and Go, and set astronomically high scores in classic computer games like Ms. Pac-Man. More complex games form part of AI’s next frontier.

While a team of AI bots developed by OpenAI, known as the OpenAI Five, ultimately lost to a team of professional players last year, they have since been running rampant against human opponents in Dota 2. Not to be outdone, Google’s DeepMind AI recently took on—and beat—several professional players at StarCraft II.

These victories raise the questions: what games are humans still better at than AI? And for how long?

The Making of AlphaStar
DeepMind’s results provide a good starting point in the search for answers. The version of its AI for StarCraft II, dubbed AlphaStar, learned to play the game through supervised learning and reinforcement learning.

First, AI agents were trained by analyzing and copying human players, learning basic strategies. The initial agents then played each other in a sort of virtual death match where the strongest agents stayed on. New iterations of the agents were developed and entered the competition. Over time, the agents became better and better at the game, learning new strategies and tactics along the way.
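That iterative league can be pictured as a simple population-based self-play loop. The sketch below is a heavily simplified, hypothetical illustration of the idea, not DeepMind’s actual AlphaStar training code; the agents, matches, and “training” step are stand-ins.

```python
import random

def play_match(agent_a, agent_b):
    """Placeholder match: the higher-skill agent wins more often."""
    p_a_wins = agent_a["skill"] / (agent_a["skill"] + agent_b["skill"])
    return "a" if random.random() < p_a_wins else "b"

def new_variant(agent):
    """Placeholder for training a fresh agent from a survivor (stand-in for RL updates)."""
    return {"skill": agent["skill"] * random.uniform(0.9, 1.3)}

# Agents start with modest skill, standing in for imitation of human replays.
league = [{"skill": random.uniform(1.0, 2.0)} for _ in range(8)]

for generation in range(50):
    wins = [0] * len(league)
    # Round-robin: every agent plays every other agent.
    for i in range(len(league)):
        for j in range(i + 1, len(league)):
            if play_match(league[i], league[j]) == "a":
                wins[i] += 1
            else:
                wins[j] += 1
    # The strongest half stays on; new variants of the survivors refill the league.
    ranked = sorted(range(len(league)), key=lambda k: -wins[k])
    survivors = [league[k] for k in ranked[: len(league) // 2]]
    league = survivors + [new_variant(a) for a in survivors]

print("Best skill after the league:", max(a["skill"] for a in league))
```

Real systems replace the placeholder skill number with full neural-network policies and the coin-flip match with actual games, but the keep-the-winners, add-new-challengers structure captures the basic idea.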

One of the advantages of AI is that it can go through this kind of process at superspeed and quickly develop better agents. DeepMind researchers estimate that the AlphaStar agents went through the equivalent of roughly 200 years of game time in about 14 days.

Cheating or One Hand Behind the Back?
The AlphaStar AI agents faced off against human professional players in a series of games streamed on YouTube and Twitch. The AIs trounced their human opponents, winning ten games on the trot, before pro player Grzegorz “MaNa” Komincz managed to salvage some pride for humanity by winning the final game. Experts commenting on AlphaStar’s performance used words like “phenomenal” and “superhuman”—which was, to a degree, where things got a bit problematic.

AlphaStar proved particularly skilled at controlling and directing units in battle, known as micromanagement. One reason was that it viewed the whole game map at once—something a human player is not able to do—which seemingly let it control units in different areas at the same time. DeepMind researchers said the AIs only focused on a single part of the map at any given time, but interestingly, AlphaStar’s agent was limited to a more restricted camera view during the match “MaNa” won.

Potentially offsetting some of this advantage was the fact that AlphaStar was also restricted in certain ways. For example, it was prevented from performing more clicks per minute than a human player would be able to.

Where AIs Struggle
Games like StarCraft II and Dota 2 throw a lot of challenges at AIs. Complex game theory and strategy, imperfect or incomplete information, multi-variable and long-term planning, real-time decision-making, a large action space, and a multitude of possible decisions at every point in time are just the tip of the iceberg. The AIs’ performance in both games was impressive, but it also highlighted some of the areas where they could be said to struggle.

In Dota 2 and StarCraft II, AI bots have seemed more vulnerable in longer games, or when confronted with surprising, unfamiliar strategies. They seem to struggle with complexity over time and with improvising or adapting to quick changes. This could be tied to how AIs learn. Even within the first few hours of performing a task, humans tend to gain a sense of familiarity and skill that takes an AI much longer to acquire. We are also better at transferring skills from one area to another: experience playing Dota 2 can help us become good at StarCraft II relatively quickly. This is not the case for AI—yet.

Dwindling Superiority
While the battle between AI and humans for absolute superiority is still on in Dota 2 and StarCraft II, it looks likely that AI will soon reign supreme. Similar things are happening to other types of games.

In 2017, a team from Carnegie Mellon University pitted its Libratus AI against four professional poker players. After 20 days of No Limit Texas Hold’em, Libratus was up by $1.7 million. Another likely candidate is the destroyer of family harmony at Christmas: Monopoly.

Poker involves bluffing, while Monopoly involves negotiation—skills you might not think AI would be particularly suited to handle. However, an AI experiment at Facebook showed that AI bots are more than capable of undertaking such tasks. The bots proved skilled negotiators, and they developed strategies like feigning interest in one item while actually wanting another altogether—in other words, bluffing.

So, what games are we still better at than AI? There is no precise answer, but the list is getting shorter at a rapid pace.

The Aim of the Game
While AI’s mastery of games might at first glance seem an odd area to focus research on, the belief is that the way an AI learns to master a game is transferable to other areas.

For example, the Libratus poker-playing AI employed strategies that could work in financial trading or political negotiations. The same applies to AlphaStar. As Oriol Vinyals, co-leader of the AlphaStar project, told The Verge:

“First and foremost, the mission at DeepMind is to build an artificial general intelligence. […] To do so, it’s important to benchmark how our agents perform on a wide variety of tasks.”

A 2017 survey of more than 350 AI researchers predicts AI could be a better driver than humans within ten years. By the middle of the century, the survey suggests, AI will be able to write a best-selling novel, and a few years later it will be better than humans at surgery. By the year 2060, AI may do everything better than we do.

Whether you think this is a good or a bad thing, it’s worth noting that AI has an often overlooked ability to help us see things differently. When DeepMind’s AlphaGo beat human Go champion Lee Sedol, the Go community learned from it, too. Lee himself went on a win streak after the match with AlphaGo. The same is now happening within the Dota 2 and StarCraft II communities that are studying the human vs. AI games intensely.

More than anything, AI’s recent gaming triumphs illustrate how quickly artificial intelligence is developing. In 1997, Dr. Piet Hut, an astrophysicist at the Institute for Advanced Study in Princeton and a Go enthusiast, told the New York Times:

“It may be a hundred years before a computer beats humans at Go—maybe even longer.”

Image Credit: Roman Kosolapov / Shutterstock.com

Posted in Human Robots

#434544 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
DeepMind Beats Pros at Starcraft in Another Triumph for Bots
Tom Simonite | Wired
“DeepMind’s feat is the most complex yet in a long train of contests in which computers have beaten top humans at games. Checkers fell in 1994, chess in 1997, and DeepMind’s earlier bot AlphaGo became the first to beat a champion at the board game Go in 2016. The StarCraft bot is the most powerful AI game player yet; it may also be the least unexpected.”

GENETICS
Complete Axolotl Genome Could Pave the Way Toward Human Tissue Regeneration
George Dvorsky | Gizmodo
“Now that researchers have a near-complete axolotl genome—the new assembly still requires a bit of fine-tuning (more on that in a bit)—they, along with others, can now go about the work of identifying the genes responsible for axolotl tissue regeneration.”

FUTURE
We Analyzed 16,625 Papers to Figure Out Where AI Is Headed Next
Karen Hao | MIT Technology Review
“…though deep learning has singlehandedly thrust AI into the public eye, it represents just a small blip in the history of humanity’s quest to replicate our own intelligence. It’s been at the forefront of that effort for less than 10 years. When you zoom out on the whole history of the field, it’s easy to realize that it could soon be on its way out.”

COMPUTING
Apple’s Finger-Controller Patent Is a Glimpse at Mixed Reality’s Future
Mark Sullivan | Fast Company
“[Apple’s] engineers are now looking past the phone touchscreen toward mixed reality, where the company’s next great UX will very likely be built. A recent patent application gives some tantalizing clues as to how Apple’s people are thinking about aspects of that challenge.”

GOVERNANCE
How Do You Govern Machines That Can Learn? Policymakers Are Trying to Figure That Out
Steve Lohr | The New York Times
“Regulation is coming. That’s a good thing. Rules of competition and behavior are the foundation of healthy, growing markets. That was the consensus of the policymakers at MIT. But they also agreed that artificial intelligence raises some fresh policy challenges.”

Image Credit: Victoria Shapiro / Shutterstock.com

Posted in Human Robots

#434508 The Top Biotech and Medicine Advances to ...

2018 was bonkers for science.

From a woman who gave birth using a transplanted uterus, to the infamous CRISPR baby scandal, to forensics adopting consumer-based genealogy test kits to track down criminals, last year was a factory churning out scientific “whoa” stories with consequences for years to come.

With CRISPR still in the headlines, Britain ready to bid Europe au revoir, and multiple scientific endeavors taking off, 2019 is shaping up to be just as tumultuous.

Here are the science and health stories that may blow up in the new year. But first, a caveat: predicting the future is tough. Forecasting is the lovechild of statistics and (a good deal of) intuition, and entire disciplines have been dedicated to the endeavor. But January is the perfect time to gaze into the crystal ball for wisps of insight into the year to come. Last year we predicted the widespread approval of gene therapy products—for the most part, we nailed it. This year we’re hedging our bets with multiple predictions.

Gene Drives Used in the Wild
The concept of gene drives scares many, for good reason. Gene drives are a step up in severity (and consequences) from CRISPR and other gene-editing tools. Even with germline editing, in which sperm, eggs, or embryos are altered, gene editing affects just one genetic line—one family—at least at the beginning, before its members reproduce with the general population.

Gene drives, on the other hand, have the power to wipe out entire species.

In a nutshell, they’re little bits of DNA code that help a gene transfer from parent to child with almost 100 percent probability. The “half of your DNA comes from dad, the other half comes from mom” dogma? Gene drives smash that to bits.

In other words, the only time one would consider using a gene drive is to change the genetic makeup of an entire population. It sounds like the plot of a supervillain movie, but scientists have been toying around with the idea of deploying the technology—first in mosquitoes, then (potentially) in rodents.
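To see why near-perfect transmission changes everything, here is a small, illustrative simulation (my own sketch, not from the article) comparing ordinary Mendelian inheritance, where a carrier parent passes a gene to roughly half its offspring, with a gene drive that is passed on almost every time. The population model is deliberately crude and ignores real-world genetics such as zygosity and selection.

```python
import random

def spread(generations, transmission, start_fraction=0.01, population=10_000):
    """Fraction of a population carrying a gene over time, given the probability
    that a carrier parent passes the gene to each offspring."""
    frac = start_fraction
    history = [frac]
    for _ in range(generations):
        carriers = 0
        for _ in range(population):
            # Two parents drawn at random from the current population.
            p1 = random.random() < frac
            p2 = random.random() < frac
            # Each carrier parent independently transmits the gene.
            inherited = ((p1 and random.random() < transmission)
                         or (p2 and random.random() < transmission))
            carriers += inherited
        frac = carriers / population
        history.append(frac)
    return history

mendelian = spread(generations=12, transmission=0.5)    # ordinary inheritance
gene_drive = spread(generations=12, transmission=0.99)  # near-perfect transmission
print("Mendelian carriers: ", [round(f, 2) for f in mendelian])
print("Gene-drive carriers:", [round(f, 2) for f in gene_drive])
```

Under ordinary inheritance the carrier fraction barely moves; with a drive it roughly doubles each generation until it saturates the population, which is why a handful of released insects could eventually alter an entire species.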

By releasing just a handful of mutant mosquitoes that carry gene drives for infertility, for example, scientists could potentially wipe out entire populations that carry infectious scourges like malaria, dengue, or Zika. The technology is so potent—and dangerous—that the US Defense Advanced Research Projects Agency (DARPA) is shelling out $65 million to suss out how to deploy, control, counter, or even reverse the effects of tampering with ecology.

Last year, the U.N. gave a cautious go-ahead for the technology to be deployed in the wild under limited terms. Now, the first release of a genetically modified mosquito is set for testing in Burkina Faso in Africa—the first-ever field experiment involving gene drives.

The experiment will only release mosquitoes in the Anopheles genus, which are the main culprits in transmitting disease. As a first step, over 10,000 male mosquitoes are set for release into the wild. These males are genetically sterile and do not carry gene drives; they will help scientists examine how modified mosquitoes survive and disperse, in preparation for deploying gene-drive-carrying mosquitoes.

Hot on the project’s heels, the nonprofit consortium Target Malaria, backed by the Bill and Melinda Gates Foundation, is engineering a gene drive called Mosq that will spread infertility across the population or kill off all female insects. Their attempt to hack the rules of inheritance—and save millions of lives in the process—is slated for 2024.

A Universal Flu Vaccine
People often brush off the flu as a mere annoyance, but the infection kills hundreds of thousands of people each year, according to the CDC’s statistical estimates.

The flu virus is as difficult a nemesis as HIV—it mutates at an extremely rapid rate, making effective vaccines almost impossible to engineer in time. Scientists currently use data to forecast the strains most likely to explode into an epidemic and urge the public to vaccinate against those predictions. That’s partly why, on average, flu vaccines only have a success rate of roughly 50 percent—not much better than a coin toss.

Tired of relying on educated guesses, scientists have been chipping away at a universal flu vaccine that targets all strains—perhaps even those we haven’t yet identified. Often referred to as the “holy grail” in epidemiology, these vaccines try to alert our immune systems to parts of a flu virus that are least variable from strain to strain.

Last November, a first universal flu vaccine developed by BiondVax entered Phase 3 clinical trials, which means it has already been shown to be safe and effective in small numbers of people and is now being tested in a broader population. The vaccine doesn’t rely on dead viruses, which is a common technique. Rather, it uses a small chain of amino acids—the chemical components that make up proteins—to put the immune system on high alert.

With the government pouring $160 million into the research and several other universal candidates entering clinical trials, universal flu vaccines may finally experience a breakthrough this year.

In-Body Gene Editing Shows Further Promise
CRISPR and other gene-editing tools dominated the news last year, with both downers, such as suggestions that many of us may already be immune to the technology, and hopeful news of it getting ready to treat inherited muscle-wasting diseases.

But what wasn’t widely broadcast was the in-body gene-editing experiments that have been rolling out with gusto. Last September, Sangamo Therapeutics in Richmond, California revealed that it had injected gene-editing enzymes into a patient in an effort to correct a genetic defect that prevents him from breaking down complex sugars.

The effort is markedly different from the better-known CAR-T therapy, which extracts cells from the body for genetic engineering before returning them to the host. Instead, Sangamo’s treatment directly injects viruses carrying the gene-editing tools into the body. So far, the procedure looks to be safe, though at the time of reporting it was too early to determine effectiveness.

This year the company hopes to finally answer whether it really worked.

If successful, it means that devastating genetic disorders could potentially be treated with just a few injections. With a gamut of new and more precise CRISPR and other gene-editing tools in the works, the list of treatable inherited diseases is likely to grow. And with the CRISPR baby scandal potentially dampening efforts at germline editing via regulations, in-body gene editing will likely receive more attention if Sangamo’s results return positive.

Neuralink and Other Brain-Machine Interfaces
Neuralink is the stuff of sci-fi: tiny particles implanted into the brain could link up your biological wetware with silicon hardware and the internet.

But that’s exactly what Elon Musk’s company, founded in 2016, seeks to develop: brain-machine interfaces that could tinker with your neural circuits in an effort to treat diseases or even enhance your abilities.

Last November, Musk broke his silence on the secretive company, suggesting that he may announce something “interesting” in a few months, something “better than anyone thinks is possible.”

Musk’s aspiration for achieving symbiosis with artificial intelligence isn’t the driving force for all brain-machine interfaces (BMIs). In the clinics, the main push is to rehabilitate patients—those who suffer from paralysis, memory loss, or other nerve damage.

2019 may be the year that BMIs and neuromodulators cut the cord in the clinics. These devices may finally work autonomously within a malfunctioning brain, applying electrical stimulation only when necessary to reduce side effects without requiring external monitoring. Or they could allow scientists to control brains with light without needing bulky optical fibers.

Cutting the cord is just the first step to fine-tuning neurological treatments—or enhancements—to the tune of your own brain, and 2019 will keep on bringing the music.

Image Credit: angellodeco / Shutterstock.com

Posted in Human Robots

#434311 Understanding the Hidden Bias in ...

Facial recognition technology has progressed to the point where it now interprets emotions in facial expressions. This type of analysis is increasingly used in daily life. For example, companies can use facial recognition software to help with hiring decisions. Other programs scan the faces in crowds to identify threats to public safety.

Unfortunately, this technology struggles to interpret the emotions of black faces. My new study, published last month, shows that emotional analysis technology assigns more negative emotions to black men’s faces than white men’s faces.

This isn’t the first time that facial recognition programs have been shown to be biased. Google’s software labeled black faces as gorillas. Cameras identified Asian faces as blinking. Facial recognition programs have struggled to correctly identify gender for people with darker skin.

My work contributes to a growing call to better understand the hidden bias in artificial intelligence software.

Measuring Bias
To examine the bias in facial recognition systems that analyze people’s emotions, I used a data set of 400 NBA player photos from the 2016–2017 season, because players are similar in their clothing, athleticism, age, and gender. Also, since these are professional portraits, the players look at the camera in the picture.

I ran the images through two well-known types of emotional recognition software. Both assigned black players more negative emotional scores on average, no matter how much they smiled.

For example, consider the official NBA pictures of Darren Collison and Gordon Hayward. Both players are smiling, and, according to the facial recognition and analysis program Face++, Darren Collison and Gordon Hayward have similar smile scores—48.7 and 48.1 out of 100, respectively.

Basketball players Darren Collison (left) and Gordon Hayward (right). basketball-reference.com

However, Face++ rates Hayward’s expression as 59.7 percent happy and 0.13 percent angry and Collison’s expression as 39.2 percent happy and 27 percent angry. Collison is viewed as nearly as angry as he is happy and far angrier than Hayward—despite the facial recognition program itself recognizing that both players are smiling.

In contrast, Microsoft’s Face API viewed both men as happy. Still, Collison is viewed as less happy than Hayward, with happiness scores of 93 and 98 percent, respectively. Despite his smile, Collison is even scored with a small amount of contempt, whereas Hayward has none.

Across all the NBA pictures, the same pattern emerges. On average, Face++ rates black faces as twice as angry as white faces. Face API scores black faces as three times more contemptuous than white faces. After matching players based on their smiles, both facial analysis programs are still more likely to assign the negative emotions of anger or contempt to black faces.
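As a rough sketch of the kind of comparison described here, the snippet below averages emotion scores by race and then compares faces matched on smile intensity. The CSV file and column names are hypothetical placeholders, not the study’s actual data set or the vendors’ real API responses.

```python
import pandas as pd

# Hypothetical table: one row per player photo, with emotion scores already
# collected from a facial-analysis service (file and columns are made up).
scores = pd.read_csv("player_emotion_scores.csv")
# expected columns: player, race, smile, happiness, anger, contempt

# Average negative-emotion scores by race.
print(scores.groupby("race")[["anger", "contempt", "happiness"]].mean())

# Crudely control for expression by bucketing smile intensity and comparing
# average anger within each bucket.
scores["smile_bucket"] = pd.cut(scores["smile"], bins=[0, 25, 50, 75, 100])
matched = scores.groupby(["smile_bucket", "race"], observed=True)["anger"].mean()
print(matched.unstack("race"))
```

Matching on smile score before comparing, as in the second step, is what lets the analysis argue the gap reflects the algorithm rather than differences in expression.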

Stereotyped by AI
My study shows that facial recognition programs exhibit two distinct types of bias.

First, black faces were consistently scored as angrier than white faces for every smile. Face++ showed this type of bias. Second, black faces were always scored as angrier if there was any ambiguity about their facial expression. Face API displayed this type of disparity. Even when black faces were partially smiling, my analysis showed that the systems assumed more negative emotions than they did for white counterparts with similar expressions. The average emotional scores were much closer across races, but there were still noticeable differences for black and white faces.

This observation aligns with other research, which suggests that black professionals must amplify positive emotions to receive parity in their workplace performance evaluations. Studies show that people perceive black men as more physically threatening than white men, even when they are the same size.

Some researchers argue that facial recognition technology is more objective than humans. But my study suggests that facial recognition reflects the same biases that people have. Black men’s facial expressions are scored with emotions associated with threatening behaviors more often than white men’s, even when they are smiling. There is good reason to believe that the use of facial recognition could formalize preexisting stereotypes into algorithms, automatically embedding them into everyday life.

Until facial recognition assesses black and white faces similarly, black people may need to exaggerate their positive facial expressions—essentially smile more—to reduce ambiguity and potentially negative interpretations by the technology.

Although innovative, artificial intelligence can perpetuate and exacerbate existing power dynamics, leading to disparate impact across racial and ethnic groups. Some societal accountability is necessary to ensure fairness to all groups, because facial recognition, like most artificial intelligence, is often invisible to the people most affected by its decisions.

Lauren Rhue, Assistant Professor of Information Systems and Analytics, Wake Forest University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Alex_Po / Shutterstock.com

Posted in Human Robots