What Games Are Humans Still Better at Than AI?

Artificial intelligence (AI) systems' rapid advances are continually crossing items off the list of things humans do better than our computer compatriots. AI has bested us at board games like chess and Go, and set astronomically high scores in classic computer games like Ms. Pac-Man. More complex games form part of AI's next frontier. While a team of AI bots developed by OpenAI, known as the OpenAI Five, ultimately lost to a team of professional players last year, they have since been running rampant against human opponents in Dota 2. Not to be outdone, Google's DeepMind AI recently took on, and beat, several professional players at StarCraft II.

These victories raise the questions: what games are humans still better at than AI? And for how long?

The Making of AlphaStar

DeepMind's results provide a good starting point in the search for answers. The version of its AI for StarCraft II, dubbed AlphaStar, learned to play the game through a combination of supervised learning and reinforcement learning. First, AI agents were trained by analyzing and copying human players, learning basic strategies. The initial agents then played each other in a sort of virtual death match where the strongest agents stayed on. New iterations of the agents were developed and entered the competition. Over time, the agents became better and better at the game, learning new strategies and tactics along the way.

One of the advantages of AI is that it can go through this kind of process at superhuman speed and quickly develop better agents. DeepMind researchers estimate that the AlphaStar agents went through the equivalent of roughly 200 years of game time in about 14 days, around 14 years of play for each real-time day.

Cheating or One Hand Behind the Back?

The AlphaStar AI agents faced off against human professional players in a series of games streamed on YouTube and Twitch. The AIs trounced their human opponents, winning ten games on the trot, before pro player Grzegorz "MaNa" Komincz managed to salvage some pride for humanity by winning the final game.

Experts commenting on AlphaStar's performance used words like "phenomenal" and "superhuman," which was, to a degree, where things got a bit problematic. AlphaStar proved particularly skilled at controlling and directing units in battle, a skill known as micromanagement. One reason was that it viewed the whole game map at once, something a human player is not able to do, which seemingly allowed it to control units in different areas at the same time. DeepMind researchers said the AIs only focused on a single part of the map at any given time, but interestingly, AlphaStar's AI agent was limited to a more restricted camera view during the match "MaNa" won.

Potentially offsetting some of this advantage was the fact that AlphaStar was also restricted in certain ways. For example, it was prevented from performing more clicks per minute than a human player could manage.

Where AIs Struggle

Games like StarCraft II and Dota 2 throw a lot of challenges at AIs. Complex strategies rooted in game theory, operating with imperfect and incomplete information, multi-variable and long-term planning, real-time decision-making, navigating a large action space, and a multitude of possible decisions at every point in time: these are just the tip of the iceberg. The AIs' performance in both games was impressive, but it also highlighted some of the areas where they could be said to struggle. In Dota 2 and StarCraft II, AI bots have seemed more vulnerable in longer games, or when confronted with surprising, unfamiliar strategies.
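To get a feel for why that action space is such a hurdle, a rough back-of-envelope comparison helps. The branching factors below are commonly cited estimates, not exact values: about 35 legal moves per chess position, about 250 per Go position, and on the order of 10^26 possible actions per step reported by DeepMind for AlphaStar's interface. This short Python sketch uses those figures purely for illustration:

```python
# Back-of-envelope comparison of game-tree growth for a fixed lookahead.
# Branching factors are commonly cited estimates, not exact values.

CHESS_BRANCHING = 35       # ~legal moves per chess position
GO_BRANCHING = 250         # ~legal moves per Go position
SC2_BRANCHING = 1e26       # ~actions per step (DeepMind's estimate for AlphaStar)

def sequences(branching: float, depth: int) -> float:
    """Number of distinct move sequences of the given depth."""
    return float(branching) ** depth

DEPTH = 10  # a modest ten-step lookahead
for name, b in [("Chess", CHESS_BRANCHING),
                ("Go", GO_BRANCHING),
                ("StarCraft II", SC2_BRANCHING)]:
    print(f"{name:>12}: ~{sequences(b, DEPTH):.1e} sequences at depth {DEPTH}")
```

No agent searches this tree exhaustively, of course; the point is that the kind of brute-force lookahead that worked for chess is a non-starter here, which is part of why DeepMind leaned on learning-based methods instead.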
Beyond raw scale, the bots also seem to struggle with complexity over time and with improvising in response to rapid changes. This could be tied to how AIs learn. Even within the first few hours of performing a task, humans tend to gain a sense of familiarity and skill that takes an AI much longer to acquire. We are also better at transferring skills from one area to another. In other words, experience playing Dota 2 can help us become good at StarCraft II relatively quickly. This is not the case for AI, yet.

Dwindling Superiority

While the battle between AI and humans for absolute superiority is still on in Dota 2 and StarCraft II, it looks likely that AI will soon reign supreme. Similar things are happening in other types of games. In 2017, a team from Carnegie Mellon University pitted its Libratus AI against four professional poker players. After 20 days of No Limit Texas Hold'em, Libratus was up by $1.7 million.

Another likely candidate is the destroyer of family harmony at Christmas: Monopoly. Poker involves bluffing, while Monopoly involves negotiation, skills you might not think AI would be particularly suited to handle. However, an AI experiment at Facebook showed that AI bots are more than capable of undertaking such tasks. The bots proved to be skilled negotiators, and developed strategies like feigning interest in one item while actually wanting another altogether; in other words, bluffing.

So, what games are we still better at than AI? There is no precise answer, but the list is getting shorter at a rapid pace.

The Aim of the Game

While AI's mastery of games might at first glance seem an odd focus for research, the belief is that the way an AI learns to master a game is transferable to other areas. For example, the Libratus poker-playing AI employed strategies that could work in financial trading or political negotiations. The same applies to AlphaStar. As Oriol Vinyals, co-leader of the AlphaStar project, told The Verge: "First and foremost, the mission at DeepMind is to build an artificial general intelligence. [...] To do so, it's important to benchmark how our agents perform on a wide variety of tasks."

A 2017 survey of more than 350 AI researchers predicts AI could be a better driver than humans within ten years. By the middle of the century, AI will be able to write a best-selling novel, and a few years later, it will be better than humans at surgery. By the year 2060, AI may do everything better than us.

Whether you think this is a good or a bad thing, it's worth noting that AI has an often-overlooked ability to help us see things differently. When DeepMind's AlphaGo beat human Go champion Lee Sedol, the Go community learned from it, too. Lee himself went on a win streak after the match with AlphaGo. The same is now happening within the Dota 2 and StarCraft II communities, which are studying the human vs. AI games intensely.

More than anything, AI's recent gaming triumphs illustrate how quickly artificial intelligence is developing. In 1997, Dr. Piet Hut, an astrophysicist at the Institute for Advanced Study at Princeton and a Go enthusiast, told the New York Times: "It may be a hundred years before a computer beats humans at Go—maybe even longer."

Image Credit: Roman Kosolapov / Shutterstock.com