Tag Archives: experiment

#436188 The Blogger Behind “AI ...

Sure, artificial intelligence is transforming the world’s societies and economies—but can an AI come up with plausible ideas for a Halloween costume?

Janelle Shane has been asking such probing questions since she started her AI Weirdness blog in 2016. She specializes in training neural networks (which underpin most of today’s machine learning techniques) on quirky data sets such as compilations of knitting instructions, ice cream flavors, and names of paint colors. Then she asks the neural net to generate its own contributions to these categories—and hilarity ensues. AI is not likely to disrupt the paint industry with names like “Ronching Blue,” “Dorkwood,” and “Turdly.”
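Shane typically does this kind of generation with character-level recurrent neural networks trained on thousands of examples. As a rough sketch of the underlying idea only (not her actual method), a character-level Markov chain shows the same loop of learning surface statistics from a list and sampling new entries; the tiny paint-color list below is invented for illustration.

```python
# Minimal sketch: learn character patterns from a list of names, then sample
# new ones. Shane uses recurrent neural networks; a character-level Markov
# chain is a far simpler stand-in for the same train-then-generate loop.
import random
from collections import defaultdict

def train(names, order=3):
    """Map each `order`-character context to the characters that follow it."""
    model = defaultdict(list)
    for name in names:
        padded = "~" * order + name.lower() + "\n"  # ~ pads the start, \n marks the end
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=3, max_len=30):
    """Sample one new name, one character at a time."""
    context, out = "~" * order, []
    while len(out) < max_len:
        ch = random.choice(model[context])
        if ch == "\n":
            break
        out.append(ch)
        context = context[1:] + ch
    return "".join(out).title()

# Invented toy data set; real experiments use thousands of examples.
paint_colors = ["burnt sienna", "dusty rose", "forest green",
                "midnight blue", "sandy taupe", "stormy gray"]
model = train(paint_colors)
print([generate(model) for _ in range(5)])
```

With only six examples the output mostly parrots the input; the point is the shape of the technique, not the quality of the results.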

Shane’s antics have a serious purpose. She aims to illustrate the very real limitations of today’s AI, and to counteract the prevailing narrative that AI is well on its way to superintelligence and the domination of humanity. “The danger of AI is not that it’s too smart,” Shane writes in her new book, “but that it’s not smart enough.”

The book, which came out on Tuesday, is called You Look Like a Thing and I Love You. It takes its odd title from a list of AI-generated pick-up lines, all of which would at least get a person’s attention if shouted, preferably by a robot, in a crowded bar. Shane’s book is shot through with her trademark absurdist humor, but it also contains real explanations of machine learning concepts and techniques. It’s a painless way to take AI 101.

She spoke with IEEE Spectrum about the perils of placing too much trust in AI systems, the strange AI phenomenon of “giraffing,” and her next potential Halloween costume.

Janelle Shane on . . .

The un-delicious origin of her blog
“The narrower the problem, the smarter the AI will seem”
Why overestimating AI is dangerous
Giraffing!
Machine and human creativity

The un-delicious origin of her blog

IEEE Spectrum: You studied electrical engineering as an undergrad, then got a master’s degree in physics. How did that lead to you becoming the comedian of AI?
Janelle Shane: I’ve been interested in machine learning since freshman year of college. During orientation at Michigan State, a professor who worked on evolutionary algorithms gave a talk about his work. It was full of the most interesting anecdotes, some of which I’ve used in my book. He told one story about people setting up a machine learning algorithm to do lens design, and the algorithm did end up designing an optical system that works… except one of the lenses was 50 feet thick, because they didn’t specify that it couldn’t do that.
I started working in his lab on optics, doing ultra-short laser pulse work. I ended up doing a lot more optics than machine learning, but I always found it interesting. One day I came across a list of recipes that someone had generated using a neural net, and I thought it was hilarious and remembered why I thought machine learning was so cool. That was in 2016, ages ago in machine learning land.
Spectrum: So you decided to “establish weirdness as your goal” for your blog. What was the first weird experiment that you blogged about?
Shane: It was generating cookbook recipes. The neural net came up with ingredients like: “Take ¼ pounds of bones or fresh bread.” That recipe started out: “Brown the salmon in oil, add creamed meat to the mixture.” It was making mistakes that showed the thing had no memory at all.
Spectrum: You say in the book that you can learn a lot about AI by giving it a task and watching it flail. What do you learn?
Shane: One thing you learn is how much it relies on surface appearances rather than deep understanding. With the recipes, for example: It got the structure of title, category, ingredients, instructions, yield at the end. But when you look more closely, it has instructions like “Fold the water and roll it into cubes.” So clearly this thing does not understand water, let alone the other things. It’s recognizing certain phrases that tend to occur, but it doesn’t have a concept that these recipes are describing something real. You start to realize how very narrow the algorithms in this world are. They only know exactly what we tell them in our data set.
“The narrower the problem, the smarter the AI will seem”

Spectrum: That makes me think of DeepMind’s AlphaGo, which was universally hailed as a triumph for AI. It can play the game of Go better than any human, but it doesn’t know what Go is. It doesn’t know that it’s playing a game.
Shane: It doesn’t know what a human is, or if it’s playing against a human or another program. That’s also a nice illustration of how well these algorithms do when they have a really narrow and well-defined problem.
The narrower the problem, the smarter the AI will seem. If it’s not just doing something repeatedly but instead has to understand something, coherence goes down. For example, take an algorithm that can generate images of objects. If the algorithm is restricted to birds, it can produce a recognizable bird. But ask that same algorithm to generate images of any animal, and the task becomes so broad that the bird it produces is an unrecognizable brown feathered smear against a green background.
Spectrum: That sounds… disturbing.
Shane: It’s disturbing in a weird, amusing way. What’s really disturbing is the humans it generates. It hasn’t seen them enough times to have a good representation, so you end up with an amorphous, usually pale-faced thing with way too many orifices. If you ask it to generate an image of a person eating pizza, you’ll get blocks of pizza texture floating around. But if you give that image to an image-recognition algorithm that was trained on that same data set, it will say, “Oh yes, that’s a person eating pizza.”
Why overestimating AI is dangerous

Spectrum: Do you see it as your role to puncture the AI hype?
Shane: I do see it that way. Not a lot of people are bringing out this side of AI. When I first started posting my results, I’d get people saying, “I don’t understand, this is AI, shouldn’t it be better than this? Why doesn’t it understand?” Many of the impressive examples of AI involve a really narrow task, or they’ve been set up to hide how little understanding they have. There’s a motivation, especially among people selling products based on AI, to represent the AI as more competent and understanding than it actually is.
Spectrum: If people overestimate the abilities of AI, what risk does that pose?
Shane: I worry when I see people trusting AI with decisions it can’t handle, like hiring decisions or decisions about moderating content. These are really tough tasks for AI to do well on. There are going to be a lot of glitches. I see people saying, “The computer decided this so it must be unbiased, it must be objective.”

“If the algorithm’s task is to replicate human hiring decisions, it’s going to glom onto gender bias and race bias.”
—Janelle Shane, AI Weirdness blogger
That’s another thing I find myself highlighting in the work I’m doing. If the data includes bias, the algorithm will copy that bias. You can’t tell it not to be biased, because it doesn’t understand what bias is. I think that message is an important one for people to understand.
If there’s bias to be found, the algorithm is going to go after it. It’s like, “Thank goodness, finally a signal that’s reliable.” But take a tough problem like looking at a stack of résumés and deciding who’s best for the job. If the algorithm’s task is to replicate human hiring decisions, it’s going to glom onto gender bias and race bias. There’s an example in the book of a hiring algorithm that Amazon was developing that discriminated against women, because the historical data it was trained on had that gender bias.
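The mechanics of that failure mode are easy to demonstrate. Below is a minimal sketch with invented data and scikit-learn (it has no connection to the Amazon system): a classifier trained to replicate biased historical decisions ends up putting weight on the protected attribute itself.

```python
# Invented data: historical "hired" labels depend mostly on skill but also
# penalize gender == 1. A model trained to replicate those labels copies the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)            # genuine qualification signal
gender = rng.integers(0, 2, size=n)   # protected attribute, irrelevant to the job
hired = (skill + 1.5 * (gender == 0) + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)
print("weight on skill: ", model.coef_[0][0])   # positive, as expected
print("weight on gender:", model.coef_[0][1])   # strongly negative: bias copied
```

Deleting the gender column is not a fix, either: any feature correlated with it (a women’s college on a résumé, say) can carry the same signal.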
Spectrum: What are the other downsides of using AI systems that don’t really understand their tasks?
Shane: There is a risk in putting too much trust in AI and not examining its decisions. Another issue is that it can solve the wrong problems, without anyone realizing it. There have been a couple of cases in medicine. For example, there was an algorithm that was trained to recognize things like skin cancer. But instead of recognizing the actual skin condition, it latched onto signals like the markings a surgeon makes on the skin, or a ruler placed there for scale. It was treating those things as a sign of skin cancer. It’s another indication that these algorithms don’t understand what they’re looking at and what the goal really is.
Giraffing

Spectrum: In your blog, you often have neural nets generate names for things—such as ice cream flavors, paint colors, cats, mushrooms, and types of apples. How do you decide on topics?
Shane: Quite often it’s because someone has written in with an idea or a data set. They’ll say something like, “I’m the MIT librarian and I have a whole list of MIT thesis titles.” That one was delightful. Or they’ll say, “We are a high school robotics team, and we know where there’s a list of robotics team names.” It’s fun to peek into a different world. I have to be careful that I’m not making fun of the naming conventions in the field. But there’s a lot of humor simply in the neural net’s complete failure to understand. Puns in particular—it really struggles with puns.
Spectrum: Your blog is quite absurd, but it strikes me that machine learning is often absurd in itself. Can you explain the concept of giraffing?
Shane: This concept was originally introduced by [internet security expert] Melissa Elliott. She proposed the phrase as a way to describe the algorithms’ tendency to see giraffes way more often than would be likely in the real world. She posted a whole bunch of examples, like a photo of an empty field in which an image-recognition algorithm has confidently reported that there are giraffes. Why does it think giraffes are present so often when they’re actually really rare? Because the algorithms are trained on data sets gathered online. People tend to say, “Hey look, a giraffe!” and then take a photo and share it. They don’t do that so often when they see an empty field with rocks.
There’s also a chatbot that has a delightful quirk. If you show it a photo and ask it how many giraffes are in the picture, it will always answer with some nonzero number. This quirk comes from the way the training data was generated: These were questions asked and answered by humans online. People tended not to ask the question “How many giraffes are there?” when the answer was zero. So you can show it a picture of someone holding a Wii remote. If you ask it how many giraffes are in the picture, it will say two.
Machine and human creativity

Spectrum: AI can be absurd, and maybe also creative. But you make the point that AI art projects are really human-AI collaborations: Collecting the data set, training the algorithm, and curating the output are all artistic acts on the part of the human. Do you see your work as a human-AI art project?
Shane: Yes, I think there is artistic intent in my work; you could call it literary or visual. It’s not so interesting to just take a pre-trained algorithm that’s been trained on utilitarian data and tell it to generate a bunch of stuff. Even if the algorithm isn’t one that I’ve trained myself, I think about what it’s doing that’s interesting, what kind of story I can tell around it, and what I want to show people.

The Halloween costume algorithm “was able to draw on its knowledge of which words are related to suggest things like sexy barnacle.”
—Janelle Shane, AI Weirdness blogger
Spectrum: For the past three years you’ve been getting neural nets to generate ideas for Halloween costumes. As language models have gotten dramatically better over the past three years, are the costume suggestions getting less absurd?
Shane: Yes. Before I would get a lot more nonsense words. This time I got phrases that were related to real things in the data set. I don’t believe the training data had the words Flying Dutchman or barnacle. But it was able to draw on its knowledge of which words are related to suggest things like sexy barnacle and sexy Flying Dutchman.
Spectrum: This year, I saw on Twitter that someone made the gothy giraffe costume happen. Would you ever dress up for Halloween in a costume that the neural net suggested?
Shane: I think that would be fun. But there would be some challenges. I would love to go as the sexy Flying Dutchman. But I may have to constrain my ambitions to something more like a list of leg parts.


#436184 Why People Demanded Privacy to Confide ...

This is part four of a six-part series on the history of natural language processing.

Between 1964 and 1966, Joseph Weizenbaum, a German American computer scientist at MIT’s artificial intelligence lab, developed the first-ever chatbot [PDF].

While there were already some rudimentary digital language generators in existence—programs that could spit out somewhat coherent lines of text—Weizenbaum’s program was the first designed explicitly for interactions with humans. The user could type in some statement or set of statements in their normal language, press enter, and receive a response from the machine. As Weizenbaum explained, his program made “certain kinds of natural-language conversation between man and computer possible.”

He named the program Eliza after Eliza Doolittle, the working-class hero of George Bernard Shaw’s Pygmalion who learns how to talk with an upper-class accent. The new Eliza was written for the 36-bit IBM 7094, an early transistorized mainframe computer, in a programming language that Weizenbaum developed called MAD-SLIP.

Because computer time was a valuable resource, Eliza could only be run via a time-sharing system; the user interacted with the program remotely via an electric typewriter and printer. When the user typed in a sentence and pressed enter, a message was sent to the mainframe computer. Eliza scanned the message for the presence of a keyword and used it in a new sentence to form a response that was sent back, printed out, and read by the user.

To encourage ongoing dialogue, Weizenbaum designed Eliza to simulate the type of conversational style used by a Rogerian psychotherapist. The program would take something the user said and reflect it back with a question, as can be seen in the example below. (Notice how the program takes keywords like “boyfriend” and “depressed” and reuses them.)

Human: Men are all alike.

Computer: IN WHAT WAY?

Human: They're always bugging us about something or other.

Computer: CAN YOU THINK OF A SPECIFIC EXAMPLE?

Human: Well, my boyfriend made me come here.

Computer: YOUR BOYFRIEND MADE YOU COME HERE

Human: He says I'm depressed much of the time.

Computer: I AM SORRY TO HEAR YOU ARE DEPRESSED

Human: It's true. I am unhappy.

Weizenbaum chose this mode of dialogue for Eliza because it gave the impression that the computer understood what was being said without having to offer anything new to the conversation. It created the illusion of comprehension and engagement in a mere 200 lines of code.
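The original program was written in MAD-SLIP and used ranked keywords with decomposition and reassembly rules; the Python sketch below, with simplified, invented patterns, shows the core reflect-the-keyword trick that produces exchanges like the one above.

```python
# A toy Eliza-style responder: find a keyword pattern, swap pronouns in the
# matched fragment, and echo it back inside a canned template.
import re

REFLECT = {"i": "you", "my": "your", "me": "you", "am": "are",
           "you": "i", "your": "my"}

RULES = [  # (keyword pattern, response template), tried in order
    (r"\bi am (.+)", "I am sorry to hear you are {0}"),
    (r"\bmy (.+)", "Your {0}"),
    (r"\balways\b", "Can you think of a specific example?"),
    (r".*", "In what way?"),  # fallback when no keyword matches
]

def reflect(fragment):
    """Swap first- and second-person words so a phrase can be echoed back."""
    return " ".join(REFLECT.get(word, word) for word in fragment.split())

def respond(sentence):
    text = sentence.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups())).upper()

print(respond("Men are all alike."))                     # IN WHAT WAY?
print(respond("Well, my boyfriend made me come here."))  # YOUR BOYFRIEND MADE YOU COME HERE
print(respond("He says I am depressed."))                # I AM SORRY TO HEAR YOU ARE DEPRESSED
```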

To test Eliza’s capacity to engage an interlocutor, Weizenbaum invited students and colleagues into his office and let them chat with the machine while he looked on. He noticed, with some concern, that during their brief interactions with Eliza, many users began forming emotional attachments to the algorithm. They would open up to the machine and confess problems they were facing in their lives and relationships.

During their brief interactions with Eliza, many users began forming emotional attachments to the algorithm.

Even more surprising was that this sense of intimacy persisted even after Weizenbaum described how the machine worked and explained that it didn’t really understand anything that was being said. Weizenbaum was most troubled when his secretary, who had watched him build the program from scratch over many months, insisted that he leave the room so she could talk to Eliza in private.

This experiment with Eliza made Weizenbaum question an idea about machine intelligence that Alan Turing had proposed in 1950. In his paper, entitled “Computing Machinery and Intelligence,” Turing suggested that if a computer could conduct a convincingly human conversation in text, one could assume it was intelligent—an idea that became the basis of the famous Turing Test.

But Eliza demonstrated that convincing communication between a human and a machine could take place even if comprehension only flowed from one side: The simulation of intelligence, rather than intelligence itself, was enough to fool people. Weizenbaum called this the Eliza effect, and believed it was a type of “delusional thinking” that humanity would collectively suffer from in the digital age. This insight was a profound shock for Weizenbaum, and one that came to define his intellectual trajectory over the next decade.

The simulation of intelligence, rather than intelligence itself, was enough to fool people.

In 1976, he published Computer Power and Human Reason: From Judgment to Calculation [PDF], which offered a long meditation on why people are willing to believe that a simple machine might be able to understand their complex human emotions.

In this book, he argues that the Eliza effect signifies a broader pathology afflicting “modern man.” In a world conquered by science, technology, and capitalism, people had grown accustomed to viewing themselves as isolated cogs in a large and uncaring machine. In such a diminished social world, Weizenbaum reasoned, people had grown so desperate for connection that they put aside their reason and judgment in order to believe that a program could care about their problems.

Weizenbaum spent the rest of his life developing this humanistic critique of artificial intelligence and digital technology. His mission was to remind people that their machines were not as smart as they were often said to be. And that even though it sometimes appeared as though they could talk, they were never really listening.

This is the fourth installment of a six-part series on the history of natural language processing. Last week’s post described Andrey Markov and Claude Shannon’s painstaking efforts to create statistical models of language for text generation. Come back next Monday for part five, “In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Conversation.”

You can also check out our prior series on the untold history of AI.


#436176 We’re Making Progress in Explainable ...

Machine learning algorithms are starting to exceed human performance in many narrow and specific domains, such as image recognition and certain types of medical diagnoses. They’re also rapidly improving in more complex domains such as generating eerily human-like text. We increasingly rely on machine learning algorithms to make decisions on a wide range of topics, from what we collectively spend billions of hours watching to who gets the job.

But machine learning algorithms cannot explain the decisions they make. How can we justify putting these systems in charge of decisions that affect people’s lives if we don’t understand how they’re arriving at those decisions?

This desire to get more than raw numbers from machine learning algorithms has led to a renewed focus on explainable AI: algorithms that can make a decision or take an action, and tell you the reasons behind it.

What Makes You Say That?
In some circumstances, you can already see a road to explainable AI. Take OpenAI’s GPT-2 model, or IBM’s Project Debater. Both of these generate text based on a large corpus of training data and try to make it as relevant as possible to the prompt that’s given. If these models were also able to provide a quick rundown of the top few sources in that training corpus they were drawing information from, it might be easier to understand where the “argument” (or poetic essay about unicorns) was coming from.

This is similar to the approach Google is now looking at for its image classifiers. Many algorithms are more sensitive to textures and the relationships between adjacent pixels in an image than to the outlines by which humans recognize objects. This leads to strange results: some algorithms can happily identify a totally scrambled image of a polar bear, but not a polar bear silhouette.

Previous attempts to make image classifiers explainable relied on significance mapping. In this method, the algorithm would highlight the areas of the image that contributed the most statistical weight to making the decision. This is usually determined by changing groups of pixels in the image and seeing which contribute to the biggest change in the algorithm’s impression of what the image is. For example, if the algorithm is trying to recognize a stop sign, changing the background is unlikely to be as important as changing the sign.
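A minimal sketch of that perturbation idea follows, using NumPy and a stand-in classifier (a real significance map would query a trained network): occlude one patch at a time and record how much the model’s confidence in its original answer drops.

```python
# Occlusion-style significance map: a higher heat value means the patch
# mattered more to the decision. `classify` is any function that returns a
# confidence score for the target class.
import numpy as np

def occlusion_map(image, classify, patch=8, fill=0.0):
    h, w = image.shape[:2]
    baseline = classify(image)  # confidence on the intact image
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill  # blank out one patch
            heat[i // patch, j // patch] = baseline - classify(occluded)
    return heat

# Toy stand-in: "confidence" is just the mean brightness of the image center,
# so only the central patches light up in the resulting map.
def toy_classify(img):
    return float(img[12:20, 12:20].mean())

print(occlusion_map(np.random.rand(32, 32), toy_classify))
```

In the stop-sign example above, patches on the sign itself would produce large confidence drops, while patches in the background would barely register.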

Google’s new approach changes the way that its algorithm recognizes objects, by examining them at several different resolutions and searching for matches to different “sub-objects” within the main object. You or I might recognize an ambulance from its flashing lights, its tires, and its logo; we might zoom in on the basketball held by an NBA player to deduce their occupation, and so on. By linking the overall categorization of an image to these “concepts,” the algorithm can explain its decision: I categorized this as a cat because of its tail and whiskers.

Even in this experiment, though, the “psychology” of the algorithm in decision-making is counter-intuitive. For example, in the basketball case, the most important factor in making the decision was actually the player’s jerseys rather than the basketball.

Can You Explain What You Don’t Understand?
While it may seem trivial, the conflict here is a fundamental one in approaches to artificial intelligence. Namely, how far can you get with mere statistical associations between huge sets of data, and how much do you need to introduce abstract concepts for real intelligence to arise?

At one end of the spectrum, Good Old-Fashioned AI, or GOFAI, dreamed up machines that would be entirely based on symbolic logic. The machine would be hard-coded with the concepts of a dog, a flower, a car, and so forth, alongside all of the symbolic “rules” that we internalize, allowing us to distinguish between dogs, flowers, and cars. (You can imagine that a similar approach to conversational AI would teach it words and strict grammatical structures from the top down, rather than having it “learn” language from statistical associations between letters and words in training data, as GPT-2 broadly does.)

Such a system would be able to explain itself, because it would deal in high-level, human-understandable concepts. The equation is closer to: “ball” + “stitches” + “white” = “baseball”, rather than a set of millions of numbers linking various pathways together. There are elements of GOFAI in Google’s new approach to explaining its image recognition: the new algorithm can recognize objects based on the sub-objects they contain. To do this, it requires at least a rudimentary understanding of what those sub-objects look like, and the rules that link objects to sub-objects, such as “cats have whiskers.”
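As a toy illustration of that GOFAI style (every rule below is hand-written and invented), classification by symbolic concept rules makes the explanation fall out for free: it is simply the list of matched concepts.

```python
# Hand-coded symbolic rules mapping required sub-object "concepts" to a
# category, in the spirit of "ball" + "stitches" + "white" = "baseball".
RULES = {
    "baseball":  {"ball", "stitches", "white"},
    "cat":       {"whiskers", "tail", "fur"},
    "ambulance": {"flashing lights", "tires", "logo"},
}

def classify(observed):
    """Return every category whose required concepts are all present,
    printing the matched concepts as the explanation."""
    matches = [cat for cat, required in RULES.items() if required <= observed]
    for cat in matches:
        reasons = ", ".join(sorted(RULES[cat]))
        print(f"I categorized this as a {cat} because of: {reasons}")
    return matches

classify({"ball", "stitches", "white", "grass"})
# -> I categorized this as a baseball because of: ball, stitches, white
```

The catch, as the next paragraph notes, is that someone has to write all those rules down by hand.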

The issue, of course, is the—maybe impossible—labor-intensive task of defining all these symbolic concepts and every conceivable rule that could possibly link them together by hand. The difficulty of creating systems like this, which could handle the “combinatorial explosion” present in reality, helped to lead to the first AI winter.

Meanwhile, neural networks rely on training themselves on vast sets of data. Without the “labeling” of supervised learning, this process might bear no relation to any concepts a human could understand (and therefore be utterly inexplicable).

Somewhere between these two extremes, explainable-AI enthusiasts hope, is a happy medium that can crunch colossal amounts of data, giving us all of the benefits that recent neural-network AI has bestowed, while showing its working in terms that humans can understand.

Image Credit: Image by Seanbatty from Pixabay


#436165 Video Friday: DJI’s Mavic Mini Is ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

DJI’s new Mavic Mini looks like a pretty great drone for US $400 ($500 for a combo with more accessories): It’s tiny, flies for 30 minutes, and will do what you need as far as pictures and video (although not a whole lot more).

DJI seems to have put a bunch of effort into making the drone 249 grams, 1 gram under what’s required for FAA registration. That means you save $5 and a few minutes of your time, but that does not mean you don’t have to follow the FAA’s rules and regulations governing drone use.

[ DJI ]

Don’t panic, but Clearpath and HEBI Robotics have armed the Jackal:

After locking eyes across a crowded room at ICRA 2019, Clearpath Robotics and HEBI Robotics basked in that warm and fuzzy feeling that comes with starting a new and exciting relationship. Over a conference hall coffee, they learned that the two companies have many overlapping interests. The most compelling was the realization that customers across a variety of industries are hunting for an elusive true love of their own – a robust but compact robotic platform combined with a long reach manipulator for remote inspection tasks.

After ICRA concluded, Arron Griffiths, Application Engineer at Clearpath, and Matthew Tesch, Software Engineer at HEBI, kept in touch and decided there had been enough magic in the air to warrant further exploration. A couple of months later, Matthew arrived at Clearpath to formally introduce HEBI’s X-Series Arm to Clearpath’s Jackal UGV. It was love.

[ Clearpath ]

Thanks Dave!

I’m really not a fan of the people-carrying drones, but heavy lift cargo drones seem like a more okay idea.

Volocopter, the pioneer in Urban Air Mobility, presented the demonstrator of its VoloDrone. This marks Volocopter’s expansion into the logistics, agriculture, infrastructure and public services industries. The VoloDrone is an unmanned, fully electric, heavy-lift utility drone capable of carrying a payload of 200 kg (440 lbs) up to 40 km (25 miles). With a standardized payload attachment, the VoloDrone can serve a great variety of purposes, from transporting boxes to liquids to equipment and beyond. It can be remotely piloted or flown in automated mode on pre-set routes.

[ Volocopter ]

JAY is a mobile service robot that projects a display on the floor and plays sound with its speaker. By playing sounds and videos, it provides visual and audio entertainment in various places such as exhibition halls, airports, hotels, department stores and more.

[ Rainbow Robotics ]

The DARPA Subterranean Challenge Virtual Tunnel Circuit concluded this week—it was the same idea as the physical challenge that took place in August, just with a lot less IRL dirt.

The awards ceremony and team presentations are in this next video, and we’ll have more on this once we get back from IROS.

[ DARPA SubT ]

NASA is sending a mobile robot to the south pole of the Moon to get a close-up view of the location and concentration of water ice in the region and for the first time ever, actually sample the water ice at the same pole where the first woman and next man will land in 2024 under the Artemis program.

About the size of a golf cart, the Volatiles Investigating Polar Exploration Rover, or VIPER, will roam several miles, using its four science instruments — including a 1-meter drill — to sample various soil environments. Planned for delivery in December 2022, VIPER will collect about 100 days of data that will be used to inform development of the first global water resource maps of the Moon.

[ NASA ]

Happy Halloween from HEBI Robotics!

[ HEBI ]

Happy Halloween from Soft Robotics!

[ Soft Robotics ]

Halloween must be really, really confusing for autonomous cars.

[ Waymo ]

Once a year at Halloween, hardworking JPL engineers put their skills to the test in a highly competitive pumpkin-carving contest. The result: A pumpkin gently landed on the Moon, its retrorockets smoldering, while across the room a Nemo-inspired pumpkin explored the subsurface ocean of Jupiter’s moon Europa. Suffice it to say that when the scientists and engineers at NASA’s Jet Propulsion Laboratory compete in a pumpkin-carving contest, the solar system’s the limit. Take a look at some of the masterpieces from 2019.

Now in its ninth year, the contest gives teams only one hour to carve and decorate their pumpkin, though they can prepare non-pumpkin materials – like backgrounds, sound effects and motorized parts – ahead of time.

[ JPL ]

The online autonomous navigation and semantic mapping experiment presented [below] is conducted with the Cassie Blue bipedal robot at the University of Michigan. The sensors attached to the robot include an IMU, a 32-beam LiDAR, and an RGB-D camera. The whole online process runs in real time on a Jetson Xavier and a laptop with an i7 processor.

[ BPL ]

Misty II is now available to anyone who wants one, and she’s on sale for a mere $2900.

[ Misty ]

We leveraged LiDAR-based SLAM, in conjunction with our specialized relative-localization sensor UVDAR, to perform a decentralized, communication-free swarm flight without the units knowing their absolute locations. The swarming and obstacle avoidance control is based on a modified Boids-like algorithm, while the whole swarm is controlled by directing a selected leader unit.

[ MRS ]

The MallARD robot is an autonomous surface vehicle (ASV), designed for the monitoring and inspection of wet storage facilities, for example spent fuel pools or wet silos. The MallARD is holonomic, uses a LiDAR for localisation, and features a robust trajectory-tracking controller.

The University of Manchester researcher Dr. Keir Groves designed and built the autonomous surface vehicle (ASV) for the challenge, which came in the top three of the second round in November 2017. The MallARD went on to compete in a final third round, where it was deployed by the IAEA in a spent fuel pond at a nuclear power plant in Finland, along with two other entries. The MallARD came second overall in November 2018.

[ RNE ]

Thanks Jennifer!

I sometimes get the sense that in the robotic grasping and manipulation world, suction cups are kinda seen as cheating. But their nature allows you to do some pretty interesting things.

More clever octopus footage please.

[ CMU ]

A Personal, At-Home Teacher For Playful Learning: From academic topics to child-friendly news bulletins, fun facts and more, Miko 2 is packed with relevant and freshly updated content specially designed by educationists and child-specialists. Your little one won’t even realize they’re learning.

As we point out pretty much every time we post a video like this, keep in mind that you’re seeing a heavily edited version of a hypothetical best case scenario for how this robot can function. And things like “creating a relationship that they can then learn how to form with their peers” is almost certainly overselling things. But at $300 (shipping included), this may be a decent robot as long as your expectations are appropriately calibrated.

[ Miko ]

ICRA 2018 plenary talk by Rodney Brooks: “Robots and People: the Research Challenge.”

[ IEEE RAS ]

ICRA-X 2018 talk by Ron Arkin: “Lethal Autonomous Robots and the Plight of the Noncombatant.”

[ IEEE RAS ]

On the most recent episode of the AI Podcast, Lex Fridman interviews Garry Kasparov.

[ AI Podcast ]


#436126 Quantum Computing Gets a Boost From AI ...

Illustration: Greg Mably

Anyone of a certain age who has even a passing interest in computers will remember the remarkable breakthrough that IBM made in 1997 when its Deep Blue chess-playing computer defeated Garry Kasparov, then the world chess champion. Computer scientists passed another such milestone in March 2016, when DeepMind (a subsidiary of Alphabet, Google’s parent company) announced that its AlphaGo program had defeated world-champion player Lee Sedol in the game of Go, a board game that had vexed AI researchers for decades. Recently, DeepMind’s algorithms have also bested human players in the computer games StarCraft II and Quake III Arena.

Some believe that the cognitive capacities of machines will overtake those of human beings in many spheres within a few decades. Others are more cautious and point out that our inability to understand the source of our own cognitive powers presents a daunting hurdle. How can we make thinking machines if we don’t fully understand our own thought processes?

Citizen science, which enlists masses of people to tackle research problems, holds promise here, in no small part because it can be used effectively to explore the boundary between human and artificial intelligence.

Some citizen-science projects ask the public to collect data from their surroundings (as eButterfly does for butterflies) or to monitor delicate ecosystems (as Eye on the Reef does for Australia’s Great Barrier Reef). Other projects rely on online platforms on which people help to categorize obscure phenomena in the night sky (Zooniverse) or add to the understanding of the structure of proteins (Foldit). Typically, people can contribute to such projects without any prior knowledge of the subject. Their fundamental cognitive skills, like the ability to quickly recognize patterns, are sufficient.

In order to design and develop video games that can allow citizen scientists to tackle scientific problems in a variety of fields, professor and group leader Jacob Sherson founded ScienceAtHome (SAH), at Aarhus University, in Denmark. The group began by considering topics in quantum physics, but today SAH hosts games covering other areas of physics, math, psychology, cognitive science, and behavioral economics. We at SAH search for innovative solutions to real research challenges while providing insight into how people think, both alone and when working in groups.

It is computationally intractable to completely map out a higher-dimensional landscape: This is called the curse of high dimensionality, and it plagues many optimization problems.

We believe that the design of new AI algorithms would benefit greatly from a better understanding of how people solve problems. This surmise has led us to establish the Center for Hybrid Intelligence within SAH, which tries to combine human and artificial intelligence, taking advantage of the particular strengths of each. The center’s focus is on the gamification of scientific research problems and the development of interfaces that allow people to understand and work together with AI.

Our first game, Quantum Moves, was inspired by our group’s research into quantum computers. Such computers can in principle solve certain problems that would take a classical computer billions of years. Quantum computers could challenge current cryptographic protocols, aid in the design of new materials, and give insight into natural processes that require an exact solution of the equations of quantum mechanics—something normal computers are inherently bad at doing.

One candidate system for building such a computer would capture individual atoms by “freezing” them, as it were, in the interference pattern produced when a laser beam is reflected back on itself. The captured atoms can thus be organized like eggs in a carton, forming a periodic crystal of atoms and light. Using these atoms to perform quantum calculations requires that we use tightly focused laser beams, called optical tweezers, to transport the atoms from site to site in the light crystal. This is a tricky business because individual atoms do not behave like particles; instead, they resemble a wavelike liquid governed by the laws of quantum mechanics.

In Quantum Moves, a player manipulates a touch screen or mouse to move a simulated laser tweezer and pick up a trapped atom, represented by a liquidlike substance in a bowl. Then the player must bring the atom back to the tweezer’s initial position while trying to minimize the sloshing of the liquid. Such sloshing would increase the energy of the atom and ultimately introduce errors into the operations of the quantum computer. Therefore, at the end of a move, the liquid should be at a complete standstill.

To understand how people and computers might approach such a task differently, you need to know something about how computerized optimization algorithms work. The countless ways of moving a glass of water without spilling may be regarded as constituting a “solution landscape.” One solution is represented by a single point in that landscape, and the height of that point represents the quality of the solution—how smoothly and quickly the glass of water was moved. This landscape might resemble a mountain range, where the top of each mountain represents a local optimum and where the challenge is to find the highest peak in the range—the global optimum.

Illustration: Greg Mably

Researchers must compromise between searching the landscape for taller mountains (“exploration”) and climbing to the top of the nearest mountain (“exploitation”). Making such a trade-off may seem easy when exploring an actual physical landscape: Merely hike around a bit to get at least the general lay of the land before surveying in greater detail what seems to be the tallest peak. But because each possible way of changing the solution defines a new dimension, a realistic problem can have thousands of dimensions. It is computationally intractable to completely map out such a higher-dimensional landscape. We call this the curse of high dimensionality, and it plagues many optimization problems.

Although algorithms are wonderfully efficient at crawling to the top of a given mountain, finding good ways of searching through the broader landscape poses quite a challenge, one that is at the forefront of AI research into such control problems. The conventional approach is to come up with clever ways of reducing the search space, either through insights generated by researchers or with machine-learning algorithms trained on large data sets.
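The trade-off is easy to see on a toy problem. The sketch below (an invented one-dimensional landscape, not a real quantum-control problem) shows that greedy hill climbing (pure exploitation) stops on the nearest local peak, while random restarts add a crude form of exploration.

```python
# Exploration vs. exploitation on a toy 1-D solution landscape with many
# local peaks of different heights.
import math
import random

def landscape(x):
    """Toy solution quality: several local peaks of different heights."""
    return math.sin(x) + 0.6 * math.sin(3 * x) + 0.1 * x

def hill_climb(x, step=0.01):
    """Exploitation: crawl uphill from x until no neighbor is higher."""
    while True:
        if landscape(x + step) > landscape(x):
            x += step
        elif landscape(x - step) > landscape(x):
            x -= step
        else:
            return x, landscape(x)

random.seed(0)
print("single climb from x=1:", hill_climb(1.0))  # stuck on a nearby local peak
restarts = (hill_climb(random.uniform(0, 10)) for _ in range(20))
print("best of 20 random restarts:", max(restarts, key=lambda r: r[1]))
```

In a realistic problem with thousands of dimensions, as noted above, this kind of brute-force exploration stops being affordable, which is exactly where human intuition proved useful.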

At SAH, we attacked certain quantum-optimization problems by turning them into a game. Our goal was not to show that people can beat computers in this arena but rather to understand the process of generating insights into such problems. We addressed two core questions: whether allowing players to explore the infinite space of possibilities will help them find good solutions and whether we can learn something by studying their behavior.

Today, more than 250,000 people have played Quantum Moves, and to our surprise, they did in fact search the space of possible moves differently from the algorithm we had put to the task. Specifically, we found that although players could not solve the optimization problem on their own, they were good at searching the broad landscape. The computer algorithms could then take those rough ideas and refine them.

Herbert A. Simon said that “solving a problem simply means representing it so as to make the solution transparent.” Apparently, that’s what our games can do with their novel user interfaces.

Perhaps even more interesting was our discovery that players had two distinct ways of solving the problem, each with a clear physical interpretation. One set of players started by placing the tweezer close to the atom while keeping a barrier between the atom trap and the tweezer. In classical physics, a barrier is an impenetrable obstacle, but because the atom liquid is a quantum-mechanical object, it can tunnel through the barrier into the tweezer, after which the player simply moved the tweezer to the target area. Another set of players moved the tweezer directly into the atom trap, picked up the atom liquid, and brought it back. We called these two strategies the “tunneling” and “shoveling” strategies, respectively.

Such clear strategies are extremely valuable because they are very difficult to obtain directly from an optimization algorithm. Involving humans in the optimization loop can thus help us gain insight into the underlying physical phenomena that are at play, knowledge that may then be transferred to other types of problems.

Quantum Moves raised several obvious issues. First, because generating an exceptional solution required further computer-based optimization, players were unable to get immediate feedback to help them improve their scores, and this often left them feeling frustrated. Second, we had tested this approach on only one scientific challenge with a clear classical analogue, that of the sloshing liquid. We wanted to know whether such gamification could be applied more generally, to a variety of scientific challenges that do not offer such immediately applicable visual analogies.

We address these two concerns in Quantum Moves 2. Here, the player first generates a number of candidate solutions by playing the original game. Then the player chooses which solutions to optimize using a built-in algorithm. As the algorithm improves a player’s solution, it modifies the solution path—the movement of the tweezer—to represent the optimized solution. Guided by this feedback, players can then improve their strategy, come up with a new solution, and iteratively feed it back into this process. This gameplay provides high-level heuristics and adds human intuition to the algorithm. The person and the machine work in tandem—a step toward true hybrid intelligence.

In parallel with the development of Quantum Moves 2, we also studied how people collaboratively solve complex problems. To that end, we opened our atomic physics laboratory to the general public—virtually. We let people from around the world dictate the experiments we would run to see if they would find ways to improve the results we were getting. What results? That’s a little tricky to explain, so we need to pause for a moment and provide a little background on the relevant physics.

One of the essential steps in building the quantum computer along the lines described above is to create the coldest state of matter in the universe, known as a Bose-Einstein condensate. Here millions of atoms oscillate in synchrony to form a wavelike substance, one of the largest purely quantum phenomena known. To create this ultracool state of matter, researchers typically use a combination of laser light and magnetic fields. There is no familiar physical analogy between such a strange state of matter and the phenomena of everyday life.

The result we were seeking in our lab was to create as much of this enigmatic substance as was possible given the equipment available. The sequence of steps to accomplish that was unknown. We hoped that gamification could help to solve this problem, even though it had no classical analogy to present to game players.

Images: ScienceAtHome

Fun and Games: The Quantum Moves game evolved over time, from a relatively crude early version [top] to its current form [second from top] and then a major revision, Quantum Moves 2 [third from top]. Skill Lab: Science Detective games [bottom] test players’ cognitive skills.

In October 2016, we released a game that, for two weeks, guided how we created Bose-Einstein condensates in our laboratory. By manipulating simple curves in the game interface, players generated experimental sequences for us to use in producing these condensates—and they did so without needing to know anything about the underlying physics. A player would generate such a solution, and a few minutes later we would run the sequence in our laboratory. The number of ultracold atoms in the resulting Bose-Einstein condensate was measured and fed back to the player as a score. Players could then decide either to try to improve their previous solution or to copy and modify other players’ solutions. About 600 people from all over the world participated, submitting 7,577 solutions in total. Many of them yielded bigger condensates than we had previously produced in the lab.

So this exercise succeeded in achieving our primary goal, but it also allowed us to learn something about human behavior. We learned, for example, that players behave differently based on where they sit on the leaderboard. High-performing players make small changes to their successful solutions (exploitation), while poorly performing players are willing to make more dramatic changes (exploration). As a collective, the players nicely balance exploration and exploitation. How they do so provides valuable inspiration to researchers trying to understand human problem solving in social science as well as to those designing new AI algorithms.

How could mere amateurs outperform experienced experimental physicists? The players certainly weren’t better at physics than the experts—but they could do better because of the way in which the problem was posed. By turning the research challenge into a game, we gave players the chance to explore solutions that had previously required complex programming to study. Indeed, even expert experimentalists improved their solutions dramatically by using this interface.

Insight into why that’s possible can probably be found in the words of the late economics Nobel laureate Herbert A. Simon: “Solving a problem simply means representing it so as to make the solution transparent [PDF].” Apparently, that’s what our games can do with their novel user interfaces. We believe that such interfaces might be a key to using human creativity to solve other complex research problems.

Eventually, we’d like to get a better understanding of why this kind of gamification works as well as it does. A first step would be to collect more data on what the players do while they are playing. But even with massive amounts of data, detecting the subtle patterns underlying human intuition is an overwhelming challenge. To advance, we need a deeper insight into the cognition of the individual players.

As a step toward this goal, ScienceAtHome created Skill Lab: Science Detective, a suite of minigames exploring visuospatial reasoning, response inhibition, reaction times, and other basic cognitive skills. We then compare players’ performance in the games with how well these same people did on established psychological tests of those abilities. The point is to allow players to assess their own cognitive strengths and weaknesses while donating their data for further public research.

In the fall of 2018 we launched a prototype of this large-scale profiling in collaboration with the Danish Broadcasting Corp. Since then more than 20,000 people have participated, and in part because of the publicity granted by the public-service channel, participation has been very evenly distributed across ages and by gender. Such broad appeal is rare in social science, where the test population is typically drawn from a very narrow demographic, such as college students.

Never before has such a large academic experiment in human cognition been conducted. We expect to gain new insights into many things, among them how combinations of cognitive abilities sharpen or decline with age, what characteristics may be used to prescreen for mental illnesses, and how to optimize the building of teams in our work lives.

And so what started as a fun exercise in the weird world of quantum mechanics has now become an exercise in understanding the nuances of what makes us human. While we still seek to understand atoms, we can now aspire to understand people’s minds as well.

This article appears in the November 2019 print issue as “A Man-Machine Mind Meld for Quantum Computing.”

About the Authors
Ottó Elíasson, Carrie Weidner, Janet Rafner, and Shaeema Zaman Ahmed work with the ScienceAtHome project at Aarhus University in Denmark.
