#436184 Why People Demanded Privacy to Confide ...
This is part four of a six-part series on the history of natural language processing.
Between 1964 and 1966, Joseph Weizenbaum, a German American computer scientist at MIT’s artificial intelligence lab, developed the first-ever chatbot [PDF].
While there were already some rudimentary digital language generators in existence—programs that could spit out somewhat coherent lines of text—Weizenbaum’s program was the first designed explicitly for interactions with humans. The user could type in some statement or set of statements in their normal language, press enter, and receive a response from the machine. As Weizenbaum explained, his program made “certain kinds of natural-language conversation between man and computer possible.”
He named the program Eliza after Eliza Doolittle, the working-class hero of George Bernard Shaw’s Pygmalion who learns how to talk with an upper-class accent. The new Eliza was written for the 36-bit IBM 7094, an early transistorized mainframe computer, in a programming language that Weizenbaum developed called MAD-SLIP.
Because computer time was a valuable resource, Eliza could only be run via a time-sharing system; the user interacted with the program remotely via an electric typewriter and printer. When the user typed in a sentence and pressed enter, a message was sent to the mainframe computer. Eliza scanned the message for the presence of a keyword and used it in a new sentence to form a response that was sent back, printed out, and read by the user.
To encourage ongoing dialogue, Weizenbaum designed Eliza to simulate the type of conversational style used by a Rogerian psychotherapist. The program would take something the user said and reflect it back with a question, as can be seen in the example below. (Notice how the program takes keywords like “boyfriend” and “depressed” and re-uses them.)
Human: Men are all alike.
Computer: IN WHAT WAY?
Human: They're always bugging us about something or other.
Computer: CAN YOU THINK OF A SPECIFIC EXAMPLE?
Human: Well, my boyfriend made me come here.
Computer: YOUR BOYFRIEND MADE YOU COME HERE
Human: He says I'm depressed much of the time.
Computer: I AM SORRY TO HEAR YOU ARE DEPRESSED
Human: It's true. I am unhappy.
Weizenbaum chose this mode of dialogue for Eliza because it gave the impression that the computer understood what was being said without having to offer anything new to the conversation. It created the illusion of comprehension and engagement in a mere 200 lines of code.
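Weizenbaum’s original MAD-SLIP source isn’t reproduced here, but the keyword-and-reflection trick is easy to sketch. The Python snippet below is an illustrative approximation of the pattern visible in the transcript above; the keyword rules and pronoun swaps are invented for the example rather than taken from Eliza’s actual script.

```python
import re

# Pronoun swaps applied to the captured phrase before it is echoed back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "myself": "yourself"}

# Keyword rules: a pattern to match against the input and a response template.
RULES = [
    (re.compile(r".*\bmy (.+)", re.IGNORECASE), "YOUR {}"),
    (re.compile(r".*\bi am (.+)", re.IGNORECASE), "I AM SORRY TO HEAR YOU ARE {}"),
    (re.compile(r".*\balways\b.*", re.IGNORECASE), "CAN YOU THINK OF A SPECIFIC EXAMPLE?"),
    (re.compile(r".*\ball\b.*", re.IGNORECASE), "IN WHAT WAY?"),
]

def reflect(phrase: str) -> str:
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in phrase.split())

def respond(statement: str) -> str:
    """Return a response built from the first keyword rule that matches."""
    statement = statement.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.match(statement)
        if match:
            groups = match.groups()
            if groups:
                return template.format(reflect(groups[0]).upper())
            return template
    return "PLEASE GO ON"  # fallback when no keyword is found

if __name__ == "__main__":
    for line in ["Men are all alike.",
                 "Well, my boyfriend made me come here.",
                 "I am unhappy."]:
        print(f"Human: {line}\nComputer: {respond(line)}")
```

Run against the sample lines, the sketch reproduces responses of the kind shown in the transcript, which is roughly all the machinery the illusion required.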
To test Eliza’s capacity to engage an interlocutor, Weizenbaum invited students and colleagues into his office and let them chat with the machine while he looked on. He noticed, with some concern, that during their brief interactions with Eliza, many users began forming emotional attachments to the algorithm. They would open up to the machine and confess problems they were facing in their lives and relationships.
Even more surprising was that this sense of intimacy persisted even after Weizenbaum described how the machine worked and explained that it didn’t really understand anything that was being said. Weizenbaum was most troubled when his secretary, who had watched him build the program from scratch over many months, insisted that he leave the room so she could talk to Eliza in private.
This experiment with Eliza led Weizenbaum to question an idea that Alan Turing had proposed in 1950 about machine intelligence. In his paper, entitled “Computing Machinery and Intelligence,” Turing suggested that if a computer could conduct a convincingly human conversation in text, one could assume it was intelligent—an idea that became the basis of the famous Turing Test.
But Eliza demonstrated that convincing communication between a human and a machine could take place even if comprehension only flowed from one side: The simulation of intelligence, rather than intelligence itself, was enough to fool people. Weizenbaum called this the Eliza effect, and believed it was a type of “delusional thinking” that humanity would collectively suffer from in the digital age. This insight was a profound shock for Weizenbaum, and one that came to define his intellectual trajectory over the next decade.
In 1976, he published Computer Power and Human Reason: From Judgment to Calculation [PDF], which offered a long meditation on why people are willing to believe that a simple machine might be able to understand their complex human emotions.
In this book, he argues that the Eliza effect signifies a broader pathology afflicting “modern man.” In a world conquered by science, technology, and capitalism, people had grown accustomed to viewing themselves as isolated cogs in a large and uncaring machine. In such a diminished social world, Weizenbaum reasoned, people had grown so desperate for connection that they put aside their reason and judgment in order to believe that a program could care about their problems.
Weizenbaum spent the rest of his life developing this humanistic critique of artificial intelligence and digital technology. His mission was to remind people that their machines were not as smart as they were often said to be. And that even though it sometimes appeared as though they could talk, they were never really listening.
This is the fourth installment of a six-part series on the history of natural language processing. Last week’s post described Andrey Markov and Claude Shannon’s painstaking efforts to create statistical models of language for text generation. Come back next Monday for part five, “In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Conversation.”
You can also check out our prior series on the untold history of AI.
#436180 Bipedal Robot Cassie Cal Learns to ...
There’s no particular reason why knowing how to juggle would be a useful skill for a robot. Despite this, robots are frequently taught how to juggle things. Blind robots can juggle, humanoid robots can juggle, and even drones can juggle. Why? Because juggling is hard, man! You have to think about a bunch of different things at once, and also do a bunch of different things at once, which this particular human at least finds to be overly stressful. While juggling may not stress robots out, it does require carefully coordinated sensing and computing and actuation, which means that it’s as good a task as any (and a more entertaining task than most) for testing the capabilities of your system.
UC Berkeley’s Cassie Cal robot, which consists of two legs and what could be called a torso if you were feeling charitable, has just learned to juggle by bouncing a ball on what would be her head if she had one of those. The idea is that if Cassie can juggle while balancing at the same time, she’ll be better able to do other things that require dynamic multitasking, too. And if that doesn’t work out, she’ll still be able to join the circus.
Cassie’s juggling is assisted by an external motion capture system that tracks the location of the ball, but otherwise everything is autonomous. Cassie is able to juggle the ball by leaning forwards and backwards, left and right, and moving up and down. She does this while maintaining her own balance, which is the whole point of this research—successfully executing two dynamic behaviors that may sometimes be at odds with one another. The end goal here is not to make a better juggling robot, but rather to explore dynamic multitasking, a skill that robots will need in order to be successful in human environments.
This work is from the Hybrid Robotics Lab at UC Berkeley, led by Koushil Sreenath, and is being done by Katherine Poggensee, Albert Li, Daniel Sotsaikich, Bike Zhang, and Prasanth Kotaru.
For a bit more detail, we spoke with Albert Li via email.
UC Berkeley’s Cassie Cal getting ready to juggle. [Image: UC Berkeley]
IEEE Spectrum: What would be involved in getting Cassie to juggle without relying on motion capture?
Albert Li: Our motivation for starting off with motion capture was to first address the control challenge of juggling on a biped without worrying about implementing the perception. We actually do have a ball detector working on a camera, which would mean we wouldn’t have to rely on the motion capture system. However, we need to mount the camera in a way that provides the best upwards field of view, and we also have to develop a reliable estimator. The estimator is particularly important because when the ball gets close enough to the camera, we actually can’t track the ball and have to assume our dynamic models describe its motion accurately enough until it bounces back up.
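Li doesn’t spell out the estimator itself, but the idea of coasting on a dynamic model while the ball is out of view can be sketched in a few lines. The snippet below is an illustrative stand-in rather than the lab’s code: it assumes simple free-fall dynamics and an invented coefficient of restitution and paddle height.

```python
G = 9.81           # gravitational acceleration, m/s^2
RESTITUTION = 0.8  # assumed coefficient of restitution at the paddle (illustrative)

def predict_ball(z, vz, dt, paddle_height=1.0):
    """Propagate the ball's vertical state for one time step when the camera
    cannot see it, assuming free fall plus a simple bounce model."""
    z_new = z + vz * dt - 0.5 * G * dt ** 2
    vz_new = vz - G * dt
    # If the predicted ball crosses the paddle plane while falling, model a bounce.
    if z_new <= paddle_height and vz_new < 0:
        z_new = paddle_height
        vz_new = -RESTITUTION * vz_new
    return z_new, vz_new

# Example: coast the last measured state through a 0.2 s tracking blackout at 100 Hz.
z, vz = 1.6, -2.0  # last state from the ball tracker (height in m, velocity in m/s)
for _ in range(20):
    z, vz = predict_ball(z, vz, dt=0.01)
print(f"predicted height after blackout: {z:.2f} m, velocity: {vz:.2f} m/s")
```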
What keeps Cassie from juggling indefinitely?
There are a few factors that affect how long Cassie can sustain a juggle. While in simulation the paddle exhibits homogeneous properties like its stiffness and damping, in reality every surface has anisotropic contact properties. So, there are parts of the paddle which may be better for juggling than others (and importantly, react differently than modeled). These differences in contact are also exacerbated due to how the paddle is cantilevered when mounted on Cassie. When the ball hits these areas, it leads to a larger than expected error in a juggle. Due to the small size of the paddle, the ball may then just hit the paddle’s edge and end the juggling run. Over a very long run, this is a likely occurrence. Additionally, some large juggling errors could cause Cassie’s feet to slip slightly, which ends up changing the stable standing position over time. Since this version of the controller assumes Cassie is stationary, this change in position eventually leads to poor juggles and failure.
Would Cassie be able to juggle while walking (or hovershoe-ing)?
Walking (and hovershoe-ing) while juggling is a far more challenging problem and is certainly a goal for future research. Some of these challenges include getting the paddle to precise poses to juggle the ball while also moving to avoid any destabilizing effects of stepping incorrectly. The number of juggles per step of walking could also vary and make the mathematics of the problem more challenging. The controller goal is also more involved. While the current goal of the juggling controller is to juggle the ball to a static apex position, with a walking juggling controller, we may instead want to hit the ball forwards and also walk forwards to bounce it, juggle the ball along a particular path, etc. Solving such challenges would be the main thrusts of the follow-up research.
Can you give an example of a practical task that would be made possible by using a controller like this?
Studying juggling means studying contact behavior and leveraging our models of it to achieve a known objective. Juggling could also be used to study predictable post-contact flight behavior. Consider the scenario where a robot attempts to make a catch but fails, lets the ball bounce off of its hand, and then recovers the catch. This behavior could also be intentional: It is often easier to first execute a bounce to direct the target and then perform a subsequent action. For example, volleyball players could in principle directly hit a spiked ball back, but almost always bump the ball back up and then return it.
Even beyond this motivating example, the kinds of models we employ to get juggling working are more generally applicable to any task that involves contact, which could include tasks besides bouncing like sliding and rolling. For example, clearing space on a desk by pushing objects to the side may be preferable to individually manipulating each and every object on it.
You mention collaborative juggling or juggling multiple balls—is that something you’ve tried yet? Can you talk a bit more about what you’re working on next?
We haven’t yet started working on collaborative or multi-ball juggling, but that’s also a goal for future work. Juggling multiple balls statically is probably the most reasonable next goal, but presents additional challenges. For instance, you have to encode a notion of juggling urgency (if the second ball isn’t hit hard enough, you have less time to get the first ball up before you get back to the second one).
On the other hand, collaborative human-robot juggling requires a more advanced decision-making framework. To get robust multi-agent juggling, the robot will need to employ some sort of probabilistic model of the expected human behavior (are they likely to move somewhere? Are they trying to catch the ball high or low? Is it safe to hit the ball back?). In general, developing such human models is difficult since humans are fairly unpredictable and often don’t exhibit rational behavior. This will be a focus of future work.
[ Hybrid Robotics Lab ]
#436176 We’re Making Progress in Explainable ...
Machine learning algorithms are starting to exceed human performance in many narrow and specific domains, such as image recognition and certain types of medical diagnoses. They’re also rapidly improving in more complex domains such as generating eerily human-like text. We increasingly rely on machine learning algorithms to make decisions on a wide range of topics, from what we collectively spend billions of hours watching to who gets the job.
But machine learning algorithms cannot explain the decisions they make.
How can we justify putting these systems in charge of decisions that affect people’s lives if we don’t understand how they’re arriving at those decisions?
This desire to get more than raw numbers from machine learning algorithms has led to a renewed focus on explainable AI: algorithms that can make a decision or take an action, and tell you the reasons behind it.
What Makes You Say That?
In some circumstances, you can see a road to explainable AI already. Take OpenAI’s GPT-2 model, or IBM’s Project Debater. Both of these generate text based on a large corpus of training data, and try to make it as relevant as possible to the prompt that’s given. If these models were also able to provide a quick run-down of the top few sources in that corpus of training data they were drawing information from, it might be easier to understand where the “argument” (or poetic essay about unicorns) was coming from.
This is similar to the approach Google is now looking at for its image classifiers. Many algorithms are more sensitive to textures and the relationship between adjacent pixels in an image, rather than recognizing objects by their outlines as humans do. This leads to strange results: some algorithms can happily identify a totally scrambled image of a polar bear, but not a polar bear silhouette.
Previous attempts to make image classifiers explainable relied on significance mapping. In this method, the algorithm would highlight the areas of the image that contributed the most statistical weight to making the decision. This is usually determined by changing groups of pixels in the image and seeing which contribute to the biggest change in the algorithm’s impression of what the image is. For example, if the algorithm is trying to recognize a stop sign, changing the background is unlikely to be as important as changing the sign.
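The perturb-and-measure idea behind significance mapping is straightforward to sketch. The snippet below is illustrative only: the `classify` function is a dummy stand-in for whatever model is being inspected (here it just scores the brightness of the image’s center), not any real classifier API.

```python
import numpy as np

def classify(image: np.ndarray) -> float:
    """Stand-in for the model under inspection (hypothetical): a dummy
    'classifier' whose score is simply the brightness of the central region."""
    h, w = image.shape[:2]
    return float(image[h // 4: 3 * h // 4, w // 4: 3 * w // 4].mean())

def significance_map(image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Occlude one patch of pixels at a time and record how much the score
    drops; bigger drops mean those pixels mattered more to the decision."""
    baseline = classify(image)
    h, w = image.shape[:2]
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()  # gray out the patch
            heatmap[i // patch, j // patch] = baseline - classify(occluded)
    return heatmap

# Usage: a random 64x64 "image" yields a heatmap whose hot spots sit in the center,
# because that is the only region the dummy classifier pays attention to.
print(significance_map(np.random.rand(64, 64)).round(3))
```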
Google’s new approach changes the way that its algorithm recognizes objects, by examining them at several different resolutions and searching for matches to different “sub-objects” within the main object. You or I might recognize an ambulance from its flashing lights, its tires, and its logo; we might zoom in on the basketball held by an NBA player to deduce their occupation, and so on. By linking the overall categorization of an image to these “concepts,” the algorithm can explain its decision: I categorized this as a cat because of its tail and whiskers.
Even in this experiment, though, the “psychology” of the algorithm in decision-making is counter-intuitive. For example, in the basketball case, the most important factor in making the decision was actually the player’s jerseys rather than the basketball.
Can You Explain What You Don’t Understand?
While it may seem trivial, the conflict here is a fundamental one in approaches to artificial intelligence. Namely, how far can you get with mere statistical associations between huge sets of data, and how much do you need to introduce abstract concepts for real intelligence to arise?
At one end of the spectrum, Good Old-Fashioned AI, or GOFAI, dreamed up machines that would be entirely based on symbolic logic. The machine would be hard-coded with the concept of a dog, a flower, a car, and so forth, alongside all of the symbolic “rules” which we internalize, allowing us to distinguish between dogs, flowers, and cars. (You can imagine a similar approach to a conversational AI would teach it words and strict grammatical structures from the top down, rather than “learning” languages from statistical associations between letters and words in training data, as GPT-2 broadly does.)
Such a system would be able to explain itself, because it would deal in high-level, human-understandable concepts. The equation is closer to: “ball” + “stitches” + “white” = “baseball”, rather than a set of millions of numbers linking various pathways together. There are elements of GOFAI in Google’s new approach to explaining its image recognition: the new algorithm can recognize objects based on the sub-objects they contain. To do this, it requires at least a rudimentary understanding of what those sub-objects look like, and the rules that link objects to sub-objects, such as “cats have whiskers.”
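To make the contrast concrete, a toy GOFAI-style classifier is essentially a hand-written table from detected sub-concepts to labels. The rules below are hypothetical, but they show both why such a system explains itself for free and why enumerating every rule by hand quickly becomes unmanageable.

```python
# Hand-written rules mapping required sub-concepts to an object label (GOFAI-style).
RULES = {
    "baseball": {"ball", "stitches", "white"},
    "cat": {"whiskers", "tail", "fur"},
    "ambulance": {"flashing lights", "tires", "logo"},
}

def classify(detected: set[str]) -> tuple[str, set[str]]:
    """Return the first label whose required concepts are all present,
    along with those concepts as the 'explanation' of the decision."""
    for label, required in RULES.items():
        if required <= detected:
            return label, required
    return "unknown", set()

label, evidence = classify({"white", "ball", "stitches", "grass"})
print(f"I categorized this as a {label} because of: {', '.join(sorted(evidence))}")
```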
The issue, of course, is the—maybe impossible—labor-intensive task of defining all these symbolic concepts and every conceivable rule that could possibly link them together by hand. The difficulty of creating systems like this, which could handle the “combinatorial explosion” present in reality, helped to lead to the first AI winter.
Meanwhile, neural networks rely on training themselves on vast sets of data. Without the “labeling” of supervised learning, this process might bear no relation to any concepts a human could understand (and therefore be utterly inexplicable).
Somewhere between these two, hope explainable AI enthusiasts, is a happy medium that can crunch colossal amounts of data, giving us all of the benefits that recent, neural-network AI has bestowed, while showing its working in terms that humans can understand.
Image Credit: Seanbatty from Pixabay
#436174 How Selfish Are You? It Matters for ...
Our personalities impact almost everything we do, from the career path we choose to the way we interact with others to how we spend our free time.
But what about the way we drive—could personality be used to predict whether a driver will cut someone off, speed, or, say, zoom through a yellow light instead of braking?
There must be something to the idea that those of us who are more mild-mannered are likely to drive a little differently than the more assertive among us. At least, that’s what a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is betting on.
“Working with and around humans means figuring out their intentions to better understand their behavior,” said graduate student Wilko Schwarting, lead author on the paper published this week in Proceedings of the National Academy of Sciences. “People’s tendencies to be collaborative or competitive often spills over into how they behave as drivers. In this paper we sought to understand if this was something we could actually quantify.”
The team is building a model that classifies drivers according to how selfish or selfless they are, then uses that classification to help predict how drivers will behave on the road. Ideally, the system will help improve safety for self-driving cars by integrating a degree of ‘humanity’ into how their software perceives its surroundings; right now, human drivers and their cars are just more objects to that software, not much different from a tree or a sign.
But unlike trees and signs, humans have behavioral patterns and motivations. For greater success on roads that are still dominated by us mercurial humans, the CSAIL team believes, driverless cars should take our personalities into account.
How Selfish Are You?
How important is your own well-being to you vs. the well-being of other people? It’s a hard question to answer without specifying who the other people are; your answer would likely differ if we’re talking about your friends, loved ones, strangers, or people you actively dislike.
In social psychology, social value orientation (SVO) refers to people’s preferences for allocating resources between themselves and others. The two broad categories people can fall into are pro-social (people who are more cooperative, and expect cooperation from others) and pro-self (pretty self-explanatory: “Me first!”).
Based on drivers’ behavior in two different road scenarios—merging and making a left turn—the CSAIL team’s model classified drivers as pro-social or egoistic. Slowing down to let someone merge into your lane in front of you would earn you a pro-social classification, while cutting someone off or not slowing down to allow a left turn would make you egoistic.
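The CSAIL work estimates a continuous social value orientation rather than a hard label; the sketch below is a loose illustration of the underlying idea (an SVO angle that trades off reward to self against reward to others), with invented reward values and an invented threshold rather than anything taken from the paper.

```python
import math

def svo_angle(reward_to_self: float, reward_to_others: float) -> float:
    """SVO is often summarized as an angle: near 0 degrees is purely selfish,
    around 45 degrees weights self and others roughly equally."""
    return math.degrees(math.atan2(reward_to_others, reward_to_self))

def classify_driver(observed: list[tuple[float, float]],
                    threshold_deg: float = 22.5) -> str:
    """Average the SVO angle implied by observed interactions, e.g. how much
    benefit a driver kept versus how much they ceded when someone tried to merge."""
    mean_angle = sum(svo_angle(s, o) for s, o in observed) / len(observed)
    return "prosocial" if mean_angle >= threshold_deg else "egoistic"

# Each tuple: (benefit the driver kept for themselves, benefit ceded to the other car).
print(classify_driver([(1.0, 0.9), (0.8, 1.1)]))  # yields often -> prosocial
print(classify_driver([(1.5, 0.1), (1.2, 0.0)]))  # cuts people off -> egoistic
```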
On the Road
The system then uses these classifications to model and predict drivers’ behavior. The team demonstrated that using their model, errors in predicting the behavior of other cars were reduced by 25 percent.
In a left-turn simulation, for example, their car would wait when an approaching car had an egoistic driver, but go ahead and make the turn when the other driver was prosocial. Similarly, if a self-driving car is trying to merge into the left lane and it’s identified the drivers in that lane as egoistic, it will assume they won’t slow down to let it in, and will wait to merge behind them. If, on the other hand, the self-driving car knows that the human drivers in the left lane are prosocial, it will attempt to merge between them since they’re likely to let it in.
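Reduced to its simplest form, that merge logic looks something like the sketch below. The gap thresholds and function names are invented for illustration and do not come from the CSAIL controller.

```python
def plan_merge(gap_driver_svo: str, predicted_gap_m: float,
               min_gap_m: float = 8.0) -> str:
    """Decide whether to merge ahead of a human driver based on their estimated
    social value orientation and the gap they are predicted to leave."""
    if gap_driver_svo == "prosocial":
        # A cooperative driver is expected to ease off and open the gap further.
        return "merge ahead" if predicted_gap_m >= 0.5 * min_gap_m else "wait"
    # An egoistic driver is assumed not to yield, so demand the full gap now.
    return "merge ahead" if predicted_gap_m >= min_gap_m else "merge behind"

print(plan_merge("prosocial", predicted_gap_m=5.0))  # -> merge ahead
print(plan_merge("egoistic", predicted_gap_m=5.0))   # -> merge behind
```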
So how does this all translate to better safety?
It’s essentially a starting point for imbuing driverless cars with some of the abilities and instincts that are innate to humans. If you’re driving down the highway and you see a car swerving outside its lane, you’ll probably distance yourself from that car because you know it’s more likely to cause an accident. Our senses take in information we can immediately interpret and act on, and this includes predictions about what might happen based on observations of what just happened. Our observations can clue us in to a driver’s personality (the swerver must be careless) or simply to the circumstances of a given moment (the swerver was texting).
But right now, self-driving cars assume all human drivers behave the same way, and they have no mechanism for incorporating observations about behavioral differences between drivers into their decisions.
“Creating more human-like behavior in autonomous vehicles (AVs) is fundamental for the safety of passengers and surrounding vehicles, since behaving in a predictable manner enables humans to understand and appropriately respond to the AV’s actions,” said Schwarting.
Though it may feel a bit unsettling to think of an algorithm lumping you into a category and driving accordingly around you, maybe it’s less unsettling than thinking of self-driving cars as pre-programmed, oblivious robots unable to adapt to different driving styles.
The team’s next step is to apply their model to pedestrians, bikes, and other agents frequently found in driving environments. They also plan to look into other robotic systems that act among people, like household robots, and to integrate social value orientation into their algorithms.
Image Credit: Free-Photos from Pixabay