Tag Archives: human-like
#437253 Powerful human-like hands create safer ...
Need a robot with a soft touch? A team of Michigan State University engineers has designed and developed a novel humanoid hand that may be able to help. Continue reading
#436176 We’re Making Progress in Explainable ...
Machine learning algorithms are starting to exceed human performance in many narrow and specific domains, such as image recognition and certain types of medical diagnoses. They’re also rapidly improving in more complex domains such as generating eerily human-like text. We increasingly rely on machine learning algorithms to make decisions on a wide range of topics, from what we collectively spend billions of hours watching to who gets the job.
But machine learning algorithms cannot explain the decisions they make.
How can we justify putting these systems in charge of decisions that affect people’s lives if we don’t understand how they’re arriving at those decisions?
This desire to get more than raw numbers from machine learning algorithms has led to a renewed focus on explainable AI: algorithms that can make a decision or take an action, and tell you the reasons behind it.
What Makes You Say That?
In some circumstances, you can see a road to explainable AI already. Take OpenAI’s GPT-2 model, or IBM’s Project Debater. Both of these generate text based on a large corpus of training data, and try to make it as relevant as possible to the prompt that’s given. If these models were also able to provide a quick run-down of the top few sources in that corpus of training data they were drawing information from, it might be easier to understand where the “argument” (or poetic essay about unicorns) was coming from.
This is similar to the approach Google is now looking at for its image classifiers. Many algorithms are more sensitive to the textures and relationships between adjacent pixels in an image than to the outlines humans use to recognize objects. This leads to strange results: some algorithms can happily identify a totally scrambled image of a polar bear, but not a polar bear silhouette.
Previous attempts to make image classifiers explainable relied on significance mapping. In this method, the algorithm highlights the areas of the image that contributed the most statistical weight to its decision. This is usually determined by changing groups of pixels in the image and seeing which changes produce the biggest shift in the algorithm’s impression of what the image is. For example, if the algorithm is trying to recognize a stop sign, changing the background is unlikely to be as important as changing the sign itself.
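To make that concrete, here’s a minimal sketch of occlusion-style significance mapping in Python. The `model` callable, patch size, and fill value are assumptions for illustration; any function that maps an image array to a probability for the class of interest would do.

```python
import numpy as np

def significance_map(model, image, patch_size=16, fill_value=0.0):
    """Toy occlusion-based significance map.

    `model` is assumed to be a callable that takes an image array and
    returns the probability of the class we care about (e.g. "stop sign").
    We blank out one patch of pixels at a time and record how much the
    probability drops; large drops mark regions the classifier relied on.
    """
    height, width = image.shape[:2]
    baseline = model(image)
    scores = np.zeros((height, width))
    for y in range(0, height, patch_size):
        for x in range(0, width, patch_size):
            occluded = image.copy()
            occluded[y:y + patch_size, x:x + patch_size] = fill_value
            scores[y:y + patch_size, x:x + patch_size] = baseline - model(occluded)
    return scores  # high values = pixels that carried the most weight
```

Blanking out background patches barely moves the score, while blanking out the sign itself causes a large drop, and that difference is exactly what the significance map visualizes.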
Google’s new approach changes the way that its algorithm recognizes objects, by examining them at several different resolutions and searching for matches to different “sub-objects” within the main object. You or I might recognize an ambulance from its flashing lights, its tires, and its logo; we might zoom in on the basketball held by an NBA player to deduce their occupation, and so on. By linking the overall categorization of an image to these “concepts,” the algorithm can explain its decision: I categorized this as a cat because of its tail and whiskers.
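A toy sketch of what that kind of concept-based explanation could look like, with hypothetical sub-object detectors that each report a confidence score (this illustrates the general idea, not Google’s actual algorithm):

```python
# Hypothetical confidence scores from sub-object ("concept") detectors.
detected = {"whiskers": 0.92, "tail": 0.84, "fur": 0.77, "wheel": 0.02, "siren": 0.01}

# Which concepts count as evidence for which label (hand-written for this sketch).
CONCEPTS = {
    "cat": ["whiskers", "tail", "fur"],
    "ambulance": ["siren", "wheel", "logo"],
}

def classify_with_explanation(detected, concepts, threshold=0.5):
    """Pick the label with the strongest concept support and report the evidence."""
    scores = {label: sum(detected.get(c, 0.0) for c in cs) for label, cs in concepts.items()}
    label = max(scores, key=scores.get)
    evidence = [c for c in concepts[label] if detected.get(c, 0.0) >= threshold]
    return label, evidence

label, evidence = classify_with_explanation(detected, CONCEPTS)
print(f"I categorized this as a {label} because of its {' and '.join(evidence)}.")
# -> I categorized this as a cat because of its whiskers and tail and fur.
```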
Even in this experiment, though, the “psychology” of the algorithm’s decision-making is counterintuitive. For example, in the basketball case, the most important factor in making the decision was actually the players’ jerseys rather than the basketball.
Can You Explain What You Don’t Understand?
While it may seem trivial, the conflict here is a fundamental one in approaches to artificial intelligence. Namely, how far can you get with mere statistical associations between huge sets of data, and how much do you need to introduce abstract concepts for real intelligence to arise?
At one end of the spectrum, Good Old-Fashioned AI, or GOFAI, dreamed up machines that would be entirely based on symbolic logic. The machine would be hard-coded with the concept of a dog, a flower, a car, and so forth, alongside all of the symbolic “rules” we internalize that allow us to distinguish between dogs, flowers, and cars. (You can imagine that a similar approach to conversational AI would teach it words and strict grammatical structures from the top down, rather than “learning” language from statistical associations between letters and words in training data, as GPT-2 broadly does.)
Such a system would be able to explain itself, because it would deal in high-level, human-understandable concepts. The equation is closer to: “ball” + “stitches” + “white” = “baseball”, rather than a set of millions of numbers linking various pathways together. There are elements of GOFAI in Google’s new approach to explaining its image recognition: the new algorithm can recognize objects based on the sub-objects they contain. To do this, it requires at least a rudimentary understanding of what those sub-objects look like, and the rules that link objects to sub-objects, such as “cats have whiskers.”
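As a caricature, a GOFAI-style classifier might look like the sketch below. The rules and feature names are hand-written for the example, and the “explanation” is simply the rule that fired:

```python
# Hand-coded symbolic rules: each label is defined by the features it requires.
RULES = {
    "baseball": {"ball", "stitches", "white"},
    "basketball": {"ball", "orange"},
    "flower": {"petals", "stem"},
}

def classify(observed):
    """Return the matching label and the rule that produced it,
    so the explanation is a human-readable equation."""
    for label, required in RULES.items():
        if required <= observed:  # all required features were observed
            return label, " + ".join(sorted(required)) + " = " + label
    return None, "no rule matched"

print(classify({"ball", "white", "stitches", "leather"}))
# -> ('baseball', 'ball + stitches + white = baseball')
```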
The issue, of course, is the labor-intensive, perhaps impossible, task of defining all of these symbolic concepts, and every conceivable rule that could link them together, by hand. The difficulty of creating systems like this, capable of handling the “combinatorial explosion” present in reality, helped bring on the first AI winter.
Meanwhile, neural networks rely on training themselves on vast sets of data. Without the “labeling” of supervised learning, this process might bear no relation to any concepts a human could understand (and therefore be utterly inexplicable).
Somewhere between these two extremes, explainable AI enthusiasts hope, is a happy medium that can crunch colossal amounts of data, giving us all of the benefits that recent neural-network AI has bestowed, while showing its work in terms that humans can understand.
Image Credit: Image by Seanbatty from Pixabay Continue reading
#436174 How Selfish Are You? It Matters for ...
Our personalities impact almost everything we do, from the career path we choose to the way we interact with others to how we spend our free time.
But what about the way we drive—could personality be used to predict whether a driver will cut someone off, speed, or, say, zoom through a yellow light instead of braking?
There must be something to the idea that those of us who are more mild-mannered are likely to drive a little differently than the more assertive among us. At least, that’s what a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is betting on.
“Working with and around humans means figuring out their intentions to better understand their behavior,” said graduate student Wilko Schwarting, lead author on the paper published this week in Proceedings of the National Academy of Sciences. “People’s tendencies to be collaborative or competitive often spills over into how they behave as drivers. In this paper we sought to understand if this was something we could actually quantify.”
The team is building a model that classifies drivers according to how selfish or selfless they are, then uses that classification to help predict how drivers will behave on the road. Ideally, the system will help improve safety for self-driving cars by integrating a degree of ‘humanity’ into how their software perceives its surroundings; right now, to that software a human driver and their car are just another object, not much different from a tree or a sign.
But unlike trees and signs, humans have behavioral patterns and motivations. For greater success on roads that are still dominated by us mercurial humans, the CSAIL team believes, driverless cars should take our personalities into account.
How Selfish Are You?
How important is your own well-being to you versus the well-being of other people? It’s a hard question to answer without specifying who the other people are; your answer would likely differ depending on whether we’re talking about your friends, loved ones, strangers, or people you actively dislike.
In social psychology, social value orientation (SVO) refers to people’s preferences for allocating resources between themselves and others. The two broad categories people can fall into are pro-social (people who are more cooperative, and expect cooperation from others) and pro-self (pretty self-explanatory: “Me first!”).
Based on drivers’ behavior in two different road scenarios—merging and making a left turn—the CSAIL team’s model classified drivers as pro-social or egoistic. Slowing down to let someone merge into your lane in front of you would earn you a pro-social classification, while cutting someone off or not slowing down to allow a left turn would make you egoistic.
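A heavily simplified sketch of that classification step might look like this; the observation format and yield-rate threshold are invented for illustration and are not the paper’s actual estimator:

```python
def classify_driver(observations, threshold=0.5):
    """Label a driver pro-social or egoistic from yield/no-yield observations.

    `observations` is a list of dicts like {"scenario": "merge", "yielded": True};
    the format and the 50 percent threshold are assumptions made for this sketch.
    """
    if not observations:
        return "unknown"
    yield_rate = sum(o["yielded"] for o in observations) / len(observations)
    return "pro-social" if yield_rate >= threshold else "egoistic"

history = [
    {"scenario": "merge", "yielded": True},
    {"scenario": "left_turn", "yielded": False},
    {"scenario": "merge", "yielded": True},
]
print(classify_driver(history))  # -> "pro-social"
```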
On the Road
The system then uses these classifications to model and predict drivers’ behavior. The team demonstrated that, using their model, errors in predicting the behavior of other cars were reduced by 25 percent.
In a left-turn simulation, for example, their car would wait when an approaching car had an egoistic driver, but go ahead and make the turn when the other driver was prosocial. Similarly, if a self-driving car is trying to merge into the left lane and it’s identified the drivers in that lane as egoistic, it will assume they won’t slow down to let it in, and will wait to merge behind them. If, on the other hand, the self-driving car knows that the human drivers in the left lane are prosocial, it will attempt to merge between them since they’re likely to let it in.
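Reduced to a caricature, the planner’s use of those labels might look like the sketch below. The function names and interface are invented, and the CSAIL system’s actual decision-making is of course more sophisticated than a pair of if/else rules:

```python
def plan_left_turn(oncoming_driver):
    """Turn across traffic only if the oncoming driver is expected to yield."""
    return "make_turn" if oncoming_driver == "pro-social" else "wait"

def plan_merge(left_lane_drivers):
    """Merge into a gap only if the nearby drivers are expected to make room."""
    if left_lane_drivers and all(d == "pro-social" for d in left_lane_drivers):
        return "merge_into_gap"
    return "wait_and_merge_behind"

print(plan_left_turn("egoistic"))                # -> "wait"
print(plan_merge(["pro-social", "pro-social"]))  # -> "merge_into_gap"
```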
So how does this all translate to better safety?
It’s essentially a starting point for imbuing driverless cars with some of the abilities and instincts that are innate to humans. If you’re driving down the highway and you see a car swerving outside its lane, you’ll probably distance yourself from that car because you know it’s more likely to cause an accident. Our senses take in information we can immediately interpret and act on, and this includes predictions about what might happen based on observations of what just happened. Our observations can clue us in to a driver’s personality (the swerver must be careless) or simply to the circumstances of a given moment (the swerver was texting).
But right now, self-driving cars assume all human drivers behave the same way, and they have no mechanism for incorporating observations about behavioral differences between drivers into their decisions.
“Creating more human-like behavior in autonomous vehicles (AVs) is fundamental for the safety of passengers and surrounding vehicles, since behaving in a predictable manner enables humans to understand and appropriately respond to the AV’s actions,” said Schwarting.
Though it may feel a bit unsettling to think of an algorithm lumping you into a category and driving accordingly around you, maybe it’s less unsettling than thinking of self-driving cars as pre-programmed, oblivious robots unable to adapt to different driving styles.
The team’s next step is to apply their model to pedestrians, bikes, and other agents frequently found in driving environments. They also plan to look at other robotic systems that act among people, like household robots, and to integrate social value orientation into those systems’ algorithms.
Image Credit: Image by Free-Photos from Pixabay Continue reading