Tag Archives: assistant
#431159 How Close Is Turing’s Dream of ...
The quest for conversational artificial intelligence has been a long one.
When Alan Turing, the father of modern computing, racked his considerable brains for a test that would truly indicate that a computer program was intelligent, he landed on conversation. If a computer could convince a panel of human judges that they were talking to a human—if it could hold a convincing conversation—then it would indicate that artificial intelligence had advanced to the point where it was indistinguishable from human intelligence.
This gauntlet was thrown down in 1950 and, so far, no computer program has managed to pass the Turing test.
There have been some very notable failures, however: as early as 1966—when computers were still programmed with large punch cards—Joseph Weizenbaum developed a piece of natural language processing software called ELIZA. ELIZA was a program designed to respond to human conversation by pretending to be a psychotherapist; you can still talk to her today.
Talking to ELIZA is a little strange. She’ll often rephrase things you’ve said back at you: so, for example, if you say “I’m feeling depressed,” she might say “Did you come to me because you are feeling depressed?” When she’s unsure about what you’ve said, ELIZA will usually respond with “I see,” or perhaps “Tell me more.”
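Under the hood, ELIZA’s trick is little more than keyword pattern matching plus pronoun “reflection.” A minimal sketch of the idea in Python follows; the patterns and canned replies here are simplified stand-ins, not Weizenbaum’s original script.

```python
import random
import re

# Map first-person words to second-person so statements can be mirrored back.
REFLECTIONS = {"i": "you", "i'm": "you are", "my": "your", "am": "are", "me": "you"}

# Each pattern captures a fragment that gets woven into a canned response.
# The final catch-all pattern produces ELIZA's famous non-committal replies.
PATTERNS = [
    (r"i'?m (?:feeling )?(.*)", ["Did you come to me because you are {0}?"]),
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"(.*)", ["I see.", "Tell me more."]),
]

def reflect(fragment):
    """Swap pronouns so 'my job' becomes 'your job'."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement):
    text = statement.lower().strip(".!?")
    for pattern, responses in PATTERNS:
        match = re.match(pattern, text)
        if match:
            return random.choice(responses).format(*(reflect(g) for g in match.groups()))

print(respond("I'm feeling depressed"))
# -> Did you come to me because you are depressed?
```

There is no model of the conversation here at all, just surface-level substitution, which is exactly what made the strength of people’s reactions so surprising.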
For the first few lines of dialogue, especially if you treat her as your therapist, ELIZA can be convincingly human. This was something Weizenbaum noticed and was slightly alarmed by: people were willing to treat the algorithm as more human than it really was. Before long, even though some of the test subjects knew ELIZA was just a machine, they were opening up with some of their deepest feelings and secrets. They were pouring out their hearts to a machine. When Weizenbaum’s secretary spoke to ELIZA, even though she knew it was a fairly simple computer program, she still insisted Weizenbaum leave the room.
Part of the unexpected reaction ELIZA generated may be because people are more willing to open up to a machine, feeling they won’t be judged, even if the machine is ultimately powerless to do or say anything to really help. The ELIZA effect was named for this computer program: the tendency of humans to anthropomorphize machines, or think of them as human.
Weizenbaum himself, who later became deeply suspicious of the influence of computers and artificial intelligence in human life, was astonished that people were so willing to believe his script was human. He wrote, “I had not realized…that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
The ELIZA effect may have disturbed Weizenbaum, but it has intrigued and fascinated others for decades. Perhaps you’ve noticed it in yourself, when talking to an AI like Siri, Alexa, or Google Assistant—the occasional response can seem almost too real. Consciously, you know you’re talking to a big block of code stored somewhere out there in the ether. But subconsciously, you might feel like you’re interacting with a human.
Yet the ELIZA effect, as enticing as it is, has proved a source of frustration for people trying to create conversational machines. Natural language processing has advanced in leaps and bounds since the 1960s. Now you can find friendly chatbots like Mitsuku—which has frequently won the Loebner Prize, awarded to the machine that comes closest to passing the Turing test—that aim to have a response to everything you might say.
In the commercial sphere, Facebook has opened up its Messenger program and provided software for people and companies to design their own chatbots. The idea is simple: why have an app for, say, ordering pizza when you can just chatter to a robot through your favorite messenger app and make the order in natural language, as if you were telling your friend to get it for you?
Startups like Semantic Machines hope their AI assistant will be able to interact with you just like a secretary or PA would, but with an unparalleled ability to retrieve information from the internet. They may soon be there.
But people who engineer chatbots—both in the social and commercial realm—encounter a common problem: the users, perhaps subconsciously, assume the chatbots are human and become disappointed when they’re not able to have a normal conversation. Frustration with miscommunication can often stem from raised initial expectations.
So far, no machine has really been able to crack the problem of context retention—understanding what’s been said before, referring back to it, and crafting responses based on the point the conversation has reached. Even Mitsuku will often struggle to remember the topic of conversation beyond a few lines of dialogue.
This is, of course, understandable. Conversation can be almost unimaginably complex. For everything you say, there could be hundreds of responses that would make sense. When you travel a layer deeper into the conversation, those factors multiply until—like possible games of Go or chess—you end up with vast numbers of potential conversations.
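To see how quickly the numbers blow up, here is a back-of-the-envelope calculation in Python, assuming (conservatively) about 100 sensible replies at each turn:

```python
# Rough branching-factor arithmetic: if every utterance admits ~100 sensible
# replies, the number of distinct conversations grows as 100^turns.
branching_factor = 100

for turns in (2, 4, 6, 8):
    print(f"{turns} turns -> {branching_factor ** turns:,} possible conversations")

# 2 turns -> 10,000 possible conversations
# 4 turns -> 100,000,000 possible conversations
# 6 turns -> 1,000,000,000,000 possible conversations
# 8 turns -> 10,000,000,000,000,000 possible conversations
```

At that rate, scripting every path by hand is hopeless, which is part of why narrowing the range of topics, as the Alexa Prize does, makes the problem more tractable.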
But that hasn’t deterred people from trying. Most recently, tech giant Amazon has taken up the challenge in an effort to make its AI voice assistant, Alexa, friendlier. The company has been running the Alexa Prize competition, which offers a cool $500,000 to the winning AI—and a bonus of a million dollars to any team that can create a ‘socialbot’ capable of sustaining a conversation with human users for 20 minutes on a variety of themes.
Topics Alexa likes to chat about include science and technology, politics, sports, and celebrity gossip. The finalists were recently announced: chatbots from universities in Prague, Edinburgh, and Seattle. Finalists were chosen according to the ratings from Alexa users, who could trigger the socialbots into conversation by saying “Hey Alexa, let’s chat,” although the reviews for the socialbots weren’t always complimentary.
By narrowing down the fields of conversation to a specific range of topics, the Alexa Prize has cleverly started to get around the problem of context—just as commercially available chatbots hope to do. It’s much easier to model an interaction that goes a few layers into the conversational topic if you’re limiting those topics to a specific field.
Developing a machine that can convincingly hold almost any conversation with a human interlocutor might be difficult. It might even be a problem that requires artificial general intelligence to truly solve, rather than the previously employed approaches of scripted answers or neural networks that associate inputs with responses.
But a machine that can have meaningful interactions that people might value and enjoy could be just around the corner. The Alexa Prize winner is announced in November. The ELIZA effect might mean we will relate to machines sooner than we’d thought.
So, go well, little socialbots. If you ever want to discuss the weather or what the world will be like once you guys take over, I’ll be around. Just don’t start a therapy session.
Image Credit: Shutterstock
#431158 This AI Assistant Helps Demystify ...
In an interview at Singularity University’s Global Summit in San Francisco, Anita Schjøll Brede talked about how artificial intelligence can help make scientific research accessible to anyone working on a complex problem.
Anita Schjøll Brede is the CEO and co-founder of Iris AI, a startup that’s building an artificially intelligent research assistant, which was recently named one of the most innovative AI startups of 2017 by Fast Company. Schjøll Brede is also faculty at Singularity University Denmark and a 2015 alumna of the Global Solutions Program.
“Ultimately, we’re building an AI that can read, understand, and connect the dots,” Schjøll Brede said. “But zooming that back into today, we’re building a tool for R&D, research institutions, and entrepreneurs who have big hairy problems to solve and need to apply research and science to solve them. We’re semi-automating the process of mapping out what you should read to solve the problem or to see what research you need to do to solve the problem.”
Watch the interview for more on Iris AI’s technology and to hear Schjøll Brede’s take on whether AI researchers share a moral responsibility for the systems they build.
Image Credit: foxaon1987 / Shutterstock.com
#430855 Why Education Is the Hardest Sector of ...
We’ve all heard the warning cries: automation will disrupt entire industries and put millions of people out of jobs. In fact, up to 45 percent of existing jobs can be automated using current technology.
However, this may not necessarily apply to the education sector. After a detailed analysis of more than 2,000 work activities across more than 800 occupations, a report by McKinsey & Co states that of all the sectors examined, “…the technical feasibility of automation is lowest in education.”
There is no doubt that technological trends will have a powerful impact on global education, both by improving the overall learning experience and by increasing global access to education. Massive open online courses (MOOCs), chatbot tutors, and AI-powered lesson plans are just a few examples of the digital transformation in global education. But will robots and artificial intelligence ever fully replace teachers?
The Most Difficult Sector to Automate
While various tasks revolving around education—like administrative tasks or facilities maintenance—are open to automation, teaching itself is not.
Effective education involves more than just transfer of information from a teacher to a student. Good teaching requires complex social interactions and adaptation to the individual student’s learning needs. An effective teacher is not just responsive to each student’s strengths and weaknesses, but is also empathetic towards the student’s state of mind. It’s about maximizing human potential.
Furthermore, students don’t just rely on effective teachers to teach them the course material; they also look to them for life guidance and career mentorship. Deep and meaningful human interaction is crucial, and it is something that is very difficult, if not impossible, to automate.
Automating teaching is an example of a task that would require artificial general intelligence (as opposed to narrow or specific intelligence). In other words, it’s the kind of task that would require an AI that understands natural human language, empathizes with emotions, and can plan, strategize, and make impactful decisions under unpredictable circumstances.
This would be the kind of machine that can do anything a human can do, and it doesn’t exist—at least, not yet.
We’re Getting There
Let’s not forget how quickly AI is evolving. Just because it’s difficult to fully automate teaching doesn’t mean the world’s leading AI experts aren’t trying.
Meet Jill Watson, the teaching assistant from Georgia Institute of Technology. Watson isn’t your average TA. She’s an IBM-powered artificial intelligence that is being implemented in universities around the world. Watson is able to answer students’ questions with 97 percent certainty.
Technologies like this also have applications in grading and providing feedback. Some AI algorithms are being trained and refined to perform automatic essay scoring. One project has achieved a 0.945 correlation with human graders.
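That 0.945 figure is a correlation coefficient between machine-assigned and human-assigned scores. As a rough sketch of what such an agreement measure looks like (the scores below are invented for illustration and are not the project’s data):

```python
from statistics import correlation  # Pearson correlation, Python 3.10+

# Hypothetical essay scores on a 1-6 scale: one list from human graders,
# one from an automated scoring model.
human_scores = [4, 5, 2, 6, 3, 4, 5, 1, 3, 6]
machine_scores = [4, 5, 3, 6, 3, 4, 4, 1, 2, 6]

r = correlation(human_scores, machine_scores)
print(f"Pearson correlation with human graders: {r:.3f}")
```

A value near 1 means the automated scores track the human graders’ scores closely across essays.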
All of this will have a remarkable impact on online education as we know it and dramatically increase online student retention rates.
Any student with a smartphone can access a wealth of information and free courses from universities around the world. MOOCs have allowed valuable courses to become available to millions of students. But at the moment, not all participants can receive customized feedback for their work. Currently, this is limited by manpower, but in the future that may not be the case.
What chatbots like Jill Watson allow is the opportunity for hundreds of thousands, if not millions, of students to have their work reviewed and all their questions answered at a minimal cost.
AI algorithms also have a significant role to play in personalization of education. Every student is unique and has a different set of strengths and weaknesses. Data analysis can be used to improve individual student results, assess each student’s strengths and weaknesses, and create mass-customized programs. Algorithms can analyze student data and consequently make flexible programs that adapt to the learner based on real-time feedback. According to the McKinsey Global Institute, all of this data in education could unlock between $900 billion and $1.2 trillion in global economic value.
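As a toy sketch of what “adapting to the learner based on real-time feedback” can mean in code, here is a minimal loop; the skill names, mastery estimates, and update rule are illustrative placeholders, not any particular product’s method.

```python
from collections import defaultdict

# Track a per-skill mastery estimate and always serve the weakest skill next.
mastery = defaultdict(lambda: 0.5)  # start every skill at a 50% mastery estimate
LEARNING_RATE = 0.2

def record_result(skill, correct):
    """Nudge the mastery estimate toward 1.0 on success, 0.0 on failure."""
    target = 1.0 if correct else 0.0
    mastery[skill] += LEARNING_RATE * (target - mastery[skill])

def next_exercise(skills):
    """Pick the skill the student currently appears weakest in."""
    return min(skills, key=lambda s: mastery[s])

skills = ["fractions", "decimals", "percentages"]
record_result("fractions", correct=True)
record_result("decimals", correct=False)
print(next_exercise(skills))  # -> decimals (lowest mastery estimate)
```

Real systems model each learner far more richly, but the loop is the same: observe a result, update the estimate, and choose the next activity accordingly.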
Beyond Automated Teaching
It’s important to recognize that technological automation alone won’t fix the many issues in our global education system today. With a system dominated by outdated curricula, standardized tests, and an emphasis on short-term knowledge, many experts are calling for a transformation of how we teach.
It is not enough to simply automate the process. We can have a completely digital learning experience that continues to focus on outdated skills and fails to prepare students for the future. In other words, we must not only be innovative with our automation capabilities, but also with educational content, strategy, and policies.
Are we equipping students with the most important survival skills? Are we inspiring young minds to create a better future? Are we meeting the unique learning needs of each and every student? There’s no point automating and digitizing a system that is already flawed. We need to ensure the system that is being digitized is itself being transformed for the better.
Stock Media provided by davincidig / Pond5
#430761 How Robots Are Getting Better at Making ...
The multiverse of science fiction is populated by robots that are indistinguishable from humans. They are usually smarter, faster, and stronger than us. They seem capable of doing any job imaginable, from piloting a starship and battling alien invaders to taking out the trash and cooking a gourmet meal.
The reality, of course, is far from fantasy. Outside of industrial settings, robots have yet to live up to The Jetsons. The robots the public is exposed to seem little more than oversized plastic toys, pre-programmed to perform a set of tasks without the ability to interact meaningfully with their environment or their creators.
To paraphrase PayPal co-founder and tech entrepreneur Peter Thiel: we wanted cool robots; instead, we got 140 characters and Flippy the burger bot. But scientists are making progress in empowering robots to see and respond to their surroundings just like humans.
Some of the latest developments in that arena were presented this month at the annual Robotics: Science and Systems Conference in Cambridge, Massachusetts. The papers drilled down into topics that ranged from how to make robots more conversational and help them understand language ambiguities to helping them see and navigate through complex spaces.
Improved Vision
Ben Burchfiel, a graduate student at Duke University, and his thesis advisor George Konidaris, an assistant professor of computer science at Brown University, developed an algorithm to enable machines to see the world more like humans.
In the paper, Burchfiel and Konidaris demonstrate how they can teach robots to identify and possibly manipulate three-dimensional objects even when they might be obscured or sitting in unfamiliar positions, such as a teapot that has been tipped over.
The researchers trained their algorithm by feeding it 3D scans of about 4,000 common household items such as beds, chairs, tables, and even toilets. They then tested its ability to identify about 900 new 3D objects just from a bird’s eye view. The algorithm made the right guess 75 percent of the time versus a success rate of about 50 percent for other computer vision techniques.
In an email interview with Singularity Hub, Burchfiel notes his research is not the first to train machines on 3D object classification. How their approach differs is that they confine the space in which the robot learns to classify the objects.
“Imagine the space of all possible objects,” Burchfiel explains. “That is to say, imagine you had tiny Legos, and I told you [that] you could stick them together any way you wanted, just build me an object. You have a huge number of objects you could make!”
The infinite possibilities could result in an object no human or machine might recognize.
To address that problem, the researchers had their algorithm find a more restricted space that would host the objects it wants to classify. “By working in this restricted space—mathematically we call it a subspace—we greatly simplify our task of classification. It is the finding of this space that sets us apart from previous approaches.”
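As a rough illustration of that restricted-subspace idea (not the authors’ actual algorithm), one could project flattened voxel grids onto a low-dimensional space learned from training objects and classify by nearest neighbor there; the sizes below are scaled-down stand-ins for the thousands of 3D scans used in the real work.

```python
import numpy as np

rng = np.random.default_rng(0)
train_voxels = rng.random((500, 10 * 10 * 10))  # stand-in for voxelized 3D scans
train_labels = rng.integers(0, 10, size=500)    # stand-in object categories

# Learn the subspace: keep the top principal directions of the centered data.
mean = train_voxels.mean(axis=0)
_, _, components = np.linalg.svd(train_voxels - mean, full_matrices=False)
subspace = components[:20]                      # a 20-dimensional subspace

def project(voxels):
    """Map a flattened voxel grid into the learned subspace."""
    return (voxels - mean) @ subspace.T

train_projected = project(train_voxels)

def classify(voxels):
    """Return the label of the nearest training object in the subspace."""
    distances = np.linalg.norm(train_projected - project(voxels), axis=1)
    return train_labels[np.argmin(distances)]

print(classify(rng.random(10 * 10 * 10)))
```

Working in a learned subspace like this keeps classification tractable; the unrestricted “space of all possible objects” is simply too large to search directly.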
Following Directions
Meanwhile, a pair of undergraduate students at Brown University figured out a way to teach robots to understand directions better, even at varying degrees of abstraction.
The research, led by Dilip Arumugam and Siddharth Karamcheti, addressed how to train a robot to understand nuances of natural language and then follow instructions correctly and efficiently.
“The problem is that commands can have different levels of abstraction, and that can cause a robot to plan its actions inefficiently or fail to complete the task at all,” says Arumugam in a press release.
In this project, the young researchers crowdsourced instructions for moving a virtual robot through an online domain. The space consisted of several rooms and a chair, which the robot was told to move from one place to another. The volunteers gave various commands to the robot, ranging from general (“take the chair to the blue room”) to step-by-step instructions.
The researchers then used the database of spoken instructions to teach their system to understand the kinds of words used at different levels of language. The machine learned not only to follow instructions but also to recognize their level of abstraction. That was key to kickstarting its problem-solving abilities so it could tackle each job in the most appropriate way.
The research eventually moved from virtual pixels to a real place, using a Roomba-like robot that was able to respond to instructions within one second 90 percent of the time. By contrast, when it was unable to identify the specificity of a task, the robot took 20 or more seconds to plan about 50 percent of the time.
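As a toy sketch of the general idea of recognizing a command’s level of abstraction before planning (the keyword heuristic and canned plan below stand in for the learned language grounding and planner used in the actual research):

```python
# Guess whether a command is a goal-level request or a step-by-step
# instruction, then plan at the matching granularity.
LOW_LEVEL_CUES = ("step", "forward", "turn left", "turn right", "north", "south")

def abstraction_level(command):
    """Crudely classify a command as fine-grained ('low') or goal-level ('high')."""
    text = command.lower()
    return "low" if any(cue in text for cue in LOW_LEVEL_CUES) else "high"

def plan(command):
    if abstraction_level(command) == "high":
        # A goal-level command gets expanded into a sequence of sub-tasks.
        return ["navigate to chair", "grasp chair", "navigate to blue room", "release chair"]
    # A step-by-step command maps (roughly) one-to-one onto an action.
    return [command]

print(plan("Take the chair to the blue room"))
print(plan("Go north two steps, then turn left"))
```

Detecting the level first is what lets the planner avoid searching at the wrong granularity, which is where the inefficiency the researchers describe comes from.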
One application of this new machine-learning technique referenced in the paper is a robot worker in a warehouse setting, but there are many fields that could benefit from a more versatile machine capable of moving seamlessly between small-scale operations and generalized tasks.
“Other areas that could possibly benefit from such a system include things from autonomous vehicles… to assistive robotics, all the way to medical robotics,” says Karamcheti, responding to a question by email from Singularity Hub.
More to Come
These achievements are yet another step toward creating robots that see, listen, and act more like humans. But don’t expect Disney to build a real-life Westworld next to Toon Town anytime soon.
“I think we’re a long way off from human-level communication,” Karamcheti says. “There are so many problems preventing our learning models from getting to that point, from seemingly simple questions like how to deal with words never seen before, to harder, more complicated questions like how to resolve the ambiguities inherent in language, including idiomatic or metaphorical speech.”
Even relatively verbose chatbots can run out of things to say, Karamcheti notes, as the conversation becomes more complex.
The same goes for human vision, according to Burchfiel.
While deep learning techniques have dramatically improved pattern matching—Google can find just about any picture of a cat—there’s more to human eyesight than, well, meets the eye.
“There are two big areas where I think perception has a long way to go: inductive bias and formal reasoning,” Burchfiel says.
The former is essentially all of the contextual knowledge people use to help them reason, he explains. Burchfiel uses the example of a puddle in the street. People are conditioned or biased to assume it’s a puddle of water rather than a patch of glass, for instance.
“This sort of bias is why we see faces in clouds; we have strong inductive bias helping us identify faces,” he says. “While it sounds simple at first, it powers much of what we do. Humans have a very intuitive understanding of what they expect to see, [and] it makes perception much easier.”
Formal reasoning is equally important. A machine can use deep learning, in Burchfiel’s example, to figure out the direction any river flows once it understands that water runs downhill. But it’s not yet capable of applying the sort of human reasoning that would allow us to transfer that knowledge to an alien setting, such as figuring out how water moves through a plumbing system on Mars.
“Much work was done in decades past on this sort of formal reasoning… but we have yet to figure out how to merge it with standard machine-learning methods to create a seamless system that is useful in the actual physical world.”
Robots still have a lot to learn about being human, which should make us feel good that we’re still by far the most complex machines on the planet.
Image Credit: Alex Knight via Unsplash