Tag Archives: conversations

#431371 Amazon Is Quietly Building the Robots of ...

Science fiction is the siren song of hard science. How many innocent young students have been lured into complex, abstract science, technology, engineering, or mathematics because of a reckless and irresponsible exposure to Arthur C. Clarke at a tender age? Yet Clarke also left us a very famous maxim: “Any sufficiently advanced technology is indistinguishable from magic.”
It’s the prospect of making that… ahem… magic leap that entices so many people into STEM in the first place. A magic leap that would change the world. How about, for example, having humanoid robots? They could match us in dexterity and speed, perceive the world around them as we do, and be programmed to do, well, more or less anything we can do.
Such a technology would change the world forever.
But how will it arrive? True sci-fi robots won’t get here right away, but the pieces are coming together, and the company best placed to assemble them at the moment is Amazon. Where others have struggled to succeed, Amazon has been quietly progressing. Notably, Amazon has more than just a dream; it has the most practical of reasons driving it into robotics.
This practicality matters. Technological development rarely proceeds by magic; it’s a process filled with twists, turns, dead-ends, and financial constraints. New technologies often have to answer questions like “What is this good for, are you being realistic?” A good strategy, then, can be to build something more limited than your initial ambition, but useful for a niche market. That way, you can produce a prototype, have a reasonable business plan, and turn a profit within a decade. You might call these “stepping stone” applications that allow for new technologies to be developed in an economically viable way.
You need something you can sell to someone, soon: that’s how you get investment in your idea. It’s this model that iRobot, developers of the Roomba, used: migrating from military prototypes to robotic vacuum cleaners to become the “boring, successful robot company.” Compare this to Willow Garage, a genius factory if ever there was one: they clearly had ambitions towards a general-purpose, multi-functional robot. They built an impressive device—PR2—and developed the Robot Operating System (ROS), the software framework that is still the industry and academic standard to this day.
But since they were unable to sell their robot for much less than $250,000, it was never likely to be a profitable business. This is why Willow Garage is no more, and many workers at the company went into telepresence robotics. Telepresence is essentially videoconferencing with a fancy robot attached to move the camera around. It uses some of the same software (for example, navigation and mapping) without requiring you to solve difficult problems of full autonomy for the robot, or manipulating its environment. It’s certainly one of the stepping-stone areas that various companies are investigating.
Another approach is to go to the people with very high research budgets: the military.
This was the Boston Dynamics approach, and their incredible achievements in bipedal locomotion saw them getting snapped up by Google. There was a great deal of excitement and speculation about Google’s “nightmare factory” whenever a new slick video of a futuristic militarized robot surfaced. But Google broadly backed away from Replicant, their robotics program, and Boston Dynamics was sold. This was partly due to PR concerns over the Terminator-esque designs, but partly because they didn’t see the robotics division turning a profit. They hadn’t found their stepping stones.
This is where Amazon comes in. Why Amazon? First off, the company just announced that its profits are up by 30 percent, and it is well known for its constantly-moving “Day One” philosophy, under which a great deal of those profits is reinvested back into the business. But lots of companies have ambition.
Beyond deep financial resources, Amazon has something few other corporations do: viable stepping stones for developing the technologies needed to make this sort of robotics a reality. It already employs 100,000 robots: these are of the “pragmatic, boring, useful” kind that we’ve profiled, which move shelves around its warehouses. These robots are allowing Amazon to develop localization and mapping software for machines that can autonomously navigate the comparatively simple warehouse environment.
But their ambitions don’t end there. The Amazon Robotics Challenge is a multi-million dollar competition, open to university teams, to produce a robot that can pick and package items in warehouses. The problem of grasping and manipulating a range of objects is not a solved one in robotics, so this work is still done by humans—yet it’s absolutely fundamental for any sci-fi dream robot.
Google, for example, attempted to solve this problem by hooking up 14 robotic arms to machine learning algorithms and having them grasp thousands of objects. Although results were promising, a 10 to 20 percent failure rate for grasps is still too high for warehouse use. This is a perfect stepping stone for Amazon; should they crack the problem, they will likely save millions in logistics.
Another area where humanoid robotics, and bipedal locomotion (walking) in particular, has been seriously suggested is the last-mile delivery problem. Amazon has shown willingness to be creative in this department with its notorious drone delivery service. It’s all very well to have a self-driving car or van deliver packages to people’s streets, but who puts the package on the doorstep? It’s difficult for wheeled robots to navigate the full range of built environments that exist. That’s why bipedal robots like Cassie, developed at Oregon State, may one day be used to deliver parcels.
Again, no one stands to profit more from cracking this technology than Amazon. The line from robotics research to profit is very clear.
So, perhaps one day Amazon will have robots that can move around and manipulate their environments. But they’re also working on intelligence that will guide those robots and make them truly useful for a variety of tasks. Amazon has an AI, or at least the framework for an AI: it’s called Alexa, and it’s in tens of millions of homes. The Alexa Prize, another multi-million-dollar competition, is attempting to make Alexa more social.
To develop a conversational AI, at least using the current methods of machine learning, you need data on tens of millions of conversations. You need to understand how people will try to interact with the AI. Amazon has access to this in Alexa, and they’re using it. As owners of the leading voice-activated personal assistant, they have an ecosystem of developers creating apps for Alexa. It will be integrated with the smart home and the Internet of Things. It is a very marketable product, a stepping stone for robot intelligence.
What’s more, the company can benefit from its huge sales infrastructure. For Amazon, having an AI in your home is ideal, because it can persuade you to buy more products through its website. Unlike Google, Amazon has an easy way to make a direct profit from IoT devices, which could help fund further development.
For a humanoid robot to be truly useful, though, it will need vision and intelligence. It will have to understand and interpret its environment, and react accordingly. The way humans learn about our environment is by getting out and seeing it. This is something that, for example, an Alexa coupled to smart glasses would be very capable of doing. There are rumors that Alexa’s AI will soon be used in security cameras, which is an ideal stepping stone task to train an AI to process images from its environment, truly perceiving the world and any threats it might contain.
It’s a slight exaggeration to say that Amazon is in the process of building a secret robot army. The gulf between today’s machines and our sci-fi vision of robots that can intelligently serve us, rather than mindlessly assemble cars, is still vast. But in quietly assembling many of the technologies needed for intelligent, multi-purpose robotics—and with the unique stepping stones they have along the way—Amazon might just be poised to leap that gulf. As if by magic.
Image Credit: Denis Starostin / Shutterstock.com


#431159 How Close Is Turing’s Dream of ...

The quest for conversational artificial intelligence has been a long one.
When Alan Turing, the father of modern computing, racked his considerable brains for a test that would truly indicate that a computer program was intelligent, he landed on this area. If a computer could convince a panel of human judges that they were talking to a human—if it could hold a convincing conversation—then it would indicate that artificial intelligence had advanced to the point where it was indistinguishable from human intelligence.
This gauntlet was thrown down in 1950 and, so far, no computer program has managed to pass the Turing test.
There have been some very notable failures, however: Joseph Weizenbaum, as early as 1966—when computers were still programmed with large punch-cards—developed a piece of natural language processing software called ELIZA. ELIZA was a machine intended to respond to human conversation by pretending to be a psychotherapist; you can still talk to her today.
Talking to ELIZA is a little strange. She’ll often rephrase things you’ve said back at you: so, for example, if you say “I’m feeling depressed,” she might say “Did you come to me because you are feeling depressed?” When she’s unsure about what you’ve said, ELIZA will usually respond with “I see,” or perhaps “Tell me more.”
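To make the trick concrete, here is a minimal, hypothetical sketch of ELIZA-style rephrasing in Python. The rules and wording are illustrative inventions for this article, not Weizenbaum's original script; the real ELIZA used a much richer set of pattern-and-reassembly rules, but the principle is the same.

```python
import re

# Illustrative ELIZA-style rephrasing (not Weizenbaum's original script).
# Each rule pairs a pattern with a response template; the captured fragment
# is echoed back with pronouns "reflected" from first to second person.
REFLECTIONS = {"i": "you", "i'm": "you are", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i'?m (?:feeling )?(.*)", re.I), "Did you come to me because you are {0}?"),
    (re.compile(r"i (?:want|need) (.*)", re.I), "Why do you want {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones in the echoed fragment."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Tell me more."  # fallback when no rule matches

print(respond("I'm feeling depressed"))  # Did you come to me because you are depressed?
print(respond("The weather is nice"))    # Tell me more.
```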
For the first few lines of dialogue, especially if you treat her as your therapist, ELIZA can be convincingly human. This was something Weizenbaum noticed and was slightly alarmed by: people were willing to treat the algorithm as more human than it really was. Before long, even though some of the test subjects knew ELIZA was just a machine, they were opening up with some of their deepest feelings and secrets. They were pouring out their hearts to a machine. When Weizenbaum’s secretary spoke to ELIZA, even though she knew it was a fairly simple computer program, she still insisted Weizenbaum leave the room.
Part of the unexpected reaction ELIZA generated may be because people are more willing to open up to a machine, feeling they won’t be judged, even if the machine is ultimately powerless to do or say anything to really help. The ELIZA effect was named for this computer program: the tendency of humans to anthropomorphize machines, or think of them as human.

Weizenbaum himself, who later became deeply suspicious of the influence of computers and artificial intelligence in human life, was astonished that people were so willing to believe his script was human. He wrote, “I had not realized…that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

The ELIZA effect may have disturbed Weizenbaum, but it has intrigued and fascinated others for decades. Perhaps you’ve noticed it in yourself, when talking to an AI like Siri, Alexa, or Google Assistant—the occasional response can seem almost too real. Consciously, you know you’re talking to a big block of code stored somewhere out there in the ether. But subconsciously, you might feel like you’re interacting with a human.
Yet the ELIZA effect, as enticing as it is, has proved a source of frustration for people who are trying to create conversational machines. Natural language processing has proceeded in leaps and bounds since the 1960s. Now you can find friendly chatbots like Mitsuku—which has frequently won the Loebner Prize, awarded to the machines that come closest to passing the Turing test—that aim to have a response to everything you might say.
In the commercial sphere, Facebook has opened up its Messenger program and provided software for people and companies to design their own chatbots. The idea is simple: why have an app for, say, ordering pizza when you can just chatter to a robot through your favorite messenger app and make the order in natural language, as if you were telling your friend to get it for you?
Startups like Semantic Machines hope their AI assistant will be able to interact with you just like a secretary or PA would, but with an unparalleled ability to retrieve information from the internet. They may soon be there.
But people who engineer chatbots—both in the social and commercial realm—encounter a common problem: the users, perhaps subconsciously, assume the chatbots are human and become disappointed when they’re not able to have a normal conversation. Frustration with miscommunication can often stem from raised initial expectations.
So far, no machine has really been able to crack the problem of context retention—understanding what’s been said before, referring back to it, and crafting responses based on the point the conversation has reached. Even Mitsuku will often struggle to remember the topic of conversation beyond a few lines of dialogue.

This is, of course, understandable. Conversation can be almost unimaginably complex. For everything you say, there could be hundreds of responses that would make sense. When you travel a layer deeper into the conversation, those factors multiply until—like possible games of Go or chess—you end up with vast numbers of potential conversations.
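As a rough back-of-the-envelope illustration (the numbers are assumptions, not figures from any study): with roughly b plausible replies per utterance and a conversation d turns deep, the number of distinct paths grows exponentially.

```latex
\text{possible conversations} \approx b^{d},
\qquad \text{e.g. } b = 100,\; d = 5 \;\Rightarrow\; 100^{5} = 10^{10}.
```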
But that hasn’t deterred people from trying. Most recently, tech giant Amazon has joined in, in an effort to make its AI voice assistant, Alexa, friendlier. The company has been running the Alexa Prize competition, which offers a cool $500,000 to the winning AI, plus a bonus of a million dollars to any team that can create a ‘socialbot’ capable of sustaining a conversation with human users for 20 minutes on a variety of themes.
Topics Alexa likes to chat about include science and technology, politics, sports, and celebrity gossip. The finalists were recently announced: chatbots from universities in Prague, Edinburgh, and Seattle. Finalists were chosen according to the ratings from Alexa users, who could trigger the socialbots into conversation by saying “Hey Alexa, let’s chat,” although the reviews for the socialbots weren’t always complimentary.
By narrowing down the fields of conversation to a specific range of topics, the Alexa Prize has cleverly started to get around the problem of context—just as commercially available chatbots hope to do. It’s much easier to model an interaction that goes a few layers into the conversational topic if you’re limiting those topics to a specific field.
Developing a machine that can hold almost any conversation with a human interlocutor convincingly might be difficult. It might even be a problem that requires artificial general intelligence to truly solve, rather than the previously-employed approaches of scripted answers or neural networks that associate inputs with responses.
But a machine that can have meaningful interactions that people might value and enjoy could be just around the corner. The Alexa Prize winner is announced in November. The ELIZA effect might mean we will relate to machines sooner than we’d thought.
So, go well, little socialbots. If you ever want to discuss the weather or what the world will be like once you guys take over, I’ll be around. Just don’t start a therapy session.
Image Credit: Shutterstock


#431142 Will Privacy Survive the Future?

Technological progress has radically transformed our concept of privacy. How we share information and display our identities has changed as we’ve migrated to the digital world.
As the Guardian states, “We now carry with us everywhere devices that give us access to all the world’s information, but they can also offer almost all the world vast quantities of information about us.” We are all leaving digital footprints as we navigate through the internet. While sometimes this information can be harmless, it’s often valuable to various stakeholders, including governments, corporations, marketers, and criminals.
The ethical debate around privacy is complex. The reality is that our definition and standards for privacy have evolved over time, and will continue to do so in the next few decades.
Implications of Emerging Technologies
Protecting privacy will only become more challenging as we experience the emergence of technologies such as virtual reality, the Internet of Things, brain-machine interfaces, and much more.
Virtual reality headsets are already gathering information about users’ locations and physical movements. In the future, all of our emotional experiences, reactions, and interactions in the virtual world could be captured and analyzed. As virtual reality becomes more immersive and indistinguishable from physical reality, technology companies will be able to gather an unprecedented amount of data.
It doesn’t end there. The Internet of Things will be able to gather live data from our homes, cities and institutions. Drones may be able to spy on us as we live our everyday lives. As the amount of genetic data gathered increases, the privacy of our genes, too, may be compromised.
It gets even more concerning when we look farther into the future. As companies like Neuralink attempt to merge the human brain with machines, we are left with powerful implications for privacy. Brain-machine interfaces by nature operate by extracting information from the brain and manipulating it in order to accomplish goals. There are many parties that can benefit and take advantage of the information from the interface.
Marketing companies, for instance, would take an interest in better understanding how consumers think, and perhaps even in modifying those thoughts. Employers could use the information to find new ways to improve productivity or even monitor their employees. There will also be risks of “brain hacking,” against which we must take extreme precautions. However, it is important to note that lesser versions of these risks already exist, e.g., in phone hacking, identity fraud, and the like.
A New Much-Needed Definition of Privacy
In many ways we are already cyborgs interfacing with technology. According to theories like the extended mind hypothesis, our technological devices are an extension of our identities. We use our phones to store memories, retrieve information, and communicate. We use powerful tools like the Hubble Telescope to extend our sense of sight. In parallel, one can argue that the digital world has become an extension of the physical world.
These technological tools are a part of who we are. This has led to many ethical and societal implications. Our Facebook profiles can be processed to infer secondary information about us, such as sexual orientation, political and religious views, race, substance use, intelligence, and personality. Some argue that many of our devices may be mapping our every move. Your browsing history could be spied on and even sold on the open market.
While the argument to protect privacy and individuals’ information is valid to a certain extent, we may also have to accept the possibility that privacy will become obsolete in the future. We have inherently become more open as a society in the digital world, voluntarily sharing our identities, interests, views, and personalities.

There also seems to be a tension between the positive trend towards mass transparency and the need to protect privacy. Many advocate for a massive decentralization and openness of information through mechanisms like blockchain.
The question we are left with is, at what point does the tradeoff between transparency and privacy become detrimental? We want to live in a world of fewer secrets, but also don’t want to live in a world where our every move is followed (not to mention our every feeling, thought and interaction). So, how do we find a balance?
Traditionally, privacy is used synonymously with secrecy. Many are led to believe that if you keep your personal information secret, then you’ve accomplished privacy. Danny Weitzner, director of the MIT Internet Policy Research Initiative, rejects this notion and argues that this old definition of privacy is dead.
From Weitzner’s perspective, protecting privacy in the digital age means creating rules that require governments and businesses to be transparent about how they use our information. In other terms, we can’t bring the business of data to an end, but we can do a better job of controlling it. If these stakeholders spy on our personal information, then we should have the right to spy on how they spy on us.
The Role of Policy and Discourse
Almost always, policy has been too slow to adapt to the societal and ethical implications of technological progress. And sometimes the wrong laws can do more harm than good. For instance, in March, the US House of Representatives voted to allow internet service providers to sell your web browsing history on the open market.
More often than not, the bureaucratic nature of governance can’t keep up with exponential growth. New technologies are emerging every day and transforming society. Can we confidently claim that our world leaders, politicians, and local representatives are having these conversations and debates? Are they putting a focus on the ethical and societal implications of emerging technologies? Probably not.
We also can’t underestimate the role of public awareness and digital activism. There needs to be an emphasis on educating and engaging the general public about the complexities of these issues and the potential solutions available. The current solution may not be robust or clear, but having these discussions will get us there.
Stock Media provided by blasbike / Pond5


#431058 How to Make Your First Chatbot With the ...

You’re probably wondering what Game of Thrones has to do with chatbots and artificial intelligence. Before I explain this weird connection, I need to warn you that this article may contain some serious spoilers. Continue with your reading only if you are a passionate GoT follower who watches new episodes immediately after they come out.
Why are chatbots so important anyway?
According to the study “When Will AI Exceed Human Performance?,” researchers believe there is a 50% chance artificial intelligence will be able to outperform humans at virtually every task by around the year 2060. This technology has already replaced dozens of customer service and sales positions and helped businesses make substantial savings.
Apart from the obvious business advantages, chatbot creation can be fun. You can create an artificial personality with a strong attitude and a unique set of traits and flaws. It’s like creating a new character for your favorite TV show. That’s why I decided to explain the most important elements of the chatbot creation process by using the TV characters we all know and love (or hate).
Why Game of Thrones?
Game of Thrones is the most popular TV show in the world. More than 10 million viewers watched the seventh season premiere, and you have probably seen internet users fanatically discussing the series’ characters, storyline, and possible endings.
Apart from writing about chatbots, I’m also a GoT fanatic, and I will base this chatbot on one of the characters from my favorite series. But before you find out the name of my bot, you should read a few lines about incredible free tools that allow us to build chatbots without coding.
Are chatbots expensive?
Today, you can create a chatbot even if you don’t know how to code. Most chatbot building platforms offer at least one free plan that allows you to use basic functionalities, create your bot, deploy it to Facebook Messenger, and analyze its performance. Free plans usually allow your bot to talk to a limited number of users.
Why should you personalize your bot?
Every platform will ask you to write a bot’s name before you start designing conversations. You will also be able to add the bot’s photograph and bio. Personalizing your bot is the only way to ensure that you will stick to the same personality and storyline throughout the building process. Users often see chatbots as people, and by giving your bot an identity, you will make sure that it doesn’t sound like it has multiple personality disorder.
I think connecting my chatbot with a GoT character will help readers understand the process of chatbot creation.
And the name of our GoT chatbot is…
…Cersei. She is mean, pragmatic, and fearless, and she would do anything to stay on the Iron Throne. Many people would rather hang out with Daenerys or Jon Snow. These characters are honest, noble, and good-hearted, which means their actions are often predictable.
Cersei, on the other hand, is the queen of intrigues. As the meanest and the most vengeful character in the series, she has an evil plan for everybody who steps on her toes. While viewers can easily guess where Jon and Daenerys stand, there are dozens of questions they would like to ask Cersei. But before we start talking to our bot, we need to build her personality by using the most basic elements of chatbot interaction.
Choosing the bot’s name on Botsify.
Welcome / Greeting Message
The welcome message is the greeting Cersei says to every commoner who clicks on the ‘start conversation’ button. She is not a welcoming person (ask Sansa), except if you are a banker from Braavos. Her introductory message may sound something like this:
“Dear {{user_full_name}}, My name is Cersei of the House Lannister, the First of Her Name, Queen of the Andals and the First Men, Protector of the Seven Kingdoms. You can ask me questions, and I will answer them. If the question is not worth answering, I will redirect you to Ser Gregor Clegane, who will give you a step-by-step course on how to talk to the Queen of Westeros.”
Creating the welcome message on Chatfuel
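Under the hood, platforms substitute profile fields into placeholders like {{user_full_name}} before the message is sent. The short Python sketch below illustrates that substitution step; the render function and the profile dictionary are my own illustrative assumptions, not Chatfuel’s or Botsify’s actual API.

```python
import re

# Illustrative sketch of rendering a {{placeholder}} template with user
# profile data -- not the real API of Chatfuel, Botsify, or any platform.
WELCOME = ("Dear {{user_full_name}}, My name is Cersei of the House Lannister. "
           "You can ask me questions, and I will answer them.")

def render(template: str, profile: dict) -> str:
    """Replace each {{key}} with the matching value from the user's profile."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: profile.get(m.group(1), ""), template)

print(render(WELCOME, {"user_full_name": "Jon Snow"}))
# Dear Jon Snow, My name is Cersei of the House Lannister. ...
```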
Default Message / Answer
In the bot game, users, bots, and their creators often need to learn from failed attempts and mistakes. The default message is the text Cersei will send whenever you ask her a question she doesn’t understand. Knowing Cersei, it would sound something like this:
“Ser Gregor, please escort {{user_full_name}} to the dungeon.”
Creating default message on Botsify
Menu
To avoid calling out the Mountain every time someone asks her a question, Cersei might give you a few (safe) options to choose from. The best way to do this is by using a menu function. We can classify the questions people want to ask Cersei into several different categories:

Iron Throne
Relationship with Jaime — OK, this isn’t a “safe option”; get ready to get up close and personal with Ser Gregor Clegane.
War plans
Euron Greyjoy

After users choose a menu item, Cersei can give them a default response on the topic or set up a plot that will make their lives miserable. Knowing Cersei, she will probably go for the second option.
Adding chatbot menu on Botsify
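Conceptually, a menu is just a mapping from each button label to the reply (or follow-up block) it triggers. The sketch below illustrates that idea in Python with made-up replies; on a real platform the menu is configured through the builder interface rather than code.

```python
# Illustrative menu: each button label maps to the reply it triggers.
# Unknown choices fall back to Cersei's default message.
DEFAULT = "Ser Gregor, please escort our guest to the dungeon."

MENU = {
    "Iron Throne": "The throne is mine by right. Next question.",
    "Relationship with Jaime": DEFAULT,  # not a 'safe' topic
    "War plans": "A queen does not discuss her plans with commoners.",
    "Euron Greyjoy": "A useful idiot with a large fleet. Nothing more.",
}

def handle_menu_choice(choice: str) -> str:
    return MENU.get(choice, DEFAULT)

print(handle_menu_choice("War plans"))
```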
Stories / Blocks
This feature allows us to build a longer Cersei-to-user interaction. The structure of stories and blocks is different on every chatbot platform, but most of them use keywords and phrases for finding out the user’s intention.

Keywords — where the bot recognizes a certain keyword within the user’s reply. Users who have chosen the ‘war plans’ option might ask Cersei how she is planning to defeat Daenerys’s dragons. We can add ‘dragon’ and ‘dragons’ as keywords, and connect them with an answer that will sound something like this:

“Dragons are not invulnerable as you may think. Maester Qyburn is developing a weapon that will bring them down for good!”
Adding keywords on Chatfuel
People may also ask her about White Walkers: “Do you plan to join Daenerys and Jon Snow in a fight against the White Walkers?” After we add ‘White Walker’ and ‘White Walkers’ to the keyword list, Cersei will answer:
“White Walkers? Do you think the Queen of Westeros has enough free time to think about creatures from fairy tales and legends?”
Adding Keywords on Botsify
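The logic behind keyword matching is simple enough to sketch: scan the user’s message for any configured keyword and return the linked answer, falling back to the default message otherwise. The snippet below is a hypothetical illustration of that behavior, not Botsify’s or Chatfuel’s actual engine.

```python
# Illustrative keyword matching: if any configured keyword appears in the
# message, return the linked answer; otherwise fall back to the default.
KEYWORD_ANSWERS = [
    ({"dragon", "dragons"},
     "Dragons are not invulnerable as you may think. Maester Qyburn is "
     "developing a weapon that will bring them down for good!"),
    ({"white walker", "white walkers"},
     "White Walkers? Do you think the Queen of Westeros has enough free time "
     "to think about creatures from fairy tales and legends?"),
]
DEFAULT = "Ser Gregor, please escort our guest to the dungeon."

def keyword_reply(message: str) -> str:
    text = message.lower()
    for keywords, answer in KEYWORD_ANSWERS:
        if any(keyword in text for keyword in keywords):
            return answer
    return DEFAULT

print(keyword_reply("How will you defeat the dragons?"))
```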

Phrases — longer, more specific wordings that the bot can be trained to recognize. Many people would like to ask Cersei if she’s going to marry Euron Greyjoy after the war ends. We could add ‘Euron’ as a keyword, but then we wouldn’t be sure what answer the user is expecting. Instead, we can use the phrase ‘(Will you) marry Euron Greyjoy (after the war?)’. Just to be sure, we should also add a few alternative phrases like ‘(Do you plan on) marrying Euron Greyjoy (after the war),’ ‘(Will you) end up with Euron Greyjoy (after the war?)’, ‘(Will) Euron Greyjoy be the new King?’ etc. Cersei would probably answer this inquiry in her style:

“Of course not, Euron is a useful idiot. I will use his fleet and send him back to the Iron Islands, where he belongs.”
Adding phrases on Botsify
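Phrase matching can be approximated by comparing the user’s message against each trained phrasing and answering only when the overlap is high enough. The naive token-overlap score below is a stand-in of my own, purely to illustrate the idea; real platforms use their own (usually smarter) matching.

```python
# Illustrative phrase/intent matching: several alternative phrasings map to
# one answer; a naive token-overlap score stands in for the platform's own
# matching, with a fallback below the threshold.
EURON_PHRASES = [
    "will you marry euron greyjoy after the war",
    "do you plan on marrying euron greyjoy after the war",
    "will you end up with euron greyjoy after the war",
    "will euron greyjoy be the new king",
]
EURON_ANSWER = ("Of course not, Euron is a useful idiot. I will use his fleet "
                "and send him back to the Iron Islands, where he belongs.")
DEFAULT = "Ser Gregor, please escort our guest to the dungeon."

def phrase_reply(message: str, threshold: float = 0.6) -> str:
    words = set(message.lower().strip("?!. ").split())
    best = max(len(words & set(p.split())) / len(set(p.split())) for p in EURON_PHRASES)
    return EURON_ANSWER if best >= threshold else DEFAULT

print(phrase_reply("Will you marry Euron Greyjoy?"))  # matched
print(phrase_reply("What is for dinner?"))            # falls back to DEFAULT
```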
Forms
We have already asked Cersei several questions, and now she would like to ask us something. She can do so by using the form/user input feature. Most tools allow us to add a question and the criteria for checking the user’s answer. If the user provides an answer that matches the predefined format (like an email address, phone number, or ZIP code), the bot will identify and extract it. If the answer doesn’t fit the predefined criteria, the bot will notify the user and ask him or her to try again.
If Cersei were to ask you a question, she would probably want to know your address, so she could send her guards to fill your basement with barrels of wildfire.
Creating forms on Botsify
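How a form checks the reply is easy to sketch: compare the answer against a pattern for the expected format (an email address, a ZIP code) and re-prompt on a mismatch. The patterns and prompts below are my own illustrative choices, not any platform’s built-in form validation.

```python
import re

# Illustrative form validation: check the user's reply against a predefined
# pattern and re-prompt when it does not fit.
FORM_PATTERNS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "zip": re.compile(r"^\d{5}(-\d{4})?$"),
}

def validate(kind: str, answer: str) -> str:
    if FORM_PATTERNS[kind].match(answer.strip()):
        return f"Noted: {answer.strip()}. The Queen thanks you for your cooperation."
    return "That does not look like a valid answer. Try again, or meet Ser Gregor."

print(validate("zip", "82071"))           # accepted
print(validate("email", "not-an-email"))  # re-prompt
```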
Templates
If you have problems building your first chatbot, templates can help you create the basic conversation structure. Unfortunately, not all platforms offer this feature for free. Snatchbot currently has the most comprehensive list of free templates. There you can choose a pre-built layout. The template selection ranges from simple FAQ bots to ones created for a specific industry, like banking, airline, healthcare, or e-commerce.
Choosing templates on Snatchbot
Plugins
Most tools also provide plugins that can be used for making the conversations more meaningful. These plugins allow Cersei to send images, audio and video files. She can unleash her creativity and make you suffer by sending you her favorite GoT execution videos.

With the help of integrations, Cersei can talk to you on Facebook Messenger, Telegram, WeChat, Slack, and many other communication apps. She can also sell her fan gear and ask you for donations by integrating in-bot payments from PayPal accounts. Her sales pitch will probably sound something like this:
“Gold wins wars! Would you rather invest your funds in a member of a respected family, who always pays her debts, or in the chaotic war endeavor of a crazy revolutionary, whose strength lies in three flying lizards? If your pockets are full of gold, you are already on my side. Now you can complete your checkout on PayPal.”
Chatbot building is now easier than ever, and even small businesses are starting to use the incredible benefits of artificial intelligence. If you still don’t believe that chatbots can replace customer service representatives, I suggest you try to develop a bot based on your favorite TV show, movie or book character and talk with him/her for a while. This way, you will be able to understand the concept that stands behind this amazing technology and use it to improve your business.
Now I’m off to talk to Cersei. Maybe she will feed me some Season 8 spoilers.
This article was originally published by Chatbots Magazine.
Image credits for screenshots in post: Branislav Srdanovic
Banner stock media provided by new_vision_studio / Pond5


#426435 UW-Led Gaming and Robotics Project Helps ...

May 17, 2016 — When Jacqueline Leonard proposed a program that would introduce gaming and robotics into public school classes to help improve mathematics learning, the University of Wyoming College of Education professor hoped it would be a tool for students to become interested in college careers.
Three years later, the project has shown positive results among the original eight Wyoming schools that were introduced to the Innovative Technology Experiences for Students and Teachers (ITEST) program. The project was supported by a three-year, $1.2 million grant from the National Science Foundation (NSF).
The “Visualization Basics: uGame-iCompute Project” was designed to help teachers engage fifth- through ninth-graders in gaming and robotics to promote interest in science, technology, engineering and mathematics (STEM) programs.
UW’s project has engaged elementary and middle school students in at least 24 Wyoming schools since the ITEST program was first introduced in 2013. Some school districts have participated in the program since year one of the three-year project, and nearly 900 students have participated during that time.
The eight original schools participating were Arapahoe Middle School, Laramie Junior High School, Powell Middle School, University Park Elementary School (Casper), UW Lab School, Wheatland Middle School, Worland Middle School and Wyoming Indian Middle School. Since then, seven more school districts joined the program in year two and nine more in year three.
“Robotics and game design were used as a hook to enhance children’s interest in STEM and STEM careers. We also were interested in developing computational thinking skills and the processes that we know students need to be successful in computer science and engineering,” Leonard says. “Finally, we wanted children to understand how mathematics, technology and communication are critical to 21st century careers.”
Leonard, UW Science and Mathematics Teaching Center director, originally put together a multidisciplinary team from the UW colleges of Education, Engineering and Applied Science, and Arts and Sciences to research a question that has been part of her research agenda for several years: Can gaming and robotics be used to teach computational thinking skills to students in culturally sensitive ways?
“I am so thankful for this program. What a great way to get students prepared for possible careers in their future. Many of the jobs that students will have after they graduate haven’t even been created yet,” says Kait Quinton, who teaches seventh-grade math at Rock Springs Junior High School. “This program helps to enhance students’ critical thinking skills in a way that is fun and interactive. They learn so quickly. It is incredible, because I feel like I teach them the foundation of robotics and game design, and they just take it and run. By the end, they are the ones teaching me.”
During the multiphase project, team members first trained teachers to develop mathematical and scientific lessons that were culturally relevant to their students. Leonard and her supporters worked with the teachers to analyze the impact on students’ overall learning. The research team also worked with participants interested in becoming peer trainers to help extend the project’s reach after the grant period ended.
Program’s Positive Results
“The data reveal that using intact classrooms at the middle school level and elementary students during after-school programs reduced student attrition and ensured broader participation of girls and underrepresented minority students,” Leonard says.
Additionally, UW researchers have observed improved student development of computational thinking and problem-solving skills. Leonard says that, early in the project, there was a learning curve that teachers and students had to overcome to learn the programming and software.
“Overall, students learned how to make their own games, which involved formulating problems, abstraction, use of algorithms, logical thinking, analyzing and debugging, and generalizing and transfer of knowledge,” Leonard says. “They also learned to use 21st century skills as they worked in teams to solve problems and created products for self-enjoyment and competition.”
Ty Ruby, who is a fourth- and fifth-grade special education instructor at North Evanston Elementary School, says the robotics and gaming program taught his students to work together on projects. He introduced the robotics class at Clark Elementary School.
“I believe this is a great program for students. I was so impressed with how the students worked together. Their conversations about how to solve issues or problems they were having were the best,” he says. “This provides a safe environment for students to talk about ideas with programming and working together. The students reacted really well to the program. They were excited to come to school and work with their robots.”
Robotics teams compete at local competitions, and gaming teams have taken field trips to the National Center for Atmospheric Research-Wyoming Supercomputing Center in Cheyenne. Teachers accepted into the program enrolled in continuing education courses, led after-school programs, and further developed instructional skills on how to incorporate cultural uniqueness into fun science and technology projects.
The NSF-sponsored grant ended this semester, but Leonard says her research team has been granted a “no-cost extension,” meaning the project will now run through September 2017. Planning for the next phase of the program is underway, she adds.
“We intend to go to more school districts and work with both elementary and middle school students,” Leonard says. “It has been a pleasure working with teachers and students in Wyoming. The excitement and energy observed in the classrooms and after-school clubs were infectious. The students loved the program and learned a great deal.”
For more information about the program, visit the website at www.ugameicompute.com/ or contact Leonard at (307) 766-3776 or jleona12@uwyo.edu.
The original version of this article can be found at:
http://www.uwyo.edu/uw/news/2016/05/uw-led-gaming-and-robotics-project-helps-boost-student-math-scores.html
The post UW-Led Gaming and Robotics Project Helps Boost Student Math Scores appeared first on Roboticmagazine.
