Tag Archives: people

#431159 How Close Is Turing’s Dream of ...

The quest for conversational artificial intelligence has been a long one.
When Alan Turing, the father of modern computing, racked his considerable brain for a test that would truly indicate a computer program was intelligent, he landed on conversation. If a computer could convince a panel of human judges that they were talking to a human—if it could hold a convincing conversation—then, he reasoned, artificial intelligence would have advanced to the point where it was indistinguishable from human intelligence.
This gauntlet was thrown down in 1950 and, so far, no computer program has managed to pass the Turing test.
There have been some notable attempts, however: as early as 1966—when computers were still programmed with punch cards—Joseph Weizenbaum developed a piece of natural language processing software called ELIZA. ELIZA was intended to respond to human conversation by pretending to be a psychotherapist; you can still talk to her today.
Talking to ELIZA is a little strange. She’ll often rephrase things you’ve said back at you: so, for example, if you say “I’m feeling depressed,” she might say “Did you come to me because you are feeling depressed?” When she’s unsure about what you’ve said, ELIZA will usually respond with “I see,” or perhaps “Tell me more.”
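Under the hood, ELIZA's trick is little more than pattern matching and pronoun reflection. The toy sketch below shows the idea in Python; the rules and replies here are invented for illustration and are far simpler than Weizenbaum's original script.

```python
import random
import re

# A tiny map of first-person words to second-person ones, so a statement
# can be echoed back as a question ("my" -> "your", "i'm" -> "you are").
REFLECTIONS = {"i": "you", "i'm": "you are", "my": "your", "am": "are", "me": "you"}

# (pattern, response template) pairs; the captured fragment is reflected
# and re-inserted into the reply.
RULES = [
    (r"i'?m feeling (.*)", "Did you come to me because you are feeling {0}?"),
    (r"i want (.*)", "Why do you want {0}?"),
]

# Stock replies for anything the rules don't cover.
FALLBACKS = ["I see.", "Tell me more."]

def reflect(fragment):
    # Swap pronouns word by word so the echo reads naturally.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def eliza(statement):
    text = statement.lower().strip().rstrip(".!")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(FALLBACKS)
```

Say "I'm feeling depressed" to this version and it replies "Did you come to me because you are feeling depressed?"—no understanding required, just a regular expression and a pronoun table.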
For the first few lines of dialogue, especially if you treat her as your therapist, ELIZA can be convincingly human. This was something Weizenbaum noticed and was slightly alarmed by: people were willing to treat the algorithm as more human than it really was. Before long, even though some of the test subjects knew ELIZA was just a machine, they were opening up with some of their deepest feelings and secrets. They were pouring out their hearts to a machine. When Weizenbaum’s secretary spoke to ELIZA, even though she knew it was a fairly simple computer program, she still insisted Weizenbaum leave the room.
Part of the unexpected reaction ELIZA generated may be because people are more willing to open up to a machine, feeling they won’t be judged, even if the machine is ultimately powerless to do or say anything to really help. The ELIZA effect was named for this computer program: the tendency of humans to anthropomorphize machines, or think of them as human.

Weizenbaum himself, who later became deeply suspicious of the influence of computers and artificial intelligence in human life, was astonished that people were so willing to believe his script was human. He wrote, “I had not realized…that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

“Consciously, you know you’re talking to a big block of code stored somewhere out there in the ether. But subconsciously, you might feel like you’re interacting with a human.”

The ELIZA effect may have disturbed Weizenbaum, but it has intrigued and fascinated others for decades. Perhaps you’ve noticed it in yourself, when talking to an AI like Siri, Alexa, or Google Assistant—the occasional response can seem almost too real. Consciously, you know you’re talking to a big block of code stored somewhere out there in the ether. But subconsciously, you might feel like you’re interacting with a human.
Yet the ELIZA effect, as enticing as it is, has proved a source of frustration for people who are trying to create conversational machines. Natural language processing has proceeded in leaps and bounds since the 1960s. Now you can find friendly chatbots like Mitsuku—which has frequently won the Loebner Prize, awarded to the machines that come closest to passing the Turing test—that aim to have a response to everything you might say.
In the commercial sphere, Facebook has opened up its Messenger program and provided software for people and companies to design their own chatbots. The idea is simple: why have an app for, say, ordering pizza when you can just chatter to a robot through your favorite messenger app and make the order in natural language, as if you were telling your friend to get it for you?
Startups like Semantic Machines hope their AI assistant will be able to interact with you just like a secretary or PA would, but with an unparalleled ability to retrieve information from the internet. They may soon get there.
But people who engineer chatbots—both in the social and commercial realm—encounter a common problem: the users, perhaps subconsciously, assume the chatbots are human and become disappointed when they’re not able to have a normal conversation. Frustration with miscommunication can often stem from raised initial expectations.
So far, no machine has really been able to crack the problem of context retention—understanding what’s been said before, referring back to it, and crafting responses based on the point the conversation has reached. Even Mitsuku will often struggle to remember the topic of conversation beyond a few lines of dialogue.

“For everything you say, there could be hundreds of responses that would make sense. When you travel a layer deeper into the conversation, those factors multiply until you end up with vast numbers of potential conversations.”

This is, of course, understandable. Conversation can be almost unimaginably complex. For everything you say, there could be hundreds of responses that would make sense. When you travel a layer deeper into the conversation, those factors multiply until—like possible games of Go or chess—you end up with vast numbers of potential conversations.
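A back-of-the-envelope calculation shows how fast this blows up, assuming a round hundred sensible replies per conversational turn (the numbers are illustrative, not measured):

```python
def possible_conversations(replies_per_turn, turns):
    # Every turn multiplies the number of distinct dialogue paths,
    # so the count grows exponentially with conversation depth.
    return replies_per_turn ** turns

# 100 sensible replies per turn, five turns deep:
print(possible_conversations(100, 5))  # 10,000,000,000 distinct dialogues
```

Ten billion possible five-turn dialogues, from just a hundred options per turn: no wonder scripted approaches run out of road so quickly.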
But that hasn’t deterred people from trying. Most recently, tech giant Amazon has taken up the challenge in an effort to make its AI voice assistant, Alexa, friendlier. The company has been running the Alexa Prize competition, which offers a cool $500,000 to the winning AI—and a bonus of a million dollars to any team that can create a ‘socialbot’ capable of sustaining a conversation with human users for 20 minutes on a variety of themes.
Topics Alexa likes to chat about include science and technology, politics, sports, and celebrity gossip. The finalists were recently announced: chatbots from universities in Prague, Edinburgh, and Seattle. Finalists were chosen according to the ratings from Alexa users, who could trigger the socialbots into conversation by saying “Hey Alexa, let’s chat,” although the reviews for the socialbots weren’t always complimentary.
By narrowing down the fields of conversation to a specific range of topics, the Alexa Prize has cleverly started to get around the problem of context—just as commercially available chatbots hope to do. It’s much easier to model an interaction that goes a few layers into the conversational topic if you’re limiting those topics to a specific field.
Developing a machine that can hold almost any conversation with a human interlocutor convincingly might be difficult. It might even be a problem that requires artificial general intelligence to truly solve, rather than the previously employed approaches of scripted answers or neural networks that associate inputs with responses.
But a machine that can have meaningful interactions that people might value and enjoy could be just around the corner. The Alexa Prize winner is announced in November. The ELIZA effect might mean we will relate to machines sooner than we’d thought.
So, go well, little socialbots. If you ever want to discuss the weather or what the world will be like once you guys take over, I’ll be around. Just don’t start a therapy session.
Image Credit: Shutterstock

Posted in Human Robots

#431155 What It Will Take for Quantum Computers ...

Quantum computers could give the machine learning algorithms at the heart of modern artificial intelligence a dramatic speed up, but how far off are we? An international group of researchers has outlined the barriers that still need to be overcome.
This year has seen a surge of interest in quantum computing, driven in part by Google’s announcement that it will demonstrate “quantum supremacy” by the end of 2017. That means solving a problem beyond the capabilities of normal computers, which the company predicts will take 49 qubits—the quantum computing equivalent of bits.
As impressive as such a feat would be, the demonstration is likely to be on an esoteric problem that stacks the odds heavily in the quantum processor’s favor, and getting quantum computers to carry out practically useful calculations will take a lot more work.
But these devices hold great promise for solving problems in fields as diverse as cryptography and weather forecasting. One application people are particularly excited about is whether they could be used to supercharge the machine learning algorithms already transforming the modern world.
The potential is summarized in a recent review paper in the journal Nature written by a group of experts from the emerging field of quantum machine learning.
“Classical machine learning methods such as deep neural networks frequently have the feature that they can both recognize statistical patterns in data and produce data that possess the same statistical patterns: they recognize the patterns that they produce,” they write.
“This observation suggests the following hope. If small quantum information processors can produce statistical patterns that are computationally difficult for a classical computer to produce, then perhaps they can also recognize patterns that are equally difficult to recognize classically.”
Because of the way quantum computers work—taking advantage of strange quantum mechanical effects like entanglement and superposition—algorithms running on them should in principle be able to solve problems much faster than the best known classical algorithms, a phenomenon known as quantum speedup.
Designing these algorithms is tricky work, but the authors of the review note that there has been significant progress in recent years. They highlight multiple quantum algorithms exhibiting quantum speedup that could act as subroutines, or building blocks, for quantum machine learning programs.
We still don’t have the hardware to implement these algorithms, but according to the researchers the challenge is a technical one, and clear paths to overcoming it exist. More challenging, they say, are four fundamental conceptual problems that could limit the applicability of quantum machine learning.
The first two are the input and output problems. Quantum computers, unsurprisingly, deal with quantum data, but the majority of the problems humans want to solve relate to the classical world. Translating significant amounts of classical data into quantum systems can take so much time that it cancels out the benefit of the faster processing speeds, and the same is true of reading out the solution at the end.
The input problem could be mitigated to some extent by the development of quantum random access memory (qRAM)—the equivalent to RAM in a conventional computer used to provide the machine with quick access to its working memory. A qRAM can be configured to store classical data but allow the quantum computers to access all that information simultaneously as a superposition, which is required for a variety of quantum algorithms. But the authors note this is still a considerable engineering challenge and may not be sustainable for big data problems.
Closely related to the input/output problem is the costing problem. At present, the authors say very little is known about how many gates—or operations—a quantum machine learning algorithm will require to solve a given problem when operated on real-world devices. It’s expected that on highly complex problems they will offer considerable improvements over classical computers, but it’s not clear how big problems have to be before this becomes apparent.
Finally, whether or when these advantages kick in may be hard to prove, something the authors call the benchmarking problem. Claiming that a quantum algorithm can outperform any classical machine learning approach requires extensive testing against these other techniques, which may not be feasible.
They suggest that this could be sidestepped by lowering the standards quantum machine learning algorithms are currently held to. This makes sense, as it doesn’t really matter whether an algorithm is intrinsically faster than all possible classical ones, as long as it’s faster than all the existing ones.
Another way of avoiding some of these problems is to apply these techniques directly to quantum data, the actual states generated by quantum systems and processes. The authors say this is probably the most promising near-term application for quantum machine learning and has the added benefit that any insights can be fed back into the design of better hardware.
“This would enable a virtuous cycle of innovation similar to that which occurred in classical computing, wherein each generation of processors is then leveraged to design the next-generation processors,” they conclude.
Image Credit: archy13 / Shutterstock.com


#431081 How the Intelligent Home of the Future ...

As Dorothy famously said in The Wizard of Oz, there’s no place like home. Home is where we go to rest and recharge. It’s familiar, comfortable, and our own. We take care of our homes by cleaning and maintaining them, and fixing things that break or go wrong.
What if our homes, on top of giving us shelter, could also take care of us in return?
According to Chris Arkenberg, this could be the case in the not-so-distant future. As part of Singularity University’s Experts On Air series, Arkenberg gave a talk called “How the Intelligent Home of The Future Will Care For You.”
Arkenberg is a research and strategy lead at Orange Silicon Valley, and was previously a research fellow at the Deloitte Center for the Edge and a visiting researcher at the Institute for the Future.
Arkenberg told the audience that there’s an evolution going on: homes are going from being smart to being connected, and will ultimately become intelligent.
Market Trends
Intelligent home technologies are just now budding, but broader trends point to huge potential for their growth. We as consumers already expect continuous connectivity wherever we go—what do you mean my phone won’t get reception in the middle of Yosemite? What do you mean the smart TV is down and I can’t stream Game of Thrones?
As connectivity has evolved from a privilege to a basic expectation, Arkenberg said, we’re also starting to have a better sense of what it means to give up our data in exchange for services and conveniences. It’s so easy to click a few buttons on Amazon and have stuff show up at your front door a few days later—never mind that data about your purchases gets recorded and aggregated.
“Right now we have single devices that are connected,” Arkenberg said. “Companies are still trying to show what the true value is and how durable it is beyond the hype.”

Connectivity is the basis of an intelligent home. To take a dumb object and make it smart, you get it online. Belkin’s Wemo, for example, lets users control lights and appliances wirelessly and remotely, and can be paired with Amazon Echo or Google Home for voice-activated control.
Speaking of voice-activated control, Arkenberg pointed out that physical interfaces are evolving, too, to the point that we’re actually getting rid of interfaces entirely, or transitioning to ‘soft’ interfaces like voice or gesture.
Drivers of Change
Consumers are open to smart home tech and companies are working to provide it. But what are the drivers making this tech practical and affordable? Arkenberg said there are three big ones:
Computation: Computers have gotten exponentially more powerful over the past few decades. If it wasn’t for processors that could handle massive quantities of information, nothing resembling an Echo or Alexa would even be possible. Artificial intelligence and machine learning are powering these devices, and they hinge on computing power too.
Sensors: “There are more things connected now than there are people on the planet,” Arkenberg said. Market research firm Gartner estimates there are 8.4 billion connected things currently in use. Wherever digital can replace hardware, it’s doing so. Cheaper sensors mean we can connect more things, which can then connect to each other.
Data: “Data is the new oil,” Arkenberg said. “The top companies on the planet are all data-driven giants. If data is your business, though, then you need to keep finding new ways to get more and more data.” Home assistants are essentially data collection systems that sit in your living room and collect data about your life. That data in turn sets up the potential of machine learning.
Colonizing the Living Room
Alexa and Echo can turn lights on and off, and Nest can help you be energy-efficient. But beyond these, what does an intelligent home really look like?
Arkenberg’s vision of an intelligent home uses sensing, data, connectivity, and modeling to manage resource efficiency, security, productivity, and wellness.
Autonomous vehicles provide an interesting comparison: they’re surrounded by sensors that constantly map the world, building dynamic models that let the vehicle understand and predict the changes around it. Might we want this to become a model for our homes, too? By making them smart and connecting them, Arkenberg said, they’d become “more biological.”
There are already several products on the market that fit this description. RainMachine uses weather forecasts to adjust home landscape watering schedules. Neurio monitors energy usage, identifies areas where waste is happening, and makes recommendations for improvement.
These are small steps in connecting our homes with knowledge systems and giving them the ability to understand and act on that knowledge.
He sees the homes of the future being equipped with digital ears (in the form of home assistants, sensors, and monitoring devices) and digital eyes (in the form of facial recognition technology and machine vision to recognize who’s in the home). “These systems are increasingly able to interrogate emotions and understand how people are feeling,” he said. “When you push more of this active intelligence into things, the need for us to directly interface with them becomes less relevant.”
Could our homes use these same tools to benefit our health and wellness? FREDsense uses bacteria to create electrochemical sensors that can be applied to home water systems to detect contaminants. If that’s not personal enough for you, get a load of this: ClinicAI can be installed in your toilet bowl to monitor and evaluate your biowaste. What’s the point, you ask? Early detection of colon cancer and other diseases.
What if one day your toilet’s biowaste analysis system could link up with your fridge, so that when you opened it, it would tell you what to eat, how much, and at what time of day?
Roadblocks to Intelligence
“The connected and intelligent home is still a young category trying to establish value, but the technological requirements are now in place,” Arkenberg said. We’re already used to living in a world of ubiquitous computation and connectivity, and we have entrained expectations about things being connected. For the intelligent home to become a widespread reality, its value needs to be established and its challenges overcome.
One of the biggest challenges will be getting used to the idea of continuous surveillance. We’ll get convenience and functionality if we give up our data, but how far are we willing to go? “Establishing security and trust is going to be a big challenge moving forward,” Arkenberg said.
There are also cost and reliability concerns, interoperability and fragmentation across devices, and, conversely, what Arkenberg called ‘platform lock-on,’ where you’d end up relying on a single provider’s system and be unable to integrate devices from other brands.
Ultimately, Arkenberg sees homes being able to learn about us, manage our scheduling and transit, watch our moods and our preferences, and optimize our resource footprint while predicting and anticipating change.
“This is the really fascinating provocation of the intelligent home,” Arkenberg said. “And I think we’re going to start to see this play out over the next few years.”
Sounds like a home Dorothy wouldn’t recognize, in Kansas or anywhere else.
Stock Media provided by adam121 / Pond5


#431058 How to Make Your First Chatbot With the ...

You’re probably wondering what Game of Thrones has to do with chatbots and artificial intelligence. Before I explain this weird connection, I need to warn you that this article may contain some serious spoilers. Continue with your reading only if you are a passionate GoT follower, who watches new episodes immediately after they come out.
Why are chatbots so important anyway?
According to the study “When Will AI Exceed Human Performance?,” researchers believe there is a 50% chance artificial intelligence could take over all human jobs by around the year 2060. This technology has already replaced dozens of customer service and sales positions and helped businesses make substantial savings.
Apart from the obvious business advantages, chatbot creation can be fun. You can create an artificial personality with a strong attitude and a unique set of traits and flaws. It’s like creating a new character for your favorite TV show. That’s why I decided to explain the most important elements of the chatbot creation process by using the TV characters we all know and love (or hate).
Why Game of Thrones?
Game of Thrones is the most popular TV show in the world. More than 10 million viewers watched the seventh season premiere, and you have probably seen internet users fanatically discussing the series’ characters, storyline, and possible endings.
Apart from writing about chatbots, I’m also a GoT fanatic, and I will base this chatbot on one of the characters from my favorite series. But before you find out the name of my bot, you should read a few lines about incredible free tools that allow us to build chatbots without coding.
Are chatbots expensive?
Today, you can create a chatbot even if you don’t know how to code. Most chatbot building platforms offer at least one free plan that allows you to use basic functionalities, create your bot, deploy it to Facebook Messenger, and analyze its performance. Free plans usually allow your bot to talk to a limited number of users.
Why should you personalize your bot?
Every platform will ask you to write a bot’s name before you start designing conversations. You will also be able to add the bot’s photograph and bio. Personalizing your bot is the only way to ensure that you will stick to the same personality and storyline throughout the building process. Users often see chatbots as people, and by giving your bot an identity, you will make sure that it doesn’t sound like it has multiple personality disorder.
I think connecting my chatbot with a GoT character will help readers understand the process of chatbot creation.
And the name of our GoT chatbot is…
…Cersei. She is mean, pragmatic, and fearless, and she would do anything to stay on the Iron Throne. Many people would rather hang out with Daenerys or Jon Snow; these characters are honest, noble, and good-hearted, which means their actions are often predictable.
Cersei, on the other hand, is the queen of intrigues. As the meanest and the most vengeful character in the series, she has an evil plan for everybody who steps on her toes. While viewers can easily guess where Jon and Daenerys stand, there are dozens of questions they would like to ask Cersei. But before we start talking to our bot, we need to build her personality by using the most basic elements of chatbot interaction.
Choosing the bot’s name on Botsify.
Welcome / Greeting Message
The welcome message is the greeting Cersei says to every commoner who clicks on the ‘start conversation’ button. She is not a welcoming person (ask Sansa), except if you are a banker from Braavos. Her introductory message may sound something like this:
“Dear {{user_full_name}}, My name is Cersei of the House Lannister, the First of Her Name, Queen of the Andals and the First Men, Protector of the Seven Kingdoms. You can ask me questions, and I will answer them. If the question is not worth answering, I will redirect you to Ser Gregor Clegane, who will give you a step-by-step course on how to talk to the Queen of Westeros.”
Creating the welcome message on Chatfuel
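Platforms fill in variables like {{user_full_name}} for you; the sketch below shows roughly how that substitution works. The `render` helper is my own illustration, not any platform's actual API.

```python
import re

def render(template, context):
    # Replace each {{variable}} with its value from the context dict;
    # variables missing from the context are left untouched.
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: context.get(m.group(1), m.group(0)),
                  template)

welcome = "Dear {{user_full_name}}, My name is Cersei of the House Lannister."
print(render(welcome, {"user_full_name": "Jon Snow"}))
# Dear Jon Snow, My name is Cersei of the House Lannister.
```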
Default Message / Answer
In the bot game, users, bots, and their creators often need to learn from failed attempts and mistakes. The default message is the text Cersei will send whenever you ask her a question she doesn’t understand. Knowing Cersei, it would sound something like this:
“Ser Gregor, please escort {{user_full_name}} to the dungeon.”
Creating default message on Botsify
Menu
To avoid calling out the Mountain every time someone asks her a question, Cersei might give you a few (safe) options to choose from. The best way to do this is by using a menu function. We can classify the questions people want to ask Cersei into several different categories:

Iron Throne
Relationship with Jaime — OK, this isn’t a “safe option,” get ready to get close and personal with Ser Gregor Clegane.
War plans
Euron Greyjoy

After users choose a menu item, Cersei can give them a default response on the topic or set up a plot that will make their lives miserable. Knowing Cersei, she will probably go for the second option.
Adding chatbot menu on Botsify
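In code, a menu is essentially a lookup table from the chosen item to a canned reply or handler. A minimal sketch, with replies invented for illustration (only the menu items come from the list above):

```python
# Each menu item maps to a canned reply; the reply text here is made up
# for illustration, not actual platform output.
MENU = {
    "Iron Throne": "The throne is mine by right, and I intend to keep it.",
    "Relationship with Jaime": "Ser Gregor, please escort our guest to the dungeon.",
    "War plans": "A Lannister always pays her debts.",
    "Euron Greyjoy": "A useful idiot with a very useful fleet.",
}

DEFAULT = "Choose a topic worth the Queen's time."

def handle_menu_choice(choice):
    # Unknown selections fall back to a default prompt instead of failing.
    return MENU.get(choice, DEFAULT)
```

The point of a menu is exactly this constraint: by limiting the user to a handful of known inputs, the bot never has to guess intent.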
Stories / Blocks
This feature allows us to build a longer Cersei-to-user interaction. The structure of stories and blocks is different on every chatbot platform, but most of them use keywords and phrases for finding out the user’s intention.

Keywords — where the bot recognizes a certain keyword within the user’s reply. Users who have chosen the ‘war plans’ option might ask Cersei how she is planning to defeat Daenerys’s dragons. We can add ‘dragon’ and ‘dragons’ as keywords, and connect them with an answer that will sound something like this:

“Dragons are not invulnerable as you may think. Maester Qyburn is developing a weapon that will bring them down for good!”
Adding keywords on Chatfuel
People may also ask her about White Walkers: “Do you plan to join Daenerys and Jon Snow in the fight against the White Walkers?” After we add ‘White Walker’ and ‘White Walkers’ to the keyword list, Cersei will answer:
“White Walkers? Do you think the Queen of Westeros has enough free time to think about creatures from fairy tales and legends?”
Adding Keywords on Botsify

Phrases — more complex expressions that the bot can be trained to recognize. Many people would like to ask Cersei if she’s going to marry Euron Greyjoy after the war ends. We can add ‘Euron’ as a keyword, but then we won’t be sure what answer the user is expecting. Instead, we can use the phrase ‘(Will you) marry Euron Greyjoy (after the war?)’. Just to be sure, we should also add a few alternative phrases like ‘(Do you plan on) marrying Euron Greyjoy (after the war),’ ‘(Will you) end up with Euron Greyjoy (after the war?)’, ‘(Will) Euron Greyjoy be the new King?’ etc. Cersei would probably answer this inquiry in her own style:

“Of course not, Euron is a useful idiot. I will use his fleet and send him back to the Iron Islands, where he belongs.”
Adding phrases on Botsify
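Both mechanisms boil down to matching the user's text against stored patterns: phrases first (exact, with the parenthesized parts optional), then keywords (anywhere in the message), then the default message. A rough sketch, using the keywords and replies above; the matching code is my own illustration, not how any particular platform implements it internally.

```python
import re

# Phrase rules: regexes where the platform's parenthesized parts
# become optional groups. Checked first because they are most specific.
PHRASE_RULES = [
    (r"(?:will you )?marry euron greyjoy(?: after the war)?",
     "Of course not, Euron is a useful idiot. I will use his fleet and "
     "send him back to the Iron Islands, where he belongs."),
]

# Keyword rules: fire when any keyword appears anywhere in the message.
KEYWORD_RULES = [
    ({"dragon", "dragons"},
     "Dragons are not invulnerable as you may think. Maester Qyburn is "
     "developing a weapon that will bring them down for good!"),
    ({"white walker", "white walkers"},
     "White Walkers? Do you think the Queen of Westeros has enough free "
     "time to think about creatures from fairy tales and legends?"),
]

DEFAULT = "Ser Gregor, please escort our guest to the dungeon."

def reply(message):
    # Normalize: lowercase and strip trailing punctuation.
    text = message.lower().strip().rstrip("?!.")
    for pattern, answer in PHRASE_RULES:
        if re.fullmatch(pattern, text):
            return answer
    for keywords, answer in KEYWORD_RULES:
        if any(keyword in text for keyword in keywords):
            return answer
    return DEFAULT
```

Asking this version “How will you defeat Daenerys’s dragons?” trips the ‘dragon’ keyword, while “Will you marry Euron Greyjoy after the war?” matches the full phrase; anything else gets a trip to the dungeon.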
Forms
We have already asked Cersei several questions, and now she would like to ask us something. She can do so by using the form/user input feature. Most tools allow us to add a question and the criteria for checking the user’s answer. If the user provides an answer that matches the predefined format (like an email address, phone number, or ZIP code), the bot will identify and extract it. If the answer doesn’t fit the criteria, the bot will notify the user and ask them to try again.
If Cersei were to ask you a question, she would probably want to know your address so she could send her guards to fill your basement with barrels of wildfire.
Creating forms on Botsify
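The checking step can be pictured as a regular expression per field type. The patterns below are deliberately simplified illustrations, much looser than real-world validators:

```python
import re

# One simplified validation pattern per field type; a real platform
# applies similar (but stricter) checks before accepting user input.
VALIDATORS = {
    "email": r"[^@\s]+@[^@\s]+\.[^@\s]+",  # something@domain.tld
    "zip": r"\d{5}(?:-\d{4})?",            # US ZIP or ZIP+4
}

def validate(field, value):
    # True only if a pattern exists for the field and the whole value matches.
    pattern = VALIDATORS.get(field)
    return bool(pattern and re.fullmatch(pattern, value.strip()))

validate("zip", "10001")           # a well-formed ZIP code passes
validate("email", "not-an-email")  # fails, so the bot would ask again
```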
Templates
If you have problems building your first chatbot, templates can help you create the basic conversation structure. Unfortunately, not all platforms offer this feature for free. Snatchbot currently has the most comprehensive list of free templates. There you can choose a pre-built layout. The template selection ranges from simple FAQ bots to ones created for a specific industry, like banking, airline, healthcare, or e-commerce.
Choosing templates on Snatchbot
Plugins
Most tools also provide plugins that can be used for making the conversations more meaningful. These plugins allow Cersei to send images, audio and video files. She can unleash her creativity and make you suffer by sending you her favorite GoT execution videos.

With the help of integrations, Cersei can talk to you on Facebook Messenger, Telegram, WeChat, Slack, and many other communication apps. She can also sell her fan gear and ask you for donations by integrating in-bot payments from PayPal accounts. Her sales pitch will probably sound something like this:
“Gold wins wars! Would you rather invest your funds in a member of a respected family, who always pays her debts, or in the chaotic war endeavor of a crazy revolutionary, whose strength lies in three flying lizards? If your pockets are full of gold, you are already on my side. Now you can complete your checkout on PayPal.”
Chatbot building is now easier than ever, and even small businesses are starting to reap the benefits of artificial intelligence. If you still don’t believe that chatbots can replace customer service representatives, I suggest you try to develop a bot based on your favorite TV show, movie, or book character and talk with it for a while. This way, you will understand the concept behind this technology and be able to use it to improve your business.
Now I’m off to talk to Cersei. Maybe she will feed me some Season 8 spoilers.
This article was originally published by Chatbots Magazine. Read the original post here.
Image credits for screenshots in post: Branislav Srdanovic
Banner stock media provided by new_vision_studio / Pond5


#431022 Robots and AI Will Take Over These 3 ...

We’re no stranger to robotics in the medical field. Robot-assisted surgery is becoming more and more common. Many training programs are starting to include robotic and virtual reality scenarios to provide hands-on training for students without putting patients at risk.
With all of these advances in medical robotics, three niches stand out above the rest: surgery, medical imaging, and drug discovery. How has robotics already begun to influence these practices, and how will it change them for good?
Robot-Assisted Surgery
Robot-assisted surgery was first documented in 1985, when it was used for a neurosurgical biopsy. This led to the use of robotics in a number of similar surgeries, both laparoscopic and traditional. The FDA didn’t approve robotic surgery tools until 2000, when the da Vinci Surgical System hit the market.
The robot-assisted surgery market is expected to grow steadily into 2023 and potentially beyond. The only thing that might stand in the way of this growth is the cost of the equipment. The initial investment may prevent small practices from purchasing the necessary devices.
Medical Imaging
The key to successful medical imaging isn’t the equipment itself. It’s being able to interpret the information in the images. Medical images are some of the most information-dense pieces of data in the medical field and can reveal so much more than a basic visual inspection can.
Robotics and, more specifically, artificial intelligence programs like IBM Watson can help interpret these images more efficiently and accurately. By allowing an AI or basic machine learning program to study the medical images, researchers can find patterns and make more accurate diagnoses than ever before.
Drug Discovery
Drug discovery is a long and often tedious process that includes years of testing and assessment. Artificial intelligence, machine learning and predictive algorithms could help speed up this system.
Imagine if researchers could input the kind of medicine they’re trying to make and the kind of symptoms they’re trying to treat into a computer and let it do the rest. With robotics, that may someday be possible.

This isn’t a perfect solution yet—these systems require massive amounts of data before they can start making decisions or predictions. By feeding data into the cloud where these programs can access it, researchers can take the first steps towards setting up a functional database.
Another benefit of these AI programs is that they might see connections humans would never have thought of. People can make those leaps, but the chances are much lower, and it takes much longer if it happens at all. Simply put, we’re not capable of processing the sheer amount of data that computers can process.
This isn’t a field where we’re worrying about robots stealing jobs.
Quite the opposite, in fact—we want robots to become commonly-used tools that can help improve patient care and surgical outcomes.
A human surgeon might have intuition, but they’ll never have the steadiness that a pair of robotic hands can provide or the data-processing capabilities of a machine learning algorithm. If we let them, these tools could change the way we look at medicine.
Image Credit: Intuitive Surgical
