Tag Archives: computers
#431159 How Close Is Turing’s Dream of ...
The quest for conversational artificial intelligence has been a long one.
When Alan Turing, the father of modern computing, racked his considerable brains for a test that would truly indicate that a computer program was intelligent, he landed on conversation. If a computer could convince a panel of human judges that they were talking to a human—if it could hold a convincing conversation—then it would indicate that artificial intelligence had advanced to the point where it was indistinguishable from human intelligence.
This gauntlet was thrown down in 1950 and, so far, no computer program has managed to pass the Turing test.
There have been some very notable failures, however: Joseph Weizenbaum, as early as 1966—when computers were still programmed with large punch cards—developed a piece of natural language processing software called ELIZA. ELIZA was a program designed to respond to human conversation by pretending to be a psychotherapist; you can still talk to her today.
Talking to ELIZA is a little strange. She’ll often rephrase things you’ve said back at you: so, for example, if you say “I’m feeling depressed,” she might say “Did you come to me because you are feeling depressed?” When she’s unsure about what you’ve said, ELIZA will usually respond with “I see,” or perhaps “Tell me more.”
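Under the hood, responses like these come from simple pattern matching and substitution rather than anything resembling understanding. Here is a minimal sketch of the idea in Python, using a couple of made-up rules rather than Weizenbaum’s original script:

```python
import random
import re

# A few ELIZA-style rules: a regex that captures part of the user's input,
# plus response templates that reflect the captured fragment back.
# These rules are illustrative, not Weizenbaum's original script.
RULES = [
    (re.compile(r"i'?m feeling (.*)", re.I),
     ["Did you come to me because you are feeling {0}?",
      "How long have you been feeling {0}?"]),
    (re.compile(r"i (?:want|need) (.*)", re.I),
     ["Why do you want {0}?",
      "What would it mean to you if you got {0}?"]),
]

# Fallbacks used when no rule matches, mimicking ELIZA's vague prompts.
FALLBACKS = ["I see.", "Tell me more.", "Please go on."]


def respond(user_input: str) -> str:
    """Return an ELIZA-style reply by pattern-matching the input."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return random.choice(templates).format(fragment)
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    print(respond("I'm feeling depressed"))
    # e.g. "Did you come to me because you are feeling depressed?"
```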
For the first few lines of dialogue, especially if you treat her as your therapist, ELIZA can be convincingly human. This was something Weizenbaum noticed and was slightly alarmed by: people were willing to treat the algorithm as more human than it really was. Before long, even though some of the test subjects knew ELIZA was just a machine, they were opening up with some of their deepest feelings and secrets. They were pouring out their hearts to a machine. When Weizenbaum’s secretary spoke to ELIZA, even though she knew it was a fairly simple computer program, she still insisted Weizenbaum leave the room.
Part of the unexpected reaction ELIZA generated may be because people are more willing to open up to a machine, feeling they won’t be judged, even if the machine is ultimately powerless to do or say anything to really help. The ELIZA effect, the tendency of humans to anthropomorphize machines and think of them as human, was named for this computer program.
Weizenbaum himself, who later became deeply suspicious of the influence of computers and artificial intelligence in human life, was astonished that people were so willing to believe his script was human. He wrote, “I had not realized…that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
The ELIZA effect may have disturbed Weizenbaum, but it has intrigued and fascinated others for decades. Perhaps you’ve noticed it in yourself, when talking to an AI like Siri, Alexa, or Google Assistant—the occasional response can seem almost too real. Consciously, you know you’re talking to a big block of code stored somewhere out there in the ether. But subconsciously, you might feel like you’re interacting with a human.
Yet the ELIZA effect, as enticing as it is, has proved a source of frustration for people who are trying to create conversational machines. Natural language processing has advanced in leaps and bounds since the 1960s. Now you can find friendly chatbots like Mitsuku—which has frequently won the Loebner Prize, awarded to the machines that come closest to passing the Turing test—that aim to have a response to everything you might say.
In the commercial sphere, Facebook has opened up its Messenger program and provided software for people and companies to design their own chatbots. The idea is simple: why have an app for, say, ordering pizza when you can just chatter to a robot through your favorite messenger app and make the order in natural language, as if you were telling your friend to get it for you?
Startups like Semantic Machines hope their AI assistant will be able to interact with you just like a secretary or PA would, but with an unparalleled ability to retrieve information from the internet. They may soon be there.
But people who engineer chatbots—both in the social and commercial realm—encounter a common problem: the users, perhaps subconsciously, assume the chatbots are human and become disappointed when they’re not able to have a normal conversation. Frustration with miscommunication can often stem from raised initial expectations.
So far, no machine has really been able to crack the problem of context retention—understanding what’s been said before, referring back to it, and crafting responses based on the point the conversation has reached. Even Mitsuku will often struggle to remember the topic of conversation beyond a few lines of dialogue.
This is, of course, understandable. Conversation can be almost unimaginably complex. For everything you say, there could be hundreds of responses that would make sense. When you travel a layer deeper into the conversation, those factors multiply until—like possible games of Go or chess—you end up with vast numbers of potential conversations.
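To put a rough number on that explosion, here is a quick calculation; the branching factor is an arbitrary assumption chosen purely to illustrate the scale:

```python
# Rough illustration of how conversation paths multiply with depth.
# Assume roughly 100 sensible responses at every turn (an arbitrary figure).
branching_factor = 100

for depth in range(1, 6):
    paths = branching_factor ** depth
    print(f"{depth} turn(s) deep: {paths:,} possible conversations")

# Five turns deep already gives 10,000,000,000 distinct paths.
```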
But that hasn’t deterred people from trying. The most recent entrant is tech giant Amazon, which wants to make its AI voice assistant, Alexa, friendlier. The company has been running the Alexa Prize competition, which offers a cool $500,000 to the winning AI—and a bonus of a million dollars to any team that can create a ‘socialbot’ capable of sustaining a conversation with human users for 20 minutes on a variety of themes.
Topics Alexa likes to chat about include science and technology, politics, sports, and celebrity gossip. The finalists were recently announced: chatbots from universities in Prague, Edinburgh, and Seattle. Finalists were chosen according to the ratings from Alexa users, who could trigger the socialbots into conversation by saying “Hey Alexa, let’s chat,” although the reviews for the socialbots weren’t always complimentary.
By narrowing down the fields of conversation to a specific range of topics, the Alexa Prize has cleverly started to get around the problem of context—just as commercially available chatbots hope to do. It’s much easier to model an interaction that goes a few layers into the conversational topic if you’re limiting those topics to a specific field.
Developing a machine that can hold almost any conversation with a human interlocutor convincingly might be difficult. It might even be a problem that requires artificial general intelligence to truly solve, rather than the previously employed approaches of scripted answers or neural networks that associate inputs with responses.
But a machine that can have meaningful interactions that people might value and enjoy could be just around the corner. The Alexa Prize winner will be announced in November. The ELIZA effect might mean we will relate to machines sooner than we’d thought.
So, go well, little socialbots. If you ever want to discuss the weather or what the world will be like once you guys take over, I’ll be around. Just don’t start a therapy session.
Image Credit: Shutterstock
#431155 What It Will Take for Quantum Computers ...
Quantum computers could give the machine learning algorithms at the heart of modern artificial intelligence a dramatic speed up, but how far off are we? An international group of researchers has outlined the barriers that still need to be overcome.
This year has seen a surge of interest in quantum computing, driven in part by Google’s announcement that it will demonstrate “quantum supremacy” by the end of 2017. That means solving a problem beyond the capabilities of normal computers, which the company predicts will take 49 qubits—the quantum computing equivalent of bits.
As impressive as such a feat would be, the demonstration is likely to be on an esoteric problem that stacks the odds heavily in the quantum processor’s favor, and getting quantum computers to carry out practically useful calculations will take a lot more work.
But these devices hold great promise for solving problems in fields as diverse as cryptography and weather forecasting. One application people are particularly excited about is whether they could be used to supercharge the machine learning algorithms already transforming the modern world.
The potential is summarized in a recent review paper in the journal Nature written by a group of experts from the emerging field of quantum machine learning.
“Classical machine learning methods such as deep neural networks frequently have the feature that they can both recognize statistical patterns in data and produce data that possess the same statistical patterns: they recognize the patterns that they produce,” they write.
“This observation suggests the following hope. If small quantum information processors can produce statistical patterns that are computationally difficult for a classical computer to produce, then perhaps they can also recognize patterns that are equally difficult to recognize classically.”
Because of the way quantum computers work—taking advantage of strange quantum mechanical effects like entanglement and superposition—algorithms running on them should in principle be able to solve problems much faster than the best known classical algorithms, a phenomenon known as quantum speedup.
Designing these algorithms is tricky work, but the authors of the review note that there has been significant progress in recent years. They highlight multiple quantum algorithms exhibiting quantum speedup that could act as subroutines, or building blocks, for quantum machine learning programs.
We still don’t have the hardware to implement these algorithms, but according to the researchers the challenge is a technical one, and clear paths to overcoming it exist. More challenging, they say, are four fundamental conceptual problems that could limit the applicability of quantum machine learning.
The first two are the input and output problems. Quantum computers, unsurprisingly, deal with quantum data, but the majority of the problems humans want to solve relate to the classical world. Translating significant amounts of classical data into the quantum systems can take so much time it can cancel out the benefits of the faster processing speeds, and the same is true of reading out the solution at the end.
The input problem could be mitigated to some extent by the development of quantum random access memory (qRAM)—the equivalent to RAM in a conventional computer used to provide the machine with quick access to its working memory. A qRAM can be configured to store classical data but allow the quantum computers to access all that information simultaneously as a superposition, which is required for a variety of quantum algorithms. But the authors note this is still a considerable engineering challenge and may not be sustainable for big data problems.
Closely related to the input/output problem is the costing problem. At present, the authors say very little is known about how many gates—or operations—a quantum machine learning algorithm will require to solve a given problem when operated on real-world devices. It’s expected that on highly complex problems they will offer considerable improvements over classical computers, but it’s not clear how big problems have to be before this becomes apparent.
Finally, whether or when these advantages kick in may be hard to prove, something the authors call the benchmarking problem. Claiming that a quantum algorithm can outperform any classical machine learning approach requires extensive testing against these other techniques, which may not be feasible.
They suggest that this could be sidestepped by lowering the standards quantum machine learning algorithms are currently held to. This makes sense, as it doesn’t really matter whether an algorithm is intrinsically faster than all possible classical ones, as long as it’s faster than all the existing ones.
Another way of avoiding some of these problems is to apply these techniques directly to quantum data, the actual states generated by quantum systems and processes. The authors say this is probably the most promising near-term application for quantum machine learning and has the added benefit that any insights can be fed back into the design of better hardware.
“This would enable a virtuous cycle of innovation similar to that which occurred in classical computing, wherein each generation of processors is then leveraged to design the next-generation processors,” they conclude.
Image Credit: archy13 / Shutterstock.com
#431081 How the Intelligent Home of the Future ...
As Dorothy famously said in The Wizard of Oz, there’s no place like home. Home is where we go to rest and recharge. It’s familiar, comfortable, and our own. We take care of our homes by cleaning and maintaining them, and fixing things that break or go wrong.
What if our homes, on top of giving us shelter, could also take care of us in return?
According to Chris Arkenberg, this could be the case in the not-so-distant future. As part of Singularity University’s Experts On Air series, Arkenberg gave a talk called “How the Intelligent Home of the Future Will Care for You.”
Arkenberg is a research and strategy lead at Orange Silicon Valley, and was previously a research fellow at the Deloitte Center for the Edge and a visiting researcher at the Institute for the Future.
Arkenberg told the audience that there’s an evolution going on: homes are going from being smart to being connected, and will ultimately become intelligent.
Market Trends
Intelligent home technologies are just now budding, but broader trends point to huge potential for their growth. We as consumers already expect continuous connectivity wherever we go—what do you mean my phone won’t get reception in the middle of Yosemite? What do you mean the smart TV is down and I can’t stream Game of Thrones?
As connectivity has evolved from a privilege to a basic expectation, Arkenberg said, we’re also starting to have a better sense of what it means to give up our data in exchange for services and conveniences. It’s so easy to click a few buttons on Amazon and have stuff show up at your front door a few days later—never mind that data about your purchases gets recorded and aggregated.
“Right now we have single devices that are connected,” Arkenberg said. “Companies are still trying to show what the true value is and how durable it is beyond the hype.”
Connectivity is the basis of an intelligent home. To take a dumb object and make it smart, you get it online. Belkin’s Wemo, for example, lets users control lights and appliances wirelessly and remotely, and can be paired with Amazon Echo or Google Home for voice-activated control.
Speaking of voice-activated control, Arkenberg pointed out that physical interfaces are evolving, too, to the point that we’re actually getting rid of interfaces entirely, or transitioning to ‘soft’ interfaces like voice or gesture.
Drivers of Change
Consumers are open to smart home tech and companies are working to provide it. But what are the drivers making this tech practical and affordable? Arkenberg said there are three big ones:
Computation: Computers have gotten exponentially more powerful over the past few decades. If it wasn’t for processors that could handle massive quantities of information, nothing resembling an Echo or Alexa would even be possible. Artificial intelligence and machine learning are powering these devices, and they hinge on computing power too.
Sensors: “There are more things connected now than there are people on the planet,” Arkenberg said. Market research firm Gartner estimates there are 8.4 billion connected things currently in use. Wherever digital can replace hardware, it’s doing so. Cheaper sensors mean we can connect more things, which can then connect to each other.
Data: “Data is the new oil,” Arkenberg said. “The top companies on the planet are all data-driven giants. If data is your business, though, then you need to keep finding new ways to get more and more data.” Home assistants are essentially data collection systems that sit in your living room and collect data about your life. That data in turn sets up the potential of machine learning.
Colonizing the Living Room
Alexa and Echo can turn lights on and off, and Nest can help you be energy-efficient. But beyond these, what does an intelligent home really look like?
Arkenberg’s vision of an intelligent home uses sensing, data, connectivity, and modeling to manage resource efficiency, security, productivity, and wellness.
Autonomous vehicles provide an interesting comparison: they’re surrounded by sensors that constantly map the world, building dynamic models that let them understand the change around them and predict what comes next. Might we want this to become a model for our homes, too? By making them smart and connecting them, Arkenberg said, they’d become “more biological.”
There are already several products on the market that fit this description. RainMachine uses weather forecasts to adjust home landscape watering schedules. Neurio monitors energy usage, identifies areas where waste is happening, and makes recommendations for improvement.
These are small steps in connecting our homes with knowledge systems and giving them the ability to understand and act on that knowledge.
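As a rough sketch of the kind of logic a forecast-aware controller like RainMachine might apply, consider the snippet below; the thresholds and inputs are illustrative assumptions, not the product’s actual algorithm:

```python
def adjust_watering(base_minutes: float, rain_probability: float,
                    expected_rain_mm: float, high_temp_c: float) -> float:
    """Scale a sprinkler run time using a simple weather-forecast heuristic.

    The thresholds below are illustrative guesses, not RainMachine's real rules.
    """
    # Skip watering entirely if meaningful rain looks likely.
    if rain_probability > 0.6 and expected_rain_mm > 5:
        return 0.0

    minutes = base_minutes
    # Water a little more on hot days, a little less on cool ones.
    if high_temp_c >= 32:
        minutes *= 1.25
    elif high_temp_c <= 15:
        minutes *= 0.75

    # Trim run time in proportion to any lighter rain in the forecast.
    minutes *= max(0.0, 1.0 - expected_rain_mm / 20)
    return round(minutes, 1)


# Example: a 20-minute zone on a warm day with a light shower forecast.
print(adjust_watering(base_minutes=20, rain_probability=0.3,
                      expected_rain_mm=2, high_temp_c=30))  # -> 18.0
```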
He sees the homes of the future being equipped with digital ears (in the form of home assistants, sensors, and monitoring devices) and digital eyes (in the form of facial recognition technology and machine vision to recognize who’s in the home). “These systems are increasingly able to interrogate emotions and understand how people are feeling,” he said. “When you push more of this active intelligence into things, the need for us to directly interface with them becomes less relevant.”
Could our homes use these same tools to benefit our health and wellness? FREDsense uses bacteria to create electrochemical sensors that can be applied to home water systems to detect contaminants. If that’s not personal enough for you, get a load of this: ClinicAI can be installed in your toilet bowl to monitor and evaluate your biowaste. What’s the point, you ask? Early detection of colon cancer and other diseases.
What if one day, your toilet’s biowaste analysis system could link up with your fridge, so that when you opened it, it would tell you what to eat, and how much, and at what time of day?
Roadblocks to Intelligence
“The connected and intelligent home is still a young category trying to establish value, but the technological requirements are now in place,” Arkenberg said. We’re already used to living in a world of ubiquitous computation and connectivity, and we have entrained expectations about things being connected. For the intelligent home to become a widespread reality, its value needs to be established and its challenges overcome.
One of the biggest challenges will be getting used to the idea of continuous surveillance. We’ll get convenience and functionality if we give up our data, but how far are we willing to go? “Establishing security and trust is going to be a big challenge moving forward,” Arkenberg said.
There are also questions of cost and reliability, of interoperability and fragmentation of devices, or, conversely, of what Arkenberg called ‘platform lock-on,’ where you’d end up relying on a single provider’s system and be unable to integrate devices from other brands.
Ultimately, Arkenberg sees homes being able to learn about us, manage our scheduling and transit, watch our moods and our preferences, and optimize our resource footprint while predicting and anticipating change.
“This is the really fascinating provocation of the intelligent home,” Arkenberg said. “And I think we’re going to start to see this play out over the next few years.”
Sounds like a home Dorothy wouldn’t recognize, in Kansas or anywhere else.
Stock Media provided by adam121 / Pond5
#431022 Robots and AI Will Take Over These 3 ...
We’re no stranger to robotics in the medical field. Robot-assisted surgery is becoming more and more common. Many training programs are starting to include robotic and virtual reality scenarios to provide hands-on training for students without putting patients at risk.
With all of these advances in medical robotics, three niches stand out above the rest: surgery, medical imaging, and drug discovery. How has robotics already begun to exert its influence on these practices, and how will it change them for good?
Robot-Assisted Surgery
Robot-assisted surgery was first documented in 1985, when it was used for a neurosurgical biopsy. This led to the use of robotics in a number of similar procedures, both laparoscopic and traditional. The FDA didn’t approve robotic surgery tools until 2000, when the da Vinci Surgical System hit the market.
The robot-assisted surgery market is expected to grow steadily into 2023 and potentially beyond. The only thing that might stand in the way of this growth is the cost of the equipment. The initial investment may prevent small practices from purchasing the necessary devices.
Medical Imaging
The key to successful medical imaging isn’t the equipment itself. It’s being able to interpret the information in the images. Medical images are some of the most information-dense pieces of data in the medical field and can reveal so much more than a basic visual inspection can.
Robotics and, more specifically, artificial intelligence programs like IBM Watson can help interpret these images more efficiently and accurately. By allowing an AI or basic machine learning program to study the medical images, researchers can find patterns and make more accurate diagnoses than ever before.
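The workflow behind that pattern-finding is ordinary supervised learning: feed an algorithm many labeled images and let it learn which features predict which label. Below is a deliberately toy illustration using scikit-learn’s small handwritten-digit images; real diagnostic systems rely on far larger models and datasets, but the basic loop of train, predict, and evaluate is the same:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Small 8x8 grayscale digit images stand in for "medical images" here;
# this is a toy stand-in, not a clinical workflow.
images, labels = load_digits(return_X_y=True)

X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.25, random_state=0)

# Fit a simple classifier that learns pixel patterns associated with each label.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Check how well the learned patterns generalize to images it has never seen.
predictions = model.predict(X_test)
print(f"Held-out accuracy: {accuracy_score(y_test, predictions):.2f}")
```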
Drug Discovery
Drug discovery is a long and often tedious process that includes years of testing and assessment. Artificial intelligence, machine learning, and predictive algorithms could help speed up this process.
Imagine if researchers could input the kind of medicine they’re trying to make and the kind of symptoms they’re trying to treat into a computer and let it do the rest. With robotics, that may someday be possible.
This isn’t a perfect solution yet—these systems require massive amounts of data before they can start making decisions or predictions. By feeding data into the cloud where these programs can access it, researchers can take the first steps towards setting up a functional database.
Another benefit of these AI programs is that they might see connections humans would never have thought of. People can make those leaps, but the chances are much lower, and it takes much longer if it happens at all. Simply put, we’re not capable of processing the sheer amount of data that computers can process.
This isn’t a field where we’re worrying about robots stealing jobs.
Quite the opposite, in fact—we want robots to become commonly-used tools that can help improve patient care and surgical outcomes.
A human surgeon might have intuition, but they’ll never have the steadiness that a pair of robotic hands can provide or the data-processing capabilities of a machine learning algorithm. If we let them, these tools could change the way we look at medicine.
Image Credit: Intuitive Surgical
#430868 These 7 Forces Are Changing the World at ...
It was the Greek philosopher Heraclitus who first said, “The only thing that is constant is change.”
He was onto something. But even he would likely be left speechless at the scale and pace of change the world has experienced in the past 100 years—not to mention the past 10.
Since 1917, the global population has gone from 1.9 billion people to 7.5 billion. Life expectancy has more than doubled in many developing countries and risen significantly in developed countries. In 1917 only eight percent of homes had phones—in the form of landline telephones—while today more than seven in 10 Americans own a smartphone—aka, a supercomputer that fits in their pockets.
And things aren’t going to slow down anytime soon. In a talk at Singularity University’s Global Summit this week in San Francisco, SU cofounder and chairman Peter Diamandis told the audience, “Tomorrow’s speed of change will make today look like we’re crawling.” He then shared his point of view about some of the most important factors driving this accelerating change.
Peter Diamandis at Singularity University’s Global Summit in San Francisco.
Computation
In 1965, Gordon Moore (cofounder of Intel) predicted computer chips would double in power and halve in cost every 18 to 24 months. What became known as Moore’s Law turned out to be accurate, and today affordable computer chips contain a billion or more transistors spaced just nanometers apart.
That means computers can do exponentially more calculations per second than they could thirty, twenty, or ten years ago—and at a dramatically lower cost. This in turn means we can generate a lot more information, and use computers for all kinds of applications they wouldn’t have been able to handle in the past (like diagnosing rare forms of cancer, for example).
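A quick back-of-the-envelope calculation shows how that compounding adds up; a 24-month doubling period is assumed here purely for simplicity:

```python
# How many doublings fit into a span of years, assuming one every 24 months?
def growth_factor(years: float, months_per_doubling: float = 24) -> float:
    doublings = (years * 12) / months_per_doubling
    return 2 ** doublings

for years in (10, 20, 30):
    print(f"{years} years: ~{growth_factor(years):,.0f}x the transistors per chip")

# 10 years: ~32x, 20 years: ~1,024x, 30 years: ~32,768x
```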
Convergence
Increased computing power is the basis for a myriad of technological advances, which themselves are converging in ways we couldn’t have imagined a couple decades ago. As new technologies advance, the interactions between various subsets of those technologies create new opportunities that accelerate the pace of change much more than any single technology can on its own.
A breakthrough in biotechnology, for example, might spring from a crucial development in artificial intelligence. An advance in solar energy could come about by applying concepts from nanotechnology.
Interface Moments
Technology is becoming more accessible even to the most non-techy among us. The internet was once the domain of scientists and coders, but these days anyone can make their own web page, and browsers make those pages easily searchable. Now, interfaces are opening up areas like robotics or 3D printing.
As Diamandis put it, “You don’t need to know how to code to 3D print an attachment for your phone. We’re going from mind to materialization, from intentionality to implication.”
Artificial intelligence is what Diamandis calls “the ultimate interface moment,” enabling everyone who can speak their mind to connect and leverage exponential technologies.
Connectivity
Today there are about three billion people around the world connected to the internet—that’s up from 1.8 billion in 2010. But projections show that by 2025 there will be eight billion people connected. This is thanks to a race between tech billionaires to wrap the Earth in internet; Elon Musk’s SpaceX has plans to launch a network of 4,425 satellites to get the job done, while Google’s Project Loon is using giant polyethylene balloons for the task.
These projects will enable five billion new minds to come online, and those minds will have access to exponential technologies via interface moments.
Sensors
Diamandis predicts that after we establish a 5G network with speeds of 10–100 Gbps, a proliferation of sensors will follow, to the point that there’ll be around 100,000 sensors per city block. These sensors will be equipped with the most advanced AI, and the combination of these two will yield an incredible amount of knowledge.
“By 2030 we’re heading towards 100 trillion sensors,” Diamandis said. “We’re heading towards a world in which we’re going to be able to know anything we want, anywhere we want, anytime we want.” He added that tens of thousands of drones will hover over every major city.
Intelligence
“If you think there’s an arms race going on for AI, there’s also one for HI—human intelligence,” Diamandis said. He explained that if a genius was born in a remote village 100 years ago, he or she would likely not have been able to gain access to the resources needed to put his or her gifts to widely productive use. But that’s about to change.
Private companies as well as military programs are working on brain-machine interfaces, with the ultimate aim of uploading the human mind. The focus in the future will be on increasing intelligence of individuals as well as companies and even countries.
Wealth Concentration
A final crucial factor driving mass acceleration is the increase in wealth concentration. “We’re living in a time when there’s more wealth in the hands of private individuals, and they’re willing to take bigger risks than ever before,” Diamandis said. Billionaires like Mark Zuckerberg, Jeff Bezos, Elon Musk, and Bill Gates are putting millions of dollars towards philanthropic causes that will benefit not only themselves, but humanity at large.
What It All Means
One of the biggest implications of the rate at which the world is changing, Diamandis said, is that the cost of everything is trending towards zero. We are heading towards abundance, and the evidence lies in the reduction of extreme poverty we’ve already seen and will continue to see at an even more rapid rate.
Listening to Diamandis, it’s hard not to find his optimism contagious.
“The world is becoming better at an extraordinary rate,” he said, pointing out the rises in literacy, democracy, vaccinations, and life expectancy, and the concurrent decreases in child mortality, birth rate, and poverty.
“We’re alive during a pivotal time in human history,” he concluded. “There is nothing we don’t have access to.”
Stock Media provided by seanpavonephoto / Pond5