Tag Archives: mathematics
#437982 Superintelligent AI May Be Impossible to ...
It may be theoretically impossible for humans to control a superintelligent AI, a new study finds. Worse still, the research also quashes any hope for detecting such an unstoppable AI when it’s on the verge of being created.
Slightly less grim is the timetable. By at least one estimate, many decades lie ahead before any such existential computational reckoning could be in the cards for humanity.
Alongside news of AI besting humans at games such as chess, Go and Jeopardy have come fears that superintelligent machines smarter than the best human minds might one day run amok. “The question about whether superintelligence could be controlled if created is quite old,” says study lead author Manuel Alfonseca, a computer scientist at the Autonomous University of Madrid. “It goes back at least to Asimov’s First Law of Robotics, in the 1940s.”
The Three Laws of Robotics, first introduced in Isaac Asimov's 1942 short story “Runaround,” are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
In 2014, philosopher Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, not only explored ways in which a superintelligent AI could destroy us but also investigated potential control strategies for such a machine—and the reasons they might not work.
Bostrom outlined two possible types of solutions of this “control problem.” One is to control what the AI can do, such as keeping it from connecting to the Internet, and the other is to control what it wants to do, such as teaching it rules and values so it would act in the best interests of humanity. The problem with the former is that Bostrom thought a supersmart machine could probably break free from any bonds we could make. With the latter, he essentially feared that humans might not be smart enough to train a superintelligent AI.
Now Alfonseca and his colleagues suggest it may be impossible to control a superintelligent AI, due to fundamental limits inherent to computing itself. They detailed their findings this month in the Journal of Artificial Intelligence Research.
The researchers argued that any algorithm seeking to ensure a superintelligent AI cannot harm people must first simulate the machine’s behavior to predict the potential consequences of its actions. This containment algorithm would then need to halt the supersmart machine if it might indeed do harm.
However, the scientists found that no containment algorithm can simulate the AI’s behavior and predict with absolute certainty whether its actions will lead to harm. The algorithm could fail to correctly simulate the AI’s behavior, or to accurately predict the consequences of the AI’s actions, without ever recognizing that it had failed.
“Asimov’s first law of robotics has been proved to be incomputable,” Alfonseca says, “and therefore unfeasible.”
We may not even know if we have created a superintelligent machine, the researchers say. This is a consequence of Rice’s theorem, which essentially states that one cannot in general figure anything out about what a computer program might output just by looking at the program, Alfonseca explains.
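To see why such a containment check runs into fundamental limits of computation, consider a minimal sketch of the argument in code. All names here are hypothetical and invented for illustration, not the paper's actual proof: the point is simply that a perfect harm predictor could be turned into a solver for the halting problem, which Turing showed cannot exist.

```python
# A minimal sketch of the impossibility argument (all names hypothetical).
# Suppose a perfect containment routine existed: for ANY program and input,
# `would_cause_harm` always answers correctly and always finishes.

def do_something_harmful():
    """Placeholder for an action the containment routine must flag."""
    pass

def would_cause_harm(program, data):
    """Assumed perfect harm predictor: this is the thing that cannot exist."""
    ...

def halts(program, data):
    """With that oracle, we could decide the halting problem, a contradiction."""
    def wrapped(_ignored):
        program(data)            # loops forever if program(data) never halts
        do_something_harmful()   # reached only if program(data) halts

    # `wrapped` is harmful exactly when `program(data)` halts, so a correct
    # `would_cause_harm` would also solve the halting problem, which Turing
    # proved is undecidable. Hence no such containment routine can exist.
    return would_cause_harm(wrapped, None)
```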
On the other hand, there’s no need to spruce up the guest room for our future robot overlords quite yet. Three important caveats to the research leave plenty of uncertainty around the group’s conclusions.
First, Alfonseca estimates that AI’s moment of truth remains “at least two centuries in the future.”
Second, he says researchers do not know if so-called artificial general intelligence, also known as strong AI, is theoretically even feasible. “That is, a machine as intelligent as we are in an ample variety of fields,” Alfonseca explains.
Last, Alfonseca says, “We have not proved that superintelligences can never be controlled—only that they can’t always be controlled.”
Although it may not be possible to control a superintelligent artificial general intelligence, it should be possible to control a superintelligent narrow AI—one specialized for certain functions instead of being capable of a broad range of tasks like humans. “We already have superintelligences of this type,” Alfonseca says. “For instance, we have machines that can compute mathematics much faster than we can. This is [narrow] superintelligence, isn’t it?”
#437964 How Explainable Artificial Intelligence ...
The field of artificial intelligence has created computers that can drive cars, synthesize chemical compounds, fold proteins, and detect high-energy particles at a superhuman level.
However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation.
Therefore, AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. If we can do this, I believe that AI will be able to uncover and teach people new facts about the world that have not yet been discovered, leading to new innovations.
Learning From Experience
One field of AI, called reinforcement learning, studies how computers can learn from their own experiences. In reinforcement learning, an AI explores the world, receiving positive or negative feedback based on its actions.
This approach has led to algorithms that have independently learned to play chess at a superhuman level and prove mathematical theorems without any human guidance. In my work as an AI researcher, I use reinforcement learning to create AI algorithms that learn how to solve puzzles such as the Rubik’s Cube.
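To make that feedback loop concrete, here is a minimal tabular Q-learning sketch on an invented toy problem: an agent on a short line of cells learns, from reward alone, to walk toward a goal. The environment, rewards, and hyperparameters are illustrative assumptions; systems that learn chess or the Rubik’s Cube operate over vastly larger state spaces and typically use neural networks rather than a lookup table.

```python
import random
from collections import defaultdict

# Toy reinforcement learning example (invented for illustration): an agent on
# a line of cells learns, purely from positive and negative feedback, that
# walking right toward the goal is the best policy.

GOAL, ACTIONS = 5, (-1, +1)            # goal cell and the two possible moves
q = defaultdict(float)                 # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != GOAL:
        # Explore occasionally, otherwise take the action that looks best.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = max(0, min(GOAL, state + action))
        reward = 1.0 if next_state == GOAL else -0.01   # the feedback signal
        # Q-learning update: nudge the estimate toward reward + future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the greedy policy should be "move right" (+1) in every cell.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)])
```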
Through reinforcement learning, AIs are independently learning to solve problems that even humans struggle to figure out. This has got me and many other researchers thinking less about what AI can learn and more about what humans can learn from AI. A computer that can solve the Rubik’s Cube should be able to teach people how to solve it, too.
Peering Into the Black Box
Unfortunately, the minds of superhuman AIs are currently out of reach to us humans. AIs make terrible teachers and are what we in the computer science world call “black boxes.”
An AI simply spits out solutions without giving reasons for them. Computer scientists have been trying for decades to open this black box, and recent research has shown that many AI algorithms actually do think in ways that are similar to humans. For example, a computer trained to recognize animals will learn about different types of eyes and ears and will put this information together to correctly identify the animal.
The effort to open up the black box is called explainable AI. My research group at the AI Institute at the University of South Carolina is interested in developing explainable AI. To accomplish this, we work heavily with the Rubik’s Cube.
The Rubik’s Cube is basically a pathfinding problem: Find a path from point A—a scrambled Rubik’s Cube—to point B—a solved Rubik’s Cube. Other pathfinding problems include navigation, theorem proving and chemical synthesis.
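To make the point-A-to-point-B framing concrete, here is a minimal breadth-first search over a tiny invented stand-in puzzle: a scrambled string whose legal moves are swaps of adjacent letters. A real Rubik’s Cube solver searches in the same spirit, just over an astronomically larger space, and in practice relies on learned heuristics to guide the search rather than exhaustive enumeration.

```python
from collections import deque

# Tiny illustration of pathfinding as "scrambled state -> solved state".
# The "puzzle" is an invented stand-in: a scrambled string whose moves are
# swaps of adjacent letters.

def neighbors(state):
    """All states reachable by one move (swapping two adjacent letters)."""
    for i in range(len(state) - 1):
        s = list(state)
        s[i], s[i + 1] = s[i + 1], s[i]
        yield "".join(s)

def solve(start, goal):
    """Breadth-first search: returns a shortest sequence of states."""
    queue, parent = deque([start]), {start: None}
    while queue:
        state = queue.popleft()
        if state == goal:
            path = []
            while state is not None:          # walk back to reconstruct path
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in parent:             # visit each state only once
                parent[nxt] = state
                queue.append(nxt)
    return None

print(solve("CBAD", "ABCD"))   # e.g. ['CBAD', 'BCAD', 'BACD', 'ABCD']
```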
My lab has set up a website where anyone can see how our AI algorithm solves the Rubik’s Cube; however, a person would be hard-pressed to learn how to solve the cube from this website. This is because the computer cannot tell you the logic behind its solutions.
Solutions to the Rubik’s Cube can be broken down into a few generalized steps—the first step, for example, could be to form a cross while the second step could be to put the corner pieces in place. While the Rubik’s Cube itself has over 10 to the 19th power possible combinations, a generalized step-by-step guide is very easy to remember and is applicable in many different scenarios.
Approaching a problem by breaking it down into steps is often the default manner in which people explain things to one another. The Rubik’s Cube naturally fits into this step-by-step framework, which gives us the opportunity to open the black box of our algorithm more easily. Creating AI algorithms that have this ability could allow people to collaborate with AI and break down a wide variety of complex problems into easy-to-understand steps.
A step-by-step refinement approach can make it easier for humans to understand why AIs do the things they do. Image Credit: Forest Agostinelli, CC BY-ND
Collaboration Leads to Innovation
Our process starts with using one’s own intuition to define a step-by-step plan thought to potentially solve a complex problem. The algorithm then looks at each individual step and gives feedback about which steps are possible, which are impossible and ways the plan could be improved. The human then refines the initial plan using the advice from the AI, and the process repeats until the problem is solved. The hope is that the person and the AI will eventually converge to a kind of mutual understanding.
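The loop just described can be sketched roughly as follows. The names `evaluate_step` and `revise_plan` are placeholders standing in for components this article does not spell out; this is not the team's actual interface.

```python
# A rough sketch of the human-AI refinement loop described above.
# `evaluate_step` and `revise_plan` are placeholders, not the team's real API.

def refine_until_solved(initial_plan, evaluate_step, revise_plan, max_rounds=10):
    """Alternate between AI feedback on each step and human revision."""
    plan = initial_plan
    for _ in range(max_rounds):
        # The AI checks every step: is it achievable, and how could it improve?
        feedback = [evaluate_step(step) for step in plan]
        if all(f["feasible"] for f in feedback):
            return plan                      # every step checks out: done
        plan = revise_plan(plan, feedback)   # the human refines using the advice
    return None                              # no agreement within the budget
```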
Currently, our algorithm is able to consider a human plan for solving the Rubik’s Cube, suggest improvements to the plan, recognize plans that do not work and find alternatives that do. In doing so, it gives feedback that leads to a step-by-step plan for solving the Rubik’s Cube that a person can understand. Our team’s next step is to build an intuitive interface that will allow our algorithm to teach people how to solve the Rubik’s Cube. Our hope is to generalize this approach to a wide range of pathfinding problems.
People are intuitive in a way unmatched by any AI, but machines are far better in their computational power and algorithmic rigor. This back and forth between man and machine draws on the strengths of both. I believe this type of collaboration will shed light on previously unsolved problems in everything from chemistry to mathematics, leading to new solutions, intuitions and innovations that may otherwise have been out of reach.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Serg Antonov / Unsplash