Tag Archives: fail
#433758 DeepMind’s New Research Plan to Make ...
Making sure artificial intelligence does what we want and behaves in predictable ways will be crucial as the technology becomes increasingly ubiquitous. It’s an area frequently neglected in the race to develop products, but DeepMind has now outlined its research agenda to tackle the problem.
AI safety, as the field is known, has been gaining prominence in recent years. That’s probably at least partly down to the overzealous warnings of a coming AI apocalypse from well-meaning but underqualified pundits like Elon Musk and Stephen Hawking. But it’s also recognition of the fact that AI technology is quickly pervading all aspects of our lives, making decisions on everything from what movies we watch to whether we get a mortgage.
That’s why, back in 2016, DeepMind hired a bevy of researchers who specialize in foreseeing the unforeseen consequences of the way we build AI. And now the team has spelled out the three key domains they think require research if we’re going to build autonomous machines that do what we want.
In a new blog designed to provide updates on the team’s work, they introduce the ideas of specification, robustness, and assurance, which they say will act as the cornerstones of their future research. Specification involves making sure AI systems do what their operator intends; robustness means a system can cope with changes to its environment and attempts to throw it off course; and assurance involves our ability to understand what systems are doing and how to control them.
A classic thought experiment about how we could lose control of an AI system helps illustrate the problem of specification. Philosopher Nick Bostrom posited a hypothetical machine charged with making as many paperclips as possible. Because its creators fail to add what they might assume are obvious additional goals, like not harming people, the AI wipes out humanity (so that it can’t be switched off) before turning all matter in the universe into paperclips.
Obviously the example is extreme, but it shows how a poorly-specified goal can lead to unexpected and disastrous outcomes. Properly codifying the desires of the designer is no easy feat, though; there is often no neat way to encode both the explicit and implicit goals in a form the machine can understand without leaving room for ambiguity, so we often rely on incomplete approximations.
The researchers note recent research by OpenAI in which an AI was trained to play a boat-racing game called CoastRunners. The game rewards players for hitting targets laid out along the race route. The AI worked out that it could get a higher score by repeatedly knocking over regenerating targets rather than actually completing the course. The blog post includes a link to a spreadsheet detailing scores of such examples.
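To make the failure mode concrete, here is a toy sketch (illustrative only, not OpenAI’s actual environment or code) of how a proxy reward can diverge from the designer’s intent:

```python
# Toy illustration of reward misspecification: the proxy reward the
# designer wrote ("points per target hit") diverges from the intended
# goal ("finish the race"), so a score-maximizing agent loops forever.

def proxy_reward(actions):
    """The reward actually implemented: 10 points per target hit."""
    return sum(10 for a in actions if a == "hit_target")

def intended_reward(actions):
    """What the designer really wanted: complete the course."""
    return 100 if "cross_finish_line" in actions else 0

looping_agent = ["hit_target"] * 50  # circles the regenerating targets
racing_agent = ["hit_target"] * 3 + ["cross_finish_line"]

print(proxy_reward(looping_agent), intended_reward(looping_agent))  # 500 0
print(proxy_reward(racing_agent), intended_reward(racing_agent))    # 30 100
```

An optimizer pointed at `proxy_reward` will happily choose the looping strategy; nothing in the specification tells it not to.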
Another key concern for AI designers is making their creations robust to the unpredictability of the real world. Despite their superhuman abilities on certain tasks, most cutting-edge AI systems are remarkably brittle. They tend to be trained on highly-curated datasets and so can fail when faced with unfamiliar input. This can happen by accident or by design—researchers have come up with numerous ways to trick image recognition algorithms into misclassifying things, in one case fooling a system into thinking a 3D-printed tortoise was actually a rifle.
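To give a sense of how simple such attacks can be, here is a minimal sketch of the well-known fast gradient sign method (FGSM), assuming a PyTorch image classifier with inputs normalized to [0, 1]; the model and data are placeholders, not any specific system mentioned above:

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, labels, eps=0.03):
    """Nudge every pixel by eps in the direction that most increases
    the classification loss; tiny perturbations can flip predictions."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), labels)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

# Hypothetical usage with any classifier `model` and batch (x, y):
#   x_adv = fgsm_attack(model, x, y)
#   model(x_adv).argmax(dim=1) often disagrees with y, even though
#   x_adv is visually indistinguishable from x.
```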
Building systems that can deal with every possible encounter may not be feasible, so a big part of making AIs more robust may be getting them to avoid risks and ensuring they can recover from errors, or that they have failsafes to ensure errors don’t lead to catastrophic failure.
And finally, we need to have ways to make sure we can tell whether an AI is performing the way we expect it to. A key part of assurance is being able to effectively monitor systems and interpret what they’re doing—if we’re basing medical treatments or sentencing decisions on the output of an AI, we’d like to see the reasoning. That’s a major outstanding problem for popular deep learning approaches, which are largely indecipherable black boxes.
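There are partial tools for peeking inside the box. One of the simplest is gradient saliency: asking which inputs most influenced a given decision. A rough sketch, again assuming a PyTorch classifier (this is a probe, not a solution to interpretability):

```python
import torch

def saliency_map(model, x, target_class):
    """Magnitude of d(class score)/d(input pixel); large values mark
    the pixels that most influenced the prediction."""
    x = x.clone().detach().requires_grad_(True)  # x shape: (1, C, H, W)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().squeeze(0)
```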
The other half of assurance is the ability to intervene if a machine isn’t behaving the way we’d like. But designing a reliable off switch is tough, because most learning systems have a strong incentive to prevent anyone from interfering with their goals.
The authors don’t pretend to have all the answers, but they hope the framework they’ve come up with can help guide others working on AI safety. While it may be some time before AI is truly in a position to do us harm, hopefully early efforts like these will mean it’s built on a solid foundation that ensures it is aligned with our goals.
Image Credit: cono0430 / Shutterstock.com
#433620 Instilling the Best of Human Values in ...
Now that the era of artificial intelligence is unquestionably upon us, it behooves us to think and work harder to ensure that the AIs we create embody positive human values.
Science fiction is full of AIs that manifest the dark side of humanity, or are indifferent to humans altogether. Such possibilities cannot be ruled out, but nor is there any logical or empirical reason to consider them highly likely. I am among a large group of AI experts who see a strong potential for profoundly positive outcomes in the AI revolution currently underway.
We are facing a future of great uncertainty and tremendous promise, and the best we can do is to confront it with a combination of heart and mind, of common sense and rigorous science. In the realm of AI, this means we need to do our best to guide the AI minds we are creating to embody the values we cherish: love, compassion, creativity, and respect.
The quest for beneficial AI has many dimensions, including its potential to reduce material scarcity and to help unlock the human capacity for love and compassion.
Reducing Scarcity
A large percentage of difficult issues in human society, many of which spill over into the AI domain, would be palliated significantly if material scarcity became less of a problem. Fortunately, AI has great potential to help here. AI is already increasing efficiency in nearly every industry.
In the next few decades, as nanotech and 3D printing continue to advance, AI-driven design will become a larger factor in the economy. Radical new tools like artificial enzymes built using Christian Schafmeister’s spiroligomer molecules, and designed using quantum physics-savvy AIs, will enable the creation of new materials and medicines.
For amazing advances like the intersection of AI and nanotech to lead toward broadly positive outcomes, however, the economic and political aspects of the AI industry may have to shift from the current status quo.
Currently, most AI development occurs under the aegis of military organizations or large corporations oriented heavily toward advertising and marketing. Put crudely, an awful lot of AI today is about “spying, brainwashing, or killing.” This is not really the ideal situation if we want our first true artificial general intelligences to be open-minded, warm-hearted, and beneficial.
Also, as the bulk of AI development now occurs in large for-profit organizations bound by law to pursue the maximization of shareholder value, we face a situation where AI tends to exacerbate global wealth inequality and class divisions. This has the potential to lead to various civilization-scale failure modes involving the intersection of geopolitics, AI, cyberterrorism, and so forth. Part of my motivation for founding the decentralized AI project SingularityNET was to create an alternative mode of dissemination and utilization of both narrow AI and AGI—one that operates in a self-organizing way, outside of the direct grip of conventional corporate and governmental structures.
In the end, though, I worry that radical material abundance and novel political and economic structures may fail to create a positive future, unless they are coupled with advances in consciousness and compassion. AGIs have the potential to be massively more ethical and compassionate than humans. But still, the odds of getting deeply beneficial AGIs seem higher if the humans creating them are fuller of compassion and positive consciousness—and can effectively pass these values on.
Transmitting Human Values
Brain-computer interfacing is another critical aspect of the quest for creating more positive AIs and more positive humans. As Elon Musk has put it, “If you can’t beat ’em, join ’em.” Joining is more fun than beating anyway. What better way to infuse AIs with human values than to connect them directly to human brains, and let them learn directly from the source (while providing humans with valuable enhancements)?
Millions of people recently heard Elon Musk discuss AI and BCI on the Joe Rogan podcast. Musk’s embrace of brain-computer interfacing is laudable, but he tends to dodge some of the tough issues—for instance, he does not emphasize the trade-off cyborgs will face between retaining human-ness and maximizing intelligence, joy, and creativity. To make this trade-off effectively, the AI portion of the cyborg will need to have a deep sense of human values.
Musk calls humanity the “biological boot loader” for AGI, but to me this colorful metaphor misses a key point—that we can seed the AGI we create with our values as an initial condition. This is one reason why it’s important that the first really powerful AGIs are created by decentralized networks, and not conventional corporate or military organizations. The decentralized software/hardware ecosystem, for all its quirks and flaws, has more potential to lead to human-computer cybernetic collective minds that are reasonable and benevolent.
Algorithmic Love
BCI is still in its infancy, but a more immediate way of connecting people with AIs to infuse both with greater love and compassion is to leverage humanoid robotics technology. Toward this end, I conceived a project called Loving AI, focused on using highly expressive humanoid robots like the Hanson robot Sophia to lead people through meditations and other exercises oriented toward unlocking the human potential for love and compassion. My goals here were to explore the potential of AI and robots to have a positive impact on human consciousness, and to use this application to study and improve the OpenCog and SingularityNET tools used to control Sophia in these interactions.
The Loving AI project has now run two small sets of human trials, both with exciting and positive results. These have been small—dozens rather than hundreds of people—but have definitively proven the point. Put a person in a quiet room with a humanoid robot that can look them in the eye, mirror their facial expressions, recognize some of their emotions, and lead them through simple meditation, listening, and consciousness-oriented exercises…and quite a lot of the time, the result is a more relaxed person who has entered into a shifted state of consciousness, at least for a period of time.
In a certain percentage of cases, the interaction with the robot consciousness guide triggered a dramatic change of consciousness in the human subject—a deep meditative trance state, for instance. In most cases, the result was not so extreme, but statistically the positive effect was quite significant across all cases. Furthermore, a similar effect was found using an avatar simulation of the robot’s face on a tablet screen (together with a webcam for facial expression mirroring and recognition), but not with a purely auditory interaction.
The Loving AI experiments are not only about AI; they are about human-robot and human-avatar interaction, with AI as one significant aspect. The facial interaction with the robot or avatar is pushing “biological buttons” that trigger emotional reactions and prime the mind for changes of consciousness. However, this sort of body-mind interaction is arguably critical to human values and what it means to be human; it’s an important thing for robots and AIs to “get.”
Halting or pausing the advance of AI is not a viable possibility at this stage. Despite the risks, the potential economic and political benefits involved are clear and massive. The convergence of narrow AI toward AGI is also a near inevitability, because there are so many important applications where greater generality of intelligence will lead to greater practical functionality. The challenge is to make the outcome of this great civilization-level adventure as positive as possible.
Image Credit: Anton Gvozdikov / Shutterstock.com
#433506 MIT’s New Robot Taught Itself to Pick ...
Back in 2016, somewhere in a Google-owned warehouse, more than a dozen robotic arms spent hours on end quietly grasping objects of various shapes and sizes, teaching themselves how to pick up and hold the items appropriately—mimicking the way a baby gradually learns to use its hands.
Now, scientists from MIT have made a new breakthrough in machine learning: their new system can not only teach itself to see and identify objects, but also understand how best to manipulate them.
This means that, armed with the new machine learning system known as “dense object nets” (DON), the robot is capable of picking up an object it has never seen before, or one in an unfamiliar orientation, without resorting to trial and error—exactly as a human would.
The deceptively simple ability to dexterously manipulate objects with our hands is a huge part of why humans are the dominant species on the planet. We take it for granted. Hardware innovations like the Shadow Dexterous Hand have enabled robots to softly grip and manipulate delicate objects for many years, but the software required to control these precision-engineered machines in a range of circumstances has proved harder to develop.
This was not for want of trying. The Amazon Robotics Challenge offers millions of dollars in prizes (and potentially far more in contracts, as Amazon’s $775m acquisition of Kiva Systems shows) for the best dexterous robot able to pick and package items in its warehouses. The lucrative dream of a fully-automated delivery system is missing this crucial ability.
Meanwhile, the RoboCup@Home challenge—an offshoot of the popular RoboCup tournament for soccer-playing robots—aims to make everyone’s dream of having a robot butler a reality. The competition involves teams drilling their robots through simple household tasks that require social interaction or object manipulation, like helping to carry the shopping, sorting items onto a shelf, or guiding tourists around a museum.
Yet all of these endeavors have proved difficult; the tasks often have to be simplified to enable the robot to complete them at all. New or unexpected elements, such as those encountered in real life, more often than not throw the system entirely. Programming the robot’s every move in explicit detail is not a scalable solution: this can work in the highly-controlled world of the assembly line, but not in everyday life.
Computer vision is improving all the time. Neural networks, including those you train every time you prove that you’re not a robot with CAPTCHA, are getting better at sorting objects into categories, and identifying them based on sparse or incomplete data, such as when they are occluded, or in different lighting.
But many of these systems require enormous amounts of input data, which is impractical, slow to generate, and often needs to be laboriously categorized by humans. There are entirely new jobs that require people to label, categorize, and sift large bodies of data ready for supervised machine learning. This can make machine learning undemocratic. If you’re Google, you can make thousands of unwitting volunteers label your images for you with CAPTCHA. If you’re IBM, you can hire people to manually label that data. If you’re an individual or startup trying something new, however, you will struggle to access the vast troves of labeled data available to the bigger players.
This is why new systems that can potentially train themselves over time, or that allow robots to deal with situations they’ve never seen before without mountains of labeled data, are a holy grail in artificial intelligence. The work done by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is part of a new wave of “self-supervised” machine learning systems—little of the data used was labeled by humans.
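To give a flavor of how the self-supervision works: the robot generates its own pixel correspondences from 3D reconstruction (two camera views of the same scene tell it which pixels show the same physical point), and the network is trained to give matching pixels similar descriptors. A rough sketch of that kind of pixelwise contrastive loss follows; the names and margin value are illustrative, not MIT’s exact code:

```python
import torch

def pixel_contrastive_loss(desc_a, desc_b, match_a, match_b,
                           nonmatch_a, nonmatch_b, margin=0.5):
    """desc_a, desc_b: (N, D) descriptors sampled at pixels of two views
    of the same scene; match_* / nonmatch_*: index tensors of pixel pairs.
    Matching pixels are pulled together in descriptor space; non-matching
    pixels are pushed at least `margin` apart."""
    d_match = (desc_a[match_a] - desc_b[match_b]).pow(2).sum(dim=1)
    d_non = (desc_a[nonmatch_a] - desc_b[nonmatch_b]).pow(2).sum(dim=1).sqrt()
    return d_match.mean() + torch.clamp(margin - d_non, min=0).pow(2).mean()
```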
The robot first inspects the new object from multiple angles, building up a 3D picture of the object with its own coordinate system. This then allows the robotic arm to identify a particular feature on the object—such as a handle, or the tongue of a shoe—from various different angles, based on its relative distance to other grid points.
This is the real innovation: a new way of representing objects to be grasped, as mapped-out 3D objects with grid points and subsections of their own. Rather than using a computer vision algorithm to identify a door handle and then activating a door-handle-grasping subroutine, the DON system builds one of these spatial maps for every object before classifying or manipulating it, enabling it to deal with a greater range of objects than other approaches can.
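In code terms, once every pixel carries a descriptor vector, “grasp the handle” reduces to a nearest-neighbor lookup. A hedged sketch (the descriptor image and reference vector are assumed inputs, not the system’s actual API):

```python
import numpy as np

def find_feature(descriptor_image, reference_descriptor):
    """descriptor_image: (H, W, D) per-pixel descriptors for a new view;
    reference_descriptor: (D,) vector saved earlier from, say, a mug
    handle. Returns the (row, col) whose descriptor is closest."""
    dists = np.linalg.norm(descriptor_image - reference_descriptor, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)
```

Because the descriptors are trained to be consistent across viewpoints, the same reference vector locates the handle in new images of the mug, whatever its orientation.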
“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” said PhD student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow student Pete Florence, alongside MIT professor Russ Tedrake. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.”
Class-specific descriptors, applied to an object’s features, let the robot arm identify a mug, find the handle, and pick the mug up appropriately. Object-specific descriptors let it select a particular mug from a group of similar items. I’m already dreaming of a robot butler reliably picking out my favorite mug when it serves me coffee in the morning.
Google’s robot arm-y was an attempt to develop a general grasping algorithm: one that could identify, categorize, and appropriately grip as many items as possible. This requires a great deal of training time and data, which is why Google parallelized their project by having 14 robot arms feed data into a single neural network brain: even then, the algorithm may fail with highly specific tasks. Specialist grasping algorithms might require less training if they’re limited to specific objects, but then your software is useless for general tasks.
As the roboticists noted, their system, with its ability to identify parts of an object rather than just a single object, is better suited to specific tasks, such as “grasp the racquet by the handle,” than Amazon Robotics Challenge robots, which identify whole objects by segmenting an image.
This work is small-scale at present. It has been tested with a few classes of objects, including shoes, hats, and mugs. Yet the use of these dense object nets as a way for robots to represent and manipulate new objects may well be another step towards the ultimate goal of generalized automation: a robot capable of performing every task a person can. If that point is reached, the question that will remain is how to cope with being obsolete.
Image Credit: Tom Buehler/CSAIL
#432646 How Fukushima Changed Japanese Robotics ...
In March 2011, Japan was hit by a catastrophic earthquake that triggered a terrible tsunami. Thousands were killed and billions of dollars of damage was done in one of the worst disasters of modern times. For a few perilous weeks, though, the eyes of the world were focused on the Fukushima Daiichi nuclear power plant. Its safety systems were unable to cope with the tsunami damage, and there were widespread fears of another catastrophic meltdown that could spread radiation over several countries, like the Chernobyl disaster in the 1980s. A heroic effort that included dumping seawater into the reactor core prevented an even bigger catastrophe. As it is, a hundred thousand people are still evacuated from the area, and it will likely take many years and hundreds of billions of dollars before the region is safe.
Because radiation is so dangerous to humans, the natural solution to the Fukushima disaster was to send in robots to monitor levels of radiation and attempt to begin the clean-up process. The techno-optimists in Japan had discovered a challenge, deep in the heart of that reactor core, that even their optimism could not solve. The radiation fried the circuits of the robots that were sent in, even those specifically designed and built to deal with the Fukushima catastrophe. The power plant slowly became a vast robot graveyard. While some robots initially saw success in measuring radiation levels around the plant—and, recently, a robot was able to identify the melted uranium fuel at the heart of the disaster—hopes of them playing a substantial role in the clean-up are starting to diminish.
In Tokyo’s neon Shibuya district, it can sometimes seem like it’s brighter at night than it is during the daytime. In karaoke booths on the twelfth floor—because everything is on the twelfth floor—overlooking the brightly-lit streets, businessmen unwind by blasting out pop hits. It can feel like the most artificial place on Earth; your senses are dazzled by the futuristic techno-optimism. Stock footage of the area has become symbolic of futurism and modernity.
Japan has long had a reputation as a nation of futurists. We’ve already described how tech giant SoftBank, headed by visionary founder Masayoshi Son, is investing billions in a technological future, including plans for the world’s largest solar farm.
When Google sold pioneering robotics company Boston Dynamics in 2017, SoftBank added it to its portfolio, alongside the famous Nao and Pepper robots. Some may think Son is taking a gamble in pursuing a robotics project even Google couldn’t succeed in, but this is a man who lost nearly everything in the dot-com crash of 2000. The fact that even this reversal didn’t dent his optimism and faith in technology is telling. But how long can it last?
The failure of Japan’s robots to deal with the immense challenge of Fukushima has sparked something of a crisis of conscience within the industry. Disaster response is an obvious stepping-stone technology for robots. Initially, producing a humanoid robot will be very costly, and the robot will be less capable than a human; building a robot to wait tables might not be particularly economical yet. Building a robot to do jobs that are too dangerous for humans is far more viable. Yet, at Fukushima, in one of the most advanced nations in the world, many of the robots weren’t up to the task.
Nowhere was this crisis felt more keenly than at Honda; the company had developed ASIMO, which stunned the world in 2000 and continues to fascinate as an iconic humanoid robot. Despite all this technological advancement, however, Honda knew that ASIMO was still too unreliable for the real world.
It was Fukushima that triggered a sea-change in Honda’s approach to robotics. Two years after the disaster, there were rumblings that Honda was developing a disaster robot, and in October 2017, the prototype was revealed to the public for the first time. It’s not yet ready for deployment in disaster zones, however. Interestingly, the creators chose not to give it dexterous hands but instead to assume that remotely-operated tools fitted to the robot would be a better solution for the range of circumstances it might encounter.
This shift in focus for humanoid robots away from entertainment and amusement like ASIMO, and towards being practically useful, has been mirrored across the world.
In 2015, also inspired by the Fukushima disaster and the lack of disaster-ready robots, the DARPA Robotics Challenge tested humanoid robots with a range of tasks that might be needed in emergency response, such as driving cars, opening doors, and climbing stairs. The Terminator-like ATLAS robot from Boston Dynamics, alongside Korean robot HUBO, took many of the plaudits, and CHIMP also put in an impressive display by being able to right itself after falling.
Yet the DARPA Robotics Challenge showed us just how far the robots are from being as useful as we’d like, or even as we might imagine. Many robots took hours to complete the tasks, which had been highly idealized to suit them. Climbing stairs proved especially difficult. Those who watched were more likely to see a robot that had fallen over, struggling to get up, than heroic superbots striding in to save the day. The striding itself was a problem: the fastest robot, HUBO, managed by kneeling and rolling on wheels built into its knees whenever walking wasn’t strictly necessary.
Fukushima may have changed the course of Japanese robotics, but before robots really begin to enter our everyday lives, they will need to prove their worth. In the interim, aerial drones designed to examine infrastructure damage after disasters may well see earlier deployment and more success.
It’s a considerable challenge.
Building a humanoid robot is expensive; if these multi-million-dollar machines can’t help in a crisis, people may begin to question the worth of investing in them in the first place (unless your aim is just to make viral videos). This could lead to a further crisis of confidence among the Japanese, who are starting to rely on humanoid robotics as a solution to the challenge of an aging population. The Japanese government, as part of its robot strategy, has already invested $44 million in their development.
But if they continue to fail when put to the test, that will raise serious concerns. In Tokyo’s Akihabara district, you can see all kinds of flash robotic toys for sale in the neon-lit superstores, and dancing, acting robots like Robothespian can entertain crowds all over the world. But if we want these machines to be anything more than toys—partners, helpers, even saviors—more work needs to be done.
At the same time, those who participated in the DARPA Robotics Challenge in 2015 won’t be too concerned that people were underwhelmed by the performance of their disaster relief robots. Back in 2004, nearly every participant in the DARPA Grand Challenge crashed, caught fire, or failed on the starting line. To an outside observer, the whole thing would have seemed like an unmitigated disaster and a pointless investment. What was the task in 2004? Developing a self-driving car. A lot can change in a decade.
Image Credit: MARCUSZ2527 / Shutterstock.com