#433758 DeepMind’s New Research Plan to Make ...
Making sure artificial intelligence does what we want and behaves in predictable ways will be crucial as the technology becomes increasingly ubiquitous. It’s an area frequently neglected in the race to develop products, but DeepMind has now outlined its research agenda to tackle the problem.
AI safety, as the field is known, has been gaining prominence in recent years. That’s probably at least partly down to the overzealous warnings of a coming AI apocalypse from well-meaning, but underqualified pundits like Elon Musk and Stephen Hawking. But it’s also recognition of the fact that AI technology is quickly pervading all aspects of our lives, making decisions on everything from what movies we watch to whether we get a mortgage.
That’s why, back in 2016, DeepMind hired a bevy of researchers who specialize in foreseeing the unforeseen consequences of the way we build AI. And now the team has spelled out the three key domains they think require research if we’re going to build autonomous machines that do what we want.
In a new blog designed to provide updates on the team’s work, they introduce the ideas of specification, robustness, and assurance, which they say will act as the cornerstones of their future research. Specification involves making sure AI systems do what their operator intends; robustness means a system can cope with changes to its environment and attempts to throw it off course; and assurance involves our ability to understand what systems are doing and how to control them.
A classic thought experiment about how we could lose control of an AI system helps illustrate the problem of specification. Philosopher Nick Bostrom posited a hypothetical machine charged with making as many paperclips as possible. Because its creators fail to add what they might assume are obvious additional goals, like not harming people, the AI wipes out humanity so it can’t be switched off, then sets about turning all matter in the universe into paperclips.
Obviously the example is extreme, but it shows how a poorly-specified goal can lead to unexpected and disastrous outcomes. Properly codifying the desires of the designer is no easy feat, though; there is often no neat way to encompass both explicit and implicit goals in terms the machine can understand without leaving room for ambiguity, so we frequently rely on incomplete approximations.
The researchers point to a recent experiment by OpenAI in which an AI was trained to play a boat-racing game called CoastRunners. The game rewards players for hitting targets laid out along the race route. The AI worked out that it could get a higher score by repeatedly knocking over regenerating targets rather than actually completing the course. The blog post includes a link to a spreadsheet detailing scores of such examples.
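To make the gap between a written reward and the designer’s intent concrete, here is a minimal Python sketch, not OpenAI’s code or the actual CoastRunners environment, in which a proxy reward of points-per-target favors a degenerate looping policy over one that actually finishes the race. The policies and numbers are invented purely for illustration.

```python
# Toy illustration of specification gaming: the proxy reward the designer wrote
# (points for hitting targets) diverges from the intended objective (finish the
# race). This is NOT the CoastRunners environment; values are made up.

def proxy_reward(trajectory):
    # The reward actually programmed in: points per target hit.
    return 10 * trajectory["targets_hit"]

def intended_objective(trajectory):
    # What the designer really wanted: complete the course.
    return 100 if trajectory["finished"] else 0

policies = {
    # Drives the course as intended, hitting a few targets on the way.
    "finish_course": {"targets_hit": 5, "finished": True},
    # Circles a cluster of regenerating targets and never finishes.
    "loop_on_targets": {"targets_hit": 40, "finished": False},
}

best_by_proxy = max(policies, key=lambda p: proxy_reward(policies[p]))
best_by_intent = max(policies, key=lambda p: intended_objective(policies[p]))

print("Policy favored by the written reward:    ", best_by_proxy)    # loop_on_targets
print("Policy favored by the intended objective:", best_by_intent)   # finish_course
```

A reward-maximizing learner will happily converge on the first policy, which is exactly the behavior OpenAI observed.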
Another key concern for AI designers is making their creations robust to the unpredictability of the real world. Despite their superhuman abilities on certain tasks, most cutting-edge AI systems are remarkably brittle. They tend to be trained on highly-curated datasets and so can fail when faced with unfamiliar input. This can happen by accident or by design—researchers have come up with numerous ways to trick image recognition algorithms into misclassifying things, including convincing one that a 3D-printed turtle was actually a rifle.
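One widely used recipe for crafting such adversarial inputs is the fast gradient sign method, which nudges every pixel slightly in the direction that increases the classifier’s loss. The sketch below is a rough illustration of that idea, not the specific technique behind the turtle result; it assumes a pretrained torchvision classifier and an already-preprocessed image tensor.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Minimal FGSM sketch: perturb each pixel by +/- epsilon along the sign of the
# loss gradient. Downloads pretrained ResNet-18 weights on first use.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_attack(image, true_label, epsilon=0.01):
    """image: (1, 3, H, W) float tensor in [0, 1]; true_label: (1,) long tensor."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage sketch: a tiny, near-invisible perturbation can flip the predicted class.
x = torch.rand(1, 3, 224, 224)      # stand-in for a real, preprocessed photo
y = model(x).argmax(dim=1)          # treat the current prediction as the label
x_adv = fgsm_attack(x, y, epsilon=0.03)
print(model(x).argmax(dim=1).item(), "->", model(x_adv).argmax(dim=1).item())
```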
Building systems that can deal with every possible encounter may not be feasible, so a big part of making AIs more robust may be getting them to avoid risks, recover from errors, and include failsafes so that errors don’t cascade into catastrophic failure.
And finally, we need to have ways to make sure we can tell whether an AI is performing the way we expect it to. A key part of assurance is being able to effectively monitor systems and interpret what they’re doing—if we’re basing medical treatments or sentencing decisions on the output of an AI, we’d like to see the reasoning. That’s a major outstanding problem for popular deep learning approaches, which are largely indecipherable black boxes.
The other half of assurance is the ability to intervene if a machine isn’t behaving the way we’d like. But designing a reliable off switch is tough, because most learning systems have a strong incentive to prevent anyone from interfering with their goals.
The authors don’t pretend to have all the answers, but they hope the framework they’ve come up with can help guide others working on AI safety. While it may be some time before AI is truly in a position to do us harm, hopefully early efforts like these will mean it’s built on a solid foundation that ensures it is aligned with our goals.
Image Credit: cono0430 / Shutterstock.com
#433634 This Robotic Skin Makes Inanimate ...
In Goethe’s poem “The Sorcerer’s Apprentice,” made world-famous by its adaptation in Disney’s Fantasia, a lazy apprentice, left to fetch water, uses magic to bewitch a broom into performing his chores for him. Now, new research from Yale has opened up the possibility of being able to animate—and automate—household objects by fitting them with a robotic skin.
Yale’s Soft Robotics lab, the Faboratory, is led by Professor Rebecca Kramer-Bottiglio, and has long investigated the possibilities associated with new kinds of manufacturing. While the typical image of a robot is hard, cold steel and rigid movements, soft robotics aims to create something more flexible and versatile. After all, the human body is made up of soft, flexible surfaces, and the world is designed for us. Soft, deformable robots could change shape to adapt to different tasks.
When designing a robot, key components are the robot’s sensors, which allow it to perceive its environment, and its actuators, the electrical or pneumatic motors that allow the robot to move and interact with its environment.
Consider your hand, which has temperature and pressure sensors, but also muscles as actuators. The omni-skins, as the Science Robotics paper dubs them, combine sensors and actuators, embedding them in an elastic sheet. The robotic skins are moved by pneumatic actuators or shape-memory alloys that spring back into a preset shape. If the sheet is then wrapped around a soft, deformable object, driving the skin with the actuators can make the object crawl along a surface.
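As a rough illustration of how sequenced actuation could produce motion, the sketch below cycles pressure through a chain of skin segments in a simple wave. The PneumaticSegment class, its pressure values, and the gait timing are invented for illustration; this is not the Faboratory’s control code.

```python
import time

# Hypothetical sketch: inflate pneumatic segments of a robotic skin one after
# another so the wrapped soft body bends in a traveling wave and inches forward.
class PneumaticSegment:
    def __init__(self, name):
        self.name = name
        self.pressure = 0.0  # toy units (kPa)

    def set_pressure(self, kpa):
        self.pressure = kpa
        print(f"{self.name}: {kpa:5.1f} kPa")

def crawl(segments, cycles=3, high=40.0, low=0.0, dwell=0.5):
    """Actuate segments in sequence: a simple peristaltic gait."""
    for _ in range(cycles):
        for seg in segments:
            seg.set_pressure(high)   # bend this part of the skin
            time.sleep(dwell)
            seg.set_pressure(low)    # relax before the wave moves on

skin = [PneumaticSegment(f"segment_{i}") for i in range(3)]
crawl(skin)
```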
The key to the design here is flexibility: rather than adding chips, sensors, and motors into every household object to turn them into individual automatons, the same skin can be used for many purposes. “We can take the skins and wrap them around one object to perform a task—locomotion, for example—and then take them off and put them on a different object to perform a different task, such as grasping and moving an object,” said Kramer-Bottiglio. “We can then take those same skins off that object and put them on a shirt to make an active wearable device.”
The task, then, is to dream up applications for the omni-skins. Initially, you might imagine commanding a stuffed toy to fetch the remote control for you, or animating a sponge to wipe down kitchen surfaces—but this is just the beginning. The scientists attached the skins to a soft tube and camera, creating a worm-like robot that could compress itself and crawl into small spaces for rescue missions. The same skins could then be worn by a person to sense their posture. One could easily imagine this being adapted into a soft exoskeleton for medical or industrial purposes: for example, helping with rehabilitation after an accident or injury.
The initial motivation for creating the robots came from an environment where space and weight are at a premium and humans are forced to improvise with whatever’s at hand: outer space. Kramer-Bottiglio originally began the work after NASA put out a call for soft robotics systems for use by astronauts. Instead of wasting valuable rocket payload by sending up a heavy metal droid like Atlas to fetch items or perform repairs, soft robotic skins with modular sensors could be adapted to a range of different uses on the spot.
By reassembling components in the soft robotic skin, a crumpled ball of paper could provide the chassis for a robot that performs repairs on the spaceship, or explores the lunar surface. The dynamic compression provided by the robotic skin could be used for g-suits to protect astronauts when they rapidly accelerate or decelerate.
“One of the main things I considered was the importance of multi-functionality, especially for deep space exploration where the environment is unpredictable. The question is: How do you prepare for the unknown unknowns? … Given the design-on-the-fly nature of this approach, it’s unlikely that a robot created using robotic skins will perform any one task optimally,” Kramer-Bottiglio said. “However, the goal is not optimization, but rather diversity of applications.”
There are still problems to resolve. Many of the videos of the skins show that they still rely on an external power supply. Creating new, smaller batteries that can power wearable devices has been a focus of cutting-edge materials science research for some time. Much of the lab’s expertise is in creating flexible, stretchable electronics that can be deformed by the actuators without breaking the circuitry. In the future, the team hopes to streamline the production process; if the components could be 3D printed, the skins could be created as needed.
In addition, robotic hardware that’s capable of performing an impressive range of precise motions is quite an advanced technology. The software to control those robots, and enable them to perform a variety of tasks, is quite another challenge. With soft robots, it can become even more complex to design that control software, because the body itself can change shape and deform as the robot moves. The same set of programmed motions, then, can produce different results depending on the environment.
“Let’s say I have a soft robot with four legs that crawls along the ground, and I make it walk up a hard slope,” Dr. David Howard, who works on robotics at CSIRO in Australia, explained to ABC.
“If I make that slope out of gravel and I give it the same control commands, the actual body is going to deform in a different way, and I’m not necessarily going to know what that is.”
Despite these and other challenges, research like that at the Faboratory still hopes to redefine how we think of robots and robotics. Instead of a robot that imitates a human and manipulates objects, the objects themselves will become programmable matter, capable of moving autonomously and carrying out a range of tasks. Futurists speculate about a world where most objects are automated to some degree and can assemble and repair themselves, or are even built entirely of tiny robots.
The tale of the Sorcerer’s Apprentice was first written in 1797, at the dawn of the industrial revolution, over a century before the word “robot” was even coined. Yet more and more roboticists aim to prove Arthur C. Clarke’s maxim: any sufficiently advanced technology is indistinguishable from magic.
Image Credit: Joran Booth, The Faboratory
#433506 MIT’s New Robot Taught Itself to Pick ...
Back in 2016, somewhere in a Google-owned warehouse, more than a dozen robotic arms sat quietly grasping objects of various shapes and sizes. For hours on end, they taught themselves how to pick up and hold the items appropriately—mimicking the way a baby gradually learns to use its hands.
Now, scientists from MIT have made a new breakthrough in machine learning: their new system can not only teach itself to see and identify objects, but also understand how best to manipulate them.
This means that, armed with the new machine learning routine referred to as “dense object nets (DON),” the robot would be capable of picking up an object that it’s never seen before, or in an unfamiliar orientation, without resorting to trial and error—exactly as a human would.
The deceptively simple ability to dexterously manipulate objects with our hands is a huge part of why humans are the dominant species on the planet. We take it for granted. Hardware innovations like the Shadow Dexterous Hand have enabled robots to softly grip and manipulate delicate objects for many years, but the software required to control these precision-engineered machines in a range of circumstances has proved harder to develop.
This was not for want of trying. The Amazon Robotics Challenge offers millions of dollars in prizes (and potentially far more in contracts, as Amazon’s $775 million acquisition of Kiva Systems shows) for the best dexterous robot able to pick and package items in its warehouses. The lucrative dream of a fully-automated delivery system is missing this crucial ability.
Meanwhile, the RoboCup@Home challenge—an offshoot of the popular RoboCup tournament for soccer-playing robots—aims to make everyone’s dream of having a robot butler a reality. The competition involves teams drilling their robots through simple household tasks that require social interaction or object manipulation, like helping to carry the shopping, sorting items onto a shelf, or guiding tourists around a museum.
Yet all of these endeavors have proved difficult; the tasks often have to be simplified to enable the robot to complete them at all. New or unexpected elements, such as those encountered in real life, more often than not throw the system entirely. Programming the robot’s every move in explicit detail is not a scalable solution: this can work in the highly-controlled world of the assembly line, but not in everyday life.
Computer vision is improving all the time. Neural networks, including those you train every time you prove that you’re not a robot with CAPTCHA, are getting better at sorting objects into categories, and identifying them based on sparse or incomplete data, such as when they are occluded, or in different lighting.
But many of these systems require enormous amounts of input data, which is impractical, slow to generate, and often needs to be laboriously categorized by humans. There are entirely new jobs that require people to label, categorize, and sift large bodies of data ready for supervised machine learning. This can make machine learning undemocratic. If you’re Google, you can make thousands of unwitting volunteers label your images for you with CAPTCHA. If you’re IBM, you can hire people to manually label that data. If you’re an individual or startup trying something new, however, you will struggle to access the vast troves of labeled data available to the bigger players.
This is why new systems that can potentially train themselves over time, or that allow robots to deal with situations they’ve never seen before without mountains of labeled data, are a holy grail in artificial intelligence. The work done by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is part of a new wave of “self-supervised” machine learning systems—little of the data used was labeled by humans.
The robot first inspects a new object from multiple angles, building up a 3D picture of it with its own coordinate system. This then allows the robotic arm to identify a particular feature on the object—such as a handle, or the tongue of a shoe—from different angles, based on its position relative to other grid points.
This is the real innovation: a new way of representing objects to grasp as mapped-out 3D surfaces, with grid points and subsections of their own. Rather than using a computer vision algorithm to identify a door handle and then activating a door-handle-grasping subroutine, the DON system builds these spatial maps for every object before classifying or manipulating it, enabling it to handle a greater range of objects than other approaches.
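A minimal sketch of the matching step might look like the following. The descriptor maps here are random stand-ins, and the array shapes and function names are assumptions rather than CSAIL’s actual code, but the core idea is a nearest-neighbor lookup in descriptor space: find the pixel in a new view whose descriptor is closest to that of a point (say, the mug handle) picked out once in a reference view.

```python
import numpy as np

# Sketch of dense-descriptor correspondence: every pixel has a descriptor
# vector, so a point chosen in a reference image can be relocated in a new
# image by finding the pixel with the most similar descriptor. Real descriptor
# maps would come from the trained network; these are random placeholders.
H, W, D = 60, 80, 16                       # image height, width, descriptor size
ref_descriptors = np.random.rand(H, W, D)  # descriptor map for the reference view
new_descriptors = np.random.rand(H, W, D)  # descriptor map for a new view/pose

def find_correspondence(ref_map, new_map, ref_pixel):
    """Return the pixel in the new view whose descriptor best matches ref_pixel."""
    target = ref_map[ref_pixel]                        # (D,) descriptor of the chosen point
    dists = np.linalg.norm(new_map - target, axis=-1)  # (H, W) distances in descriptor space
    return np.unravel_index(np.argmin(dists), dists.shape)

handle_pixel_in_ref = (25, 40)             # e.g. the handle, picked once by hand
match = find_correspondence(ref_descriptors, new_descriptors, handle_pixel_in_ref)
print("Best match for the handle in the new view:", match)
```

Because the match is made per feature rather than per object category, the same lookup works even when the object is in a new orientation.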
“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” said PhD student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow student Pete Florence, alongside MIT professor Russ Tedrake. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.”
Class-specific descriptors, which can be applied to object features, allow the robot arm to identify a mug, find the handle, and pick the mug up appropriately. Object-specific descriptors allow it to select a particular mug from a group of similar items. I’m already dreaming of a robot butler reliably picking my favorite mug when it serves me coffee in the morning.
Google’s robot arm-y was an attempt to develop a general grasping algorithm: one that could identify, categorize, and appropriately grip as many items as possible. This requires a great deal of training time and data, which is why Google parallelized their project by having 14 robot arms feed data into a single neural network brain: even then, the algorithm may fail with highly specific tasks. Specialist grasping algorithms might require less training if they’re limited to specific objects, but then your software is useless for general tasks.
As the roboticists noted, their system, with its ability to identify parts of an object rather than just a single object, is better suited to specific tasks, such as “grasp the racquet by the handle,” than Amazon Robotics Challenge robots, which identify whole objects by segmenting an image.
This work is small-scale at present. It has been tested with a few classes of objects, including shoes, hats, and mugs. Yet the use of these dense object nets as a way for robots to represent and manipulate new objects may well be another step towards the ultimate goal of generalized automation: a robot capable of performing every task a person can. If that point is reached, the question that will remain is how to cope with being obsolete.
Image Credit: Tom Buehler/CSAIL