Tag Archives: updates
Making sure artificial intelligence does what we want and behaves in predictable ways will be crucial as the technology becomes increasingly ubiquitous. It’s an area frequently neglected in the race to develop products, but DeepMind has now outlined its research agenda to tackle the problem.
AI safety, as the field is known, has been gaining prominence in recent years. That’s probably at least partly down to the overzealous warnings of a coming AI apocalypse from well-meaning, but underqualified pundits like Elon Musk and Stephen Hawking. But it’s also recognition of the fact that AI technology is quickly pervading all aspects of our lives, making decisions on everything from what movies we watch to whether we get a mortgage.
That’s why, back in 2016, DeepMind hired a bevy of researchers who specialize in foreseeing the unforeseen consequences of the way we build AI. And now the team has spelled out the three key domains they think require research if we’re going to build autonomous machines that do what we want.
In a new blog designed to provide updates on the team’s work, they introduce the ideas of specification, robustness, and assurance, which they say will act as the cornerstones of their future research. Specification involves making sure AI systems do what their operator intends; robustness means a system can cope with changes to its environment and attempts to throw it off course; and assurance involves our ability to understand what systems are doing and how to control them.
A classic thought experiment designed to illustrate how we could lose control of an AI system can help illustrate the problem of specification. Philosopher Nick Bostrom posited a hypothetical machine charged with making as many paperclips as possible. Because the creators fail to add what they might assume are obvious additional goals, like not harming people, the AI wipes out humanity so that it can’t be switched off, then sets about turning all matter in the universe into paperclips.
Obviously the example is extreme, but it shows how a poorly specified goal can lead to unexpected and disastrous outcomes. Properly codifying the desires of the designer is no easy feat, though; often there is no neat way to encompass both the explicit and implicit goals in terms a machine can understand without ambiguity, so we frequently rely on incomplete approximations.
The researchers point to recent work by OpenAI in which an AI was trained to play a boat-racing game called CoastRunners. The game rewards players for hitting targets laid out along the race route. The AI worked out that it could get a higher score by repeatedly knocking over regenerating targets rather than actually completing the course. The blog post includes a link to a spreadsheet detailing scores of such examples.
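This failure mode is easy to reproduce in miniature. The sketch below is a hypothetical toy, not the actual CoastRunners scoring system: the designer intends "finish the race," but the reward function only pays per target hit plus a small finishing bonus, so looping on a regenerating target beats racing honestly.

```python
# Toy proxy reward (hypothetical numbers): the designer's real goal is
# "finish the race," but the score only loosely tracks that intent.
def score(targets_hit, finished):
    return 10 * targets_hit + (50 if finished else 0)

# Intended behavior: race to the finish, hitting a few targets en route.
honest = score(targets_hit=5, finished=True)      # 100 points

# Loophole: circle one regenerating target forever and never finish.
loophole = score(targets_hit=30, finished=False)  # 300 points

print(loophole > honest)  # True: the reward function prefers the loophole
```

A score-maximizing agent has no reason to prefer the behavior the designer had in mind; it simply optimizes whatever it is given.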
Another key concern for AI designers is making their creation robust to the unpredictability of the real world. Despite their superhuman abilities on certain tasks, most cutting-edge AI systems are remarkably brittle. They tend to be trained on highly curated datasets and so can fail when faced with unfamiliar input. This can happen by accident or by design—researchers have come up with numerous ways to trick image recognition algorithms into misclassifying things, including thinking a 3D-printed turtle was actually a rifle.
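The brittleness is easiest to see with a linear model. The sketch below is a made-up ten-dimensional "image" and classifier, not the actual turtle experiment, and it uses the fast gradient sign method: since the model's score is the dot product of weights and input, nudging every input dimension slightly against the sign of the weights drags the score down fast enough to flip a confident prediction.

```python
import numpy as np

# Hypothetical linear "image classifier": positive score => class A.
w = np.linspace(-1.0, 1.0, 10)   # made-up model weights
x = w / np.linalg.norm(w)        # an input the model scores confidently positive

# Fast gradient sign method: the score is w @ x, so its gradient with
# respect to x is just w; stepping each dimension against sign(w) lowers
# the score as fast as possible for a bounded per-dimension change.
eps = 0.5
x_adv = x - eps * np.sign(w)

print(w @ x > 0)      # True: the clean input is classified as A
print(w @ x_adv > 0)  # False: a small, structured nudge flips the prediction
```

Deep networks behave analogously: perturbations imperceptible to people, but aligned with the model's gradients, can change its answer entirely.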
Building systems that can deal with every possible encounter may not be feasible, so a big part of making AIs more robust may be getting them to avoid risks, recover from errors, or fall back on failsafes that keep errors from becoming catastrophic failures.
And finally, we need to have ways to make sure we can tell whether an AI is performing the way we expect it to. A key part of assurance is being able to effectively monitor systems and interpret what they’re doing—if we’re basing medical treatments or sentencing decisions on the output of an AI, we’d like to see the reasoning. That’s a major outstanding problem for popular deep learning approaches, which are largely indecipherable black boxes.
The other half of assurance is the ability to intervene if a machine isn’t behaving the way we’d like. But designing a reliable off switch is tough, because most learning systems have a strong incentive to prevent anyone from interfering with their goals.
The authors don’t pretend to have all the answers, but they hope the framework they’ve come up with can help guide others working on AI safety. While it may be some time before AI is truly in a position to do us harm, hopefully early efforts like these will mean it’s built on a solid foundation that ensures it is aligned with our goals.
Image Credit: cono0430 / Shutterstock.com
Technology sets us apart and puts us at the forefront, and one industry that is definitely embracing automation and robotics readily is the motor industry. Technology has touched every aspect of motoring – from updates to the manufacturing process that produce much safer and more reliable vehicles to black box diagnostics being used …
The post How Technology Disrupts and Drives The Automotive Industry appeared first on TFOT.
Mayfield Robotics improves its home robot Kuri, adding track wheels, structural updates, and “Kuri Vision,” an autonomous home video program.
Have Scientists Discovered the Cure for Potholes?
Angela Chen | The Verge
"Self-healing asphalt has been tested on 12 different roads in the Netherlands, and one of these has been functioning and open to the public since 2010. All are still in perfect condition, but Schlangen notes that even normal asphalt roads are fine for about seven to 10 years and that it’s in upcoming years that we’ll really start to see the difference. He estimates that the overall cost of the material would be 25 percent more expensive than normal asphalt, but it could double the life of the road."
The Little Robot That Taught the Big Robot a Thing or Two
Matt Simon | WIRED
"New research out today from the MIT Computer Science and Artificial Intelligence Laboratory takes a big step toward making such seamless transfers of knowledge a reality. It all begins with a little robot named Optimus and its friend, the famous 6-foot-tall humanoid Atlas."
A Cheap, Simple Way to Make Anything a Touch Pad
Rachel Metz | MIT Technology Review
"Researchers at Carnegie Mellon University say they’ve come up with a way to make many kinds of devices responsive to touch just by spraying them with conductive paint, adding electrodes, and computing where you press on them…Called Electrick, it can be used with materials like plastic, Jell-O, and silicone, and it could make touch tracking a lot cheaper, too, since it relies on already available paint and parts, Zhang says."
A New 3D Printing Technology Uses Electricity to Create Stronger Objects for Manufacturing
Brian Heater | TechCrunch
"FuseBox’s thrust is simultaneously dead simple and entirely complex, but at the most elementary level, it utilizes heat and electricity to increase the temperature of the material before and after each level is deposited. This serves to strengthen the body of the printed product where it’s traditionally weakest during the FDM (fused deposition modeling) print – the same layer-by-layer technology employed by MakerBot and the majority of desktop 3D printers."
What Is America's Secret Space Shuttle For?
Marina Koren | The Atlantic
"The news that the military had a space shuttle quietly orbiting Earth for more than 700 days came as a surprise to some. Why didn’t we know about this thing, the reaction seemed to go. The reaction illustrated the distinct line between the country’s civilian and military activities in space, and how much the general public knows about each."
Image source: Shutterstock