Tag Archives: robotics
#431413 ‘Bots battle for the ball, and the ...
The World Robotics Olympiad, being held in Costa Rica this weekend, shows human athletes still have little to worry about: sweat and glory do not compute well when relegated to faceless automatons.
#431403 Wizards of ROS: Willow Garage and the ...
How a small band of Silicon Valley engineers started a global robotics revolution
#431392 What AI Can Now Do Is Remarkable—But ...
Major websites all over the world use a system called CAPTCHA to verify that someone is indeed a human and not a bot when entering data or signing into an account. CAPTCHA stands for the “Completely Automated Public Turing test to tell Computers and Humans Apart.” The squiggly letters and numbers, often posted against photographs or textured backgrounds, have been a good way to foil hackers. They are annoying but effective.
The days of CAPTCHA as a viable line of defense may, however, be numbered.
Researchers at Vicarious, a Californian artificial intelligence firm funded by Amazon founder Jeff Bezos and Facebook's Mark Zuckerberg, have just published a paper documenting how they were able to defeat CAPTCHA using new artificial intelligence techniques. Whereas today's most advanced artificial intelligence (AI) technologies use neural networks that require massive amounts of data to learn from, sometimes millions of examples, the researchers said their system needed just five training steps to crack Google's reCAPTCHA technology. With this, they achieved a 67 percent success rate per character, reasonably close to the human accuracy rate of 87 percent. On PayPal and Yahoo CAPTCHAs, the system achieved accuracy rates above 50 percent.
The CAPTCHA breakthrough came hard on the heels of another major milestone from Google’s DeepMind team, the people who built the world’s best Go-playing system. DeepMind built a new artificial-intelligence system called AlphaGo Zero that taught itself to play the game at a world-beating level with minimal training data, mainly using trial and error—in a fashion similar to how humans learn.
Both playing Go and deciphering CAPTCHAs are clear examples of what we call narrow AI, which is different from artificial general intelligence (AGI)—the stuff of science fiction. Remember R2-D2 of Star Wars, Ava from Ex Machina, and Samantha from Her? They could do many things and learned everything they needed on their own.
Narrow AI technologies are systems that can only perform one specific type of task. For example, if you asked AlphaGo Zero to learn to play Monopoly, it could not, even though that is a far less sophisticated game than Go. If you asked the CAPTCHA cracker to learn to understand a spoken phrase, it would not even know where to start.
To date, though, even narrow AI has been difficult to build and perfect. Even an elementary task such as determining whether an image shows a cat or a dog requires developing a model of exactly what is being analyzed, along with massive amounts of data containing labeled examples of both. The examples are used to train AI systems that are loosely modeled on the neural networks of the brain, in which the connections between layers of neurons are adjusted based on what is observed. To put it simply, you tell the AI system exactly what to learn, and the more data you give it, the more accurate it becomes.
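The adjust-on-error idea behind that training loop can be seen in miniature. The sketch below is not the Vicarious or DeepMind system; it is a toy perceptron trained on hypothetical, hand-made "cat vs. dog" feature vectors (the feature names and thresholds are invented for illustration), showing how labeled examples nudge connection weights until predictions come out right.

```python
import random

random.seed(0)

def make_example():
    """One labeled example: (features, label).

    Features are two hypothetical measurements (ear pointiness, snout
    length); label 1 means "cat", 0 means "dog". The ranges are invented
    so the two classes are cleanly separable.
    """
    if random.random() < 0.5:
        return ([random.uniform(0.6, 1.0), random.uniform(0.0, 0.4)], 1)  # cat
    return ([random.uniform(0.0, 0.4), random.uniform(0.6, 1.0)], 0)      # dog

def predict(w, b, features):
    """Fire (output 1) if the weighted sum of inputs exceeds the threshold."""
    return 1 if sum(wi * xi for wi, xi in zip(w, features)) + b > 0 else 0

def train(examples, epochs=10, lr=0.1):
    """Adjust connection weights whenever a prediction is wrong --
    the same learn-from-labeled-data loop described above, in miniature."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, label in examples:
            error = label - predict(w, b, features)  # 0 when correct
            w = [wi + lr * error * xi for wi, xi in zip(w, features)]
            b += lr * error
    return w, b

def accuracy(w, b, examples):
    correct = sum(predict(w, b, f) == label for f, label in examples)
    return correct / len(examples)

train_set = [make_example() for _ in range(200)]
test_set = [make_example() for _ in range(100)]
w, b = train(train_set)
print(f"test accuracy: {accuracy(w, b, test_set):.2f}")
```

More training examples and more passes over them generally improve accuracy, which is exactly why the data-hungry approach described above dominated until recently.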
The methods that Vicarious and Google used differed from this data-hungry approach: they allowed the systems to learn largely on their own, albeit within a narrow field. The systems made their own assumptions about what the training model should be and tried different permutations until they got the right results, teaching themselves how to read the letters in a CAPTCHA or how to play a game.
This blurs the line between narrow AI and AGI and has broader implications in robotics and virtually any other field in which machine learning in complex environments may be relevant.
Beyond visual recognition, the Vicarious breakthrough and AlphaGo Zero success are encouraging scientists to think about how AIs can learn to do things from scratch. And this brings us one step closer to coexisting with classes of AIs and robots that can learn to perform new tasks that are slight variants on their previous tasks—and ultimately the AGI of science fiction.
So R2-D2 may be here sooner than we expected.
This article was originally published by The Washington Post. Read the original article here.
Image Credit: Zapp2Photo / Shutterstock.com