Tag Archives: deep
#437386 Scary A.I. more intelligent than you
GPT-3 (Generative Pre-trained Transformer 3) is an artificial intelligence language generator that uses deep learning to produce human-like output. Its text is of such high quality that it is very difficult to distinguish from a human’s. Many scientists, researchers and engineers (including Stephen … Continue reading
#439110 Robotic Exoskeletons Could One Day Walk ...
Engineers, using artificial intelligence and wearable cameras, now aim to help robotic exoskeletons walk by themselves.
Increasingly, researchers around the world are developing lower-body exoskeletons to help people walk. These are essentially walking robots users can strap to their legs to help them move.
One problem with such exoskeletons: They often depend on manual controls to switch from one mode of locomotion to another, such as from sitting to standing, or standing to walking, or walking on the ground to walking up or down stairs. Relying on joysticks or smartphone apps every time you want to switch the way you move can prove awkward and mentally taxing, says Brokoslaw Laschowski, a robotics researcher at the University of Waterloo in Canada.
Scientists are working on automated ways to help exoskeletons recognize when to switch locomotion modes — for instance, using sensors attached to legs that can detect bioelectric signals sent from your brain to your muscles telling them to move. However, this approach comes with a number of challenges, such as how skin conductivity can change as a person’s skin gets sweatier or dries off.
Now several research groups are experimenting with a new approach: fitting exoskeleton users with wearable cameras to provide the machines with vision data that will let them operate autonomously. Artificial intelligence (AI) software can analyze this data to recognize stairs, doors, and other features of the surrounding environment and calculate how best to respond.
Laschowski leads the ExoNet project, the first open-source database of high-resolution wearable camera images of human locomotion scenarios. It holds more than 5.6 million images of indoor and outdoor real-world walking environments. The team used this data to train deep-learning algorithms; their convolutional neural networks can already automatically recognize different walking environments with 73 percent accuracy “despite the large variance in different surfaces and objects sensed by the wearable camera,” Laschowski notes.
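To make the idea concrete, here is a minimal sketch of the kind of image classifier such a system might use, built with TensorFlow via transfer learning. The class labels and folder layout are illustrative assumptions, not the ExoNet team’s actual code or label set.

```python
# Minimal sketch of a walking-environment classifier (illustrative only; not the
# ExoNet project's actual training code). Class labels and folder layout are assumed.
import tensorflow as tf

NUM_CLASSES = 3  # e.g., level ground, stairs up, stairs down (assumed labels)

# Frames are assumed to be sorted into one subdirectory per class:
# exonet_frames/level_ground/*.jpg, exonet_frames/stairs_up/*.jpg, ...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "exonet_frames", image_size=(224, 224), batch_size=32)

# Reuse an ImageNet-pretrained backbone; only the small classification head is trained.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
backbone.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # one probability per environment
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

In a real exoskeleton controller, the predicted environment class would then feed into the logic that decides when to switch locomotion modes.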
According to Laschowski, a potential limitation of their work is its reliance on conventional 2-D images, whereas depth cameras could also capture potentially useful distance data. He and his collaborators ultimately chose not to rely on depth cameras for a number of reasons, including the fact that the accuracy of depth measurements typically degrades in outdoor lighting and with increasing distance, he says.
In similar work, researchers in North Carolina had volunteers with cameras either mounted on their eyeglasses or strapped onto their knees walk through a variety of indoor and outdoor settings to capture the kind of image data exoskeletons might use to see the world around them. The aim? “To automate motion,” says Edgar Lobaton, an electrical engineering researcher at North Carolina State University. He says they are focusing on how AI software might reduce uncertainty due to factors such as motion blur or overexposed images “to ensure safe operation. We want to ensure that we can really rely on the vision and AI portion before integrating it into the hardware.”
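As a rough illustration of how software might flag unreliable frames before they ever reach a classifier, here is a simple sketch using OpenCV. The metrics and thresholds are assumptions for illustration, not the NC State team’s actual method.

```python
# Simple sketch of flagging unreliable frames before they reach a classifier.
# The thresholds are illustrative assumptions, not values from the NC State work.
import cv2
import numpy as np

def frame_is_reliable(frame_bgr: np.ndarray,
                      blur_threshold: float = 100.0,
                      overexposed_fraction: float = 0.25) -> bool:
    """Return False if the frame looks motion-blurred or heavily overexposed."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Variance of the Laplacian is a common sharpness proxy: low variance = blurry.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

    # Fraction of near-saturated pixels is a crude overexposure check.
    saturated = float(np.mean(gray > 240))

    return sharpness >= blur_threshold and saturated <= overexposed_fraction
```

Frames that fail such a check could be skipped or down-weighted so the exoskeleton never acts on an image it cannot trust.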
In the future, Laschowski and his colleagues will focus on improving the accuracy of their environmental analysis software with low computational and memory storage requirements, which are important for onboard, real-time operations on robotic exoskeletons. Lobaton and his team also seek to account for uncertainty introduced into their visual systems by movements.
Ultimately, the ExoNet researchers want to explore how AI software can transmit commands to exoskeletons so they can perform tasks such as climbing stairs or avoiding obstacles based on a system’s analysis of a user's current movements and the upcoming terrain. With autonomous cars as inspiration, they are seeking to develop autonomous exoskeletons that can handle the walking task without human input, Laschowski says.
However, Laschowski adds, “User safety is of the utmost importance, especially considering that we're working with individuals with mobility impairments,” resulting perhaps from advanced age or physical disabilities.
“The exoskeleton user will always have the ability to override the system should the classification algorithm or controller make a wrong decision.” Continue reading
#439073 There’s a ‘New’ Nirvana Song Out, ...
One of the primary capabilities separating human intelligence from artificial intelligence is our ability to be creative—to use nothing but the world around us, our experiences, and our brains to create art. At present, AI needs to be extensively trained on human-made works of art in order to produce new work, so we’ve still got a leg up. That said, neural networks like OpenAI’s GPT-3 and Russian designer Nikolay Ironov have been able to create content indistinguishable from human-made work.
Now there’s another example of AI artistry that’s hard to tell apart from the real thing, and it’s sure to excite 90s alternative rock fans the world over: a brand-new, never-heard-before Nirvana song. Or, more accurately, a song written by a neural network that was trained on Nirvana’s music.
The song is called “Drowned in the Sun,” and it does have a pretty Nirvana-esque ring to it. The neural network that wrote it is Magenta, which was launched by Google in 2016 with the goal of training machines to create art—or as the tool’s website puts it, exploring the role of machine learning as a tool in the creative process. Magenta was built using TensorFlow, Google’s massive open-source software library focused on deep learning applications.
The song was written as part of an album called Lost Tapes of the 27 Club, a project carried out by a Toronto-based organization called Over the Bridge focused on mental health in the music industry.
Here’s how a computer was able to write a song in the unique style of a deceased musician. Twenty to 30 Nirvana tracks were fed into Magenta’s neural network in the form of MIDI files. MIDI stands for Musical Instrument Digital Interface, a format that encodes a song as data representing musical parameters like pitch and tempo. Components of each song, like the vocal melody or rhythm guitar, were fed in one at a time.
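For the curious, here is a small sketch of what pulling one such component out of a MIDI file can look like, using the pretty_midi library. The file name is a placeholder, and this is not the project’s actual preprocessing code.

```python
# Sketch of extracting one component (e.g., a melody track) from a MIDI file
# as a plain pitch sequence. "song.mid" is a placeholder file name.
import pretty_midi

pm = pretty_midi.PrettyMIDI("song.mid")

for instrument in pm.instruments:
    if instrument.is_drum:
        continue
    # Each note carries a pitch (MIDI number 0-127), start/end time, and velocity.
    pitches = [note.pitch for note in sorted(instrument.notes, key=lambda n: n.start)]
    print(instrument.name or f"program {instrument.program}", pitches[:16])

# Tempo is stored separately as tempo-change events.
times, tempi = pm.get_tempo_changes()
print("tempo (BPM):", tempi)
```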
The neural network found patterns in these different components, and got enough of a handle on them that when given a few notes to start from, it could use those patterns to predict what would come next; in this case, chords and melodies that sound like they could’ve been written by Kurt Cobain.
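Here is a toy version of that next-note idea in TensorFlow: a small network that learns to predict the next pitch from the previous few, and can then extend a short seed melody. Magenta’s actual models are far more sophisticated, so treat this purely as an illustration.

```python
# Toy next-note predictor in the spirit described above; Magenta's real models
# are more sophisticated, so this is an illustration only.
import numpy as np
import tensorflow as tf

SEQ_LEN = 16   # notes of context fed to the model
VOCAB = 128    # MIDI pitch range 0-127

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=VOCAB, output_dim=64),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(VOCAB, activation="softmax"),  # probability of each next pitch
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

def make_windows(pitches):
    """Slice a pitch sequence into (context, next-note) training pairs."""
    x = [pitches[i:i + SEQ_LEN] for i in range(len(pitches) - SEQ_LEN)]
    y = [pitches[i + SEQ_LEN] for i in range(len(pitches) - SEQ_LEN)]
    return np.array(x), np.array(y)

# Example training call (pitches would come from the MIDI extraction step above):
# x, y = make_windows(pitches)
# model.fit(x, y, epochs=20)

def continue_melody(seed, n_new_notes=32):
    """Given a few seed notes, repeatedly sample the model's predicted next pitch."""
    melody = list(seed)
    for _ in range(n_new_notes):
        context = np.array([melody[-SEQ_LEN:]])
        probs = model.predict(context, verbose=0)[0].astype("float64")
        probs /= probs.sum()  # guard against float32 rounding before sampling
        melody.append(int(np.random.choice(VOCAB, p=probs)))
    return melody
```

Sampling from the predicted distribution, rather than always taking the single most likely note, is what keeps generated melodies from sounding repetitive.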
To be clear, Magenta didn’t spit out a ready-to-go song complete with lyrics. The AI wrote the music, but a different neural network wrote the lyrics (using essentially the same process as Magenta), and the team then sifted through “pages and pages” of output to find lyrics that fit the melodies Magenta created.
Eric Hogan, a singer for a Nirvana tribute band who the Over the Bridge team hired to sing “Drowned in the Sun,” felt that the lyrics were spot-on. “The song is saying, ‘I’m a weirdo, but I like it,’” he said. “That is total Kurt Cobain right there. The sentiment is exactly what he would have said.”
Cobain isn’t the only musician the Lost Tapes project tried to emulate; songs in the styles of Jimi Hendrix, Jim Morrison, and Amy Winehouse were also included. What all these artists have in common is that they died at the age of 27.
The project is meant to raise awareness around mental health, particularly among music industry professionals. It’s not hard to think of great artists of all persuasions—musicians, painters, writers, actors—whose lives are cut short due to severe depression and other mental health issues for which it can be hard to get help. These issues are sometimes romanticized, as suffering does tend to create art that’s meaningful, relatable, and timeless. But according to the Lost Tapes website, suicide attempts among music industry workers are more than double that of the general population.
How many more hit songs would these artists have written if they were still alive? We’ll never know, but hopefully Lost Tapes of the 27 Club and projects like it will raise awareness of mental health issues, both in the music industry and in general, and help people in need find the right resources. Because no matter how good computers eventually get at creating music, writing, or other art, as Lost Tapes’ website pointedly says, “Even AI will never replace the real thing.”
Image Credit: Edward Xu on Unsplash Continue reading
#439032 To Learn To Deal With Uncertainty, This ...
AI is endowing robots, autonomous vehicles, and countless other forms of tech with new abilities and levels of self-sufficiency. Yet these models faithfully “make decisions” based on whatever data is fed into them, which could have dangerous consequences. For instance, if an autonomous car is driving down a highway and its sensors pick up a confusing signal (e.g., a paint smudge that is incorrectly interpreted as a lane marking), this could cause the car to swerve into another lane unnecessarily.
But in the ever-evolving world of AI, researchers are developing new ways to address challenges like this. One group of researchers has devised a new algorithm that allows the AI model to account for uncertain data, which they describe in a study published February 15 in IEEE Transactions on Neural Networks and Learning Systems.
“While we would like robots to work seamlessly in the real world, the real world is full of uncertainty,” says Michael Everett, a post-doctoral associate at MIT who helped develop the new approach. “It's important for a system to be aware of what it knows and what it is unsure about, which has been a major challenge for modern AI.”
His team focused on a type of AI called reinforcement learning (RL), whereby the model tries to learn the “value” of taking each action in a given scenario through trial-and-error. They developed a secondary algorithm, called Certified Adversarial Robustness for deep RL (CARRL), that can be built on top of an existing RL model.
“Our key innovation is that rather than blindly trusting the measurements, as is done today [by AI models], our algorithm CARRL thinks through all possible measurements that could have been made, and makes a decision that considers the worst-case outcome,” explains Everett.
In their study, the researchers tested CARRL across several different tasks, including collision avoidance simulations and Atari Pong. For younger readers who may not be familiar with it, Atari Pong is a classic computer game in which an electronic paddle is used to direct a ping pong ball on the screen. In the test scenario, CARRL helped move the paddle slightly higher or lower to compensate for the possibility that the ball could approach at a slightly different point than what the input data indicated. All the while, CARRL would try to ensure that the ball would make contact with at least some part of the paddle.
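A stripped-down way to express that worst-case idea: score each action under many slightly perturbed versions of the observation and pick the action whose lowest score is highest. CARRL itself computes certified bounds on those values rather than sampling, so the sketch below is only an approximation of the concept; the q_function here stands in for any trained Q-network.

```python
# Simplified, sampling-based sketch of worst-case action selection. CARRL computes
# certified bounds instead of sampling; this only approximates the idea.
import numpy as np

def robust_action(q_function, observation: np.ndarray,
                  epsilon: float = 0.05, n_samples: int = 64) -> int:
    """Pick the action with the best worst-case Q-value over an epsilon-ball
    of possible 'true' observations around the noisy one we received."""
    rng = np.random.default_rng()
    # Candidate observations the sensor reading could plausibly correspond to.
    perturbed = observation + rng.uniform(-epsilon, epsilon,
                                          size=(n_samples, observation.size))
    q_values = np.array([q_function(obs) for obs in perturbed])  # (n_samples, n_actions)
    worst_case = q_values.min(axis=0)  # lowest value of each action across samples
    return int(np.argmax(worst_case))  # action that is best even in the worst case
```

With epsilon set to zero this reduces to the usual greedy choice; larger values make the policy more conservative, which mirrors the trade-off Everett describes below.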
Gif: MIT Aerospace Controls Laboratory
In a perfect world, the information that an AI model is fed would be accurate all the time, and the AI model would perform well (left). But in some cases, the AI may be given inaccurate data, causing it to miss its targets (middle). The new algorithm CARRL helps AIs account for uncertainty in their data inputs, yielding better performance when relying on poor data (right).
Across all test scenarios, the RL model was better at compensating for potentially inaccurate or “noisy” data with CARRL than without it.
But the results also show that, like with humans, too much self-doubt and uncertainty can be unhelpful. In the collision avoidance scenario, for example, indulging in too much uncertainty caused the main moving object in the simulation to avoid both the obstacle and its goal. “There is definitely a limit to how ‘skeptical’ the algorithm can be without becoming overly conservative,” Everett says.
This research was funded by Ford Motor Company, but Everett notes that it could apply to many other commercial applications requiring safety-aware AI, including the aerospace, healthcare, and manufacturing domains.
“This work is a step toward my vision of creating ‘certifiable learning machines’—systems that can discover how to explore and perform in the real world on their own, while still having safety and robustness guarantees,” says Everett. “We'd like to bring CARRL into robotic hardware while continuing to explore the theoretical challenges at the interface of robotics and AI.” Continue reading