
Tag Archives: network
“Liquid” Neural Network Adapts on ...
In the realm of artificial intelligence, bigger is supposed to be better. Neural networks with billions of parameters power everyday AI-based tools like ChatGPT and DALL-E, and each new large language model (LLM) edges out its predecessors in size and complexity. Meanwhile, at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), a group of researchers has been working on going small.
In recent research, they demonstrated the efficiency of a new kind of very small machine-learning system, with just 20,000 parameters, called a liquid neural network. They showed that drones equipped with these networks excelled at navigating complex, new environments with precision, even edging out state-of-the-art systems. The systems were able to make decisions that led them to a target in previously unexplored forests and city spaces, and they could do it in the presence of added noise and other difficulties.
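To get a feel for the size gap, here is a back-of-envelope memory comparison, assuming 32-bit weights; the 20,000-parameter figure comes from the research above, while 175 billion parameters is an assumption used only as a stand-in for a large LLM:

```python
# Rough memory footprint of model weights alone, assuming float32 storage.
# Parameter counts: 20,000 (liquid network, from the article) vs. an
# assumed 175 billion as a representative large-LLM size.
BYTES_PER_PARAM = 4

liquid_params = 20_000
llm_params = 175_000_000_000

liquid_kb = liquid_params * BYTES_PER_PARAM / 1024       # kilobytes
llm_gb = llm_params * BYTES_PER_PARAM / 1024**3          # gigabytes

print(f"liquid net: {liquid_kb:.0f} KB")
print(f"large LLM:  {llm_gb:.0f} GB")
```

On these assumptions the liquid network's weights fit in well under 100 KB, comfortably inside a Raspberry Pi's memory, while the LLM's weights run to hundreds of gigabytes.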
Neural networks in typical machine-learning systems learn only during the training process. After that, their parameters are fixed. Liquid neural networks, explains Ramin Hasani, one of the CSAIL scientists, are a class of artificial intelligence systems that learn on the job, even after their training. In other words, they use “liquid” algorithms that continuously adapt to new information, such as a new environment, just like the brains of living organisms. “They are directly modeled after how neurons and synapses interact in biological brains,” Hasani says. In fact, their network architecture is inspired by the nervous system of C. elegans, a tiny worm commonly found in soil.
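The continuously adapting dynamics Hasani describes can be sketched in miniature. This is not the team's implementation; it is a minimal, single-neuron Euler integration of a liquid time-constant-style ODE, in which the effective time constant of the neuron's state depends on the input, and all parameter values (`tau`, `w`, `b`, `a`) are arbitrary choices for illustration:

```python
import math

def ltc_step(x, u, dt=0.01, tau=1.0, w=0.5, b=0.0, a=1.0):
    """One Euler step of dx/dt = -(1/tau + f(u)) * x + f(u) * a,
    where f is an input-dependent sigmoid gate. Because f multiplies
    the decay term, the neuron's effective time constant shifts with
    the input -- the "liquid" part of the name."""
    f = 1.0 / (1.0 + math.exp(-(w * u + b)))  # input-dependent conductance
    dxdt = -(1.0 / tau + f) * x + f * a
    return x + dt * dxdt

# The state relaxes toward a value set by the current input, at a speed
# that also depends on the input, so it keeps adapting as inputs change.
x = 0.0
for t in range(1000):
    u = 1.0 if t < 500 else -1.0  # step change in the input stream
    x = ltc_step(x, u)
```

After the input flips sign halfway through, the state settles to a new equilibrium determined by the new input, without any retraining of the parameters.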
“We can implement a liquid neural network that can drive a car, on a Raspberry Pi”. —Ramin Hasani, MIT’s CSAIL
The purpose of this experiment wasn’t just the robust autonomous navigation of a drone, Hasani says. “It was about testing the task-understanding capabilities of neural networks when they are deployed in our society as autonomous systems.”
As training data for the neural networks that would control the drone, the researchers used drone footage collected by a human pilot flying toward a target. “You expect the system to have learned to move towards the object,” Hasani says, without having defined what the object is, or provided any label about the environment. “The drone has to infer that the task is this: I want to move towards [the object].”
The team performed a series of experiments to test how learned navigational skills transferred to new, never-seen-before environments. They tested the system in many real-world settings, including in different seasons in a forest, and in an urban setting. The drones underwent range and stress tests, and the targets were rotated, occluded, set in motion, and more. Liquid neural networks were the only ones that could generalize to scenarios that they had never seen, without any fine-tuning, and could perform this task seamlessly and reliably.
The application of liquid neural networks to robotics could lead to more robust autonomous navigation systems, for search and rescue, wildlife monitoring, and deliveries, among other things. Smart mobility, according to Hasani, is going to be crucial as cities get denser, and the small size of these neural nets could be a huge advantage: “We can implement a liquid neural network that can drive a car, on a Raspberry Pi.”
Beyond Drones and Mobility
But the researchers believe liquid neural networks could go even further, becoming the future of decision-making for any kind of time-series data processing, including video and language. Because liquid neural networks are sequence data-processing engines, they could also predict financial and medical events. By processing vital signs, for example, a model could be developed to predict the status of a patient in the ICU.
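The ICU example boils down to framing a time series as a sequence-prediction problem: each training example pairs a window of recent readings with the value that followed. A minimal sketch, with made-up heart-rate readings standing in for real vital signs:

```python
# Illustrative only: fabricated heart-rate readings (beats per minute).
heart_rate = [72, 74, 71, 75, 78, 80, 77, 79]

def make_windows(series, width):
    """Return (history, next_value) pairs for supervised sequence learning."""
    return [(series[i:i + width], series[i + width])
            for i in range(len(series) - width)]

pairs = make_windows(heart_rate, width=3)
# e.g. pairs[0] == ([72, 74, 71], 75)
```

Any sequence model, a liquid neural network included, can then be trained to map each history window to the value that comes next.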
Over and above their other advantages, liquid neural networks also offer explainability and interpretability. In other words, they open the proverbial black box of the system’s decision-making process. “If I have only 34 neurons [in the drone system], I can literally go and figure out what is the function of each and every element,” says Hasani. That’s something that would be virtually impossible in a large-scale deep neural network. The smaller size of liquid neural nets also massively reduces the computational cost, and therefore the carbon footprint, of machine-learning models.
Hasani and his colleagues are looking for ways to improve liquid neural networks. “This paper covered a very controlled and straightforward kind of reasoning capability, but real-world interactions require more and more sophisticated reasoning problems,” he says. The team would like to design more complex tasks and test liquid neural networks to their limit, while also figuring out why liquid neural networks perform so much better than their competitors in reasoning tests.
Hasani explains liquid neural networks in this video:
Liquid Neural Networks | Ramin Hasani | TEDxMIT
There’s a ‘New’ Nirvana Song Out, ...
One of the primary capabilities separating human intelligence from artificial intelligence is our ability to be creative—to use nothing but the world around us, our experiences, and our brains to create art. At present, AI needs to be extensively trained on human-made works of art in order to produce new work, so we’ve still got a leg up. That said, neural networks like OpenAI’s GPT-3 and Russian designer Nikolay Ironov have been able to create content indistinguishable from human-made work.
Now there’s another example of AI artistry that’s hard to tell apart from the real thing, and it’s sure to excite 90s alternative rock fans the world over: a brand-new, never-heard-before Nirvana song. Or, more accurately, a song written by a neural network that was trained on Nirvana’s music.
The song is called “Drowned in the Sun,” and it does have a pretty Nirvana-esque ring to it. The neural network that wrote it is Magenta, which was launched by Google in 2016 with the goal of training machines to create art—or as the tool’s website puts it, exploring the role of machine learning as a tool in the creative process. Magenta was built using TensorFlow, Google’s massive open-source software library focused on deep learning applications.
The song was written as part of an album called Lost Tapes of the 27 Club, a project carried out by a Toronto-based organization called Over the Bridge focused on mental health in the music industry.
Here’s how a computer was able to write a song in the unique style of a deceased musician. Twenty to thirty tracks were fed into Magenta’s neural network in the form of MIDI files. MIDI stands for Musical Instrument Digital Interface; the format encodes a song’s musical parameters, such as pitch and tempo, in code. Components of each song, like the vocal melody or rhythm guitar, were fed in one at a time.
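For intuition on what a MIDI file actually encodes: in the MIDI standard, a single “note on” event is just three bytes, a status byte carrying the channel, the note number, and the velocity (loudness), with timing and tempo stored in separate events. The helper below is a generic illustration, not part of the project's tooling:

```python
def note_on(note, velocity=100, channel=0):
    """Build a MIDI 'note on' channel message: status byte 0x90 | channel,
    then note number (0-127, middle C = 60), then velocity (0-127)."""
    assert 0 <= note <= 127 and 0 <= velocity <= 127 and 0 <= channel <= 15
    return bytes([0x90 | channel, note, velocity])

print(note_on(60).hex())  # middle C at velocity 100 → "903c64"
```

Because a melody reduces to a stream of small, discrete numbers like these, it is a natural fit for the same sequence-modeling machinery used on text.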
The neural network found patterns in these different components, and got enough of a handle on them that when given a few notes to start from, it could use those patterns to predict what would come next; in this case, chords and melodies that sound like they could’ve been written by Kurt Cobain.
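The predict-what-comes-next idea can be shown with a deliberately simple stand-in. This is not Magenta, which uses deep sequence models; it is a first-order Markov sketch over made-up MIDI pitch numbers, purely to illustrate learning transition patterns and continuing a melody from a seed:

```python
from collections import Counter, defaultdict

# Fabricated training melody as MIDI pitch numbers (60 = middle C).
training_melody = [64, 67, 64, 62, 64, 67, 69, 67, 64, 62, 60, 62, 64]

# Count transitions: for each pitch, which pitches followed it, how often.
transitions = defaultdict(Counter)
for a, b in zip(training_melody, training_melody[1:]):
    transitions[a][b] += 1

def continue_melody(seed, length):
    """Greedily extend `seed` by always picking the most common successor."""
    melody = list(seed)
    for _ in range(length):
        successors = transitions[melody[-1]].most_common(1)
        if not successors:
            break  # no known continuation from this pitch
        melody.append(successors[0][0])
    return melody

print(continue_melody([60], 4))  # → [60, 62, 64, 67, 64]
```

A neural sequence model does something far richer, conditioning on long histories and many musical components at once, but the core loop is the same: learn the patterns, then sample what plausibly comes next.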
To be clear, Magenta didn’t spit out a ready-to-go song complete with lyrics. The AI wrote the music, but a different neural network wrote the lyrics (using essentially the same process as Magenta), and the team then sifted through “pages and pages” of output to find lyrics that fit the melodies Magenta created.
Eric Hogan, a singer for a Nirvana tribute band who the Over the Bridge team hired to sing “Drowned in the Sun,” felt that the lyrics were spot-on. “The song is saying, ‘I’m a weirdo, but I like it,’” he said. “That is total Kurt Cobain right there. The sentiment is exactly what he would have said.”
Cobain isn’t the only musician the Lost Tapes project tried to emulate; songs in the styles of Jimi Hendrix, Jim Morrison, and Amy Winehouse were also included. What all these artists have in common is that they died at the age of 27.
The project is meant to raise awareness around mental health, particularly among music industry professionals. It’s not hard to think of great artists of all persuasions—musicians, painters, writers, actors—whose lives are cut short due to severe depression and other mental health issues for which it can be hard to get help. These issues are sometimes romanticized, as suffering does tend to create art that’s meaningful, relatable, and timeless. But according to the Lost Tapes website, suicide attempts among music industry workers are more than double that of the general population.
How many more hit songs would these artists have written if they were still alive? We’ll never know, but hopefully Lost Tapes of the 27 Club and projects like it will raise awareness of mental health issues, both in the music industry and in general, and help people in need find the right resources. Because no matter how good computers eventually get at creating music, writing, or other art, as Lost Tapes’ website pointedly says, “Even AI will never replace the real thing.”
Image Credit: Edward Xu on Unsplash