Category Archives: Human Robots
#439820 How Musicologists and Scientists Used AI ...
When Ludwig van Beethoven died in 1827, he was three years removed from the completion of his Ninth Symphony, a work heralded by many as his magnum opus. He had started work on his 10th Symphony but, due to deteriorating health, wasn’t able to make much headway: All he left behind were some musical sketches.
Ever since then, Beethoven fans and musicologists have puzzled and lamented over what could have been. His notes teased at some magnificent reward, albeit one that seemed forever out of reach.
Now, thanks to the work of a team of music historians, musicologists, composers and computer scientists, Beethoven’s vision will come to life.
I presided over the artificial intelligence side of the project, leading a group of scientists at the creative AI startup Playform AI that taught a machine both Beethoven’s entire body of work and his creative process.
A full recording of Beethoven’s 10th Symphony is set to be released on Oct. 9, 2021, the same day as the world premiere performance scheduled to take place in Bonn, Germany—the culmination of a two-year-plus effort.
Past Attempts Hit a Wall
Around 1817, the Royal Philharmonic Society in London commissioned Beethoven to write his Ninth and 10th symphonies. Written for an orchestra, symphonies often contain four movements: the first is performed at a fast tempo, the second at a slower one, the third at a medium or fast tempo, and the last at a fast tempo.
Beethoven completed his Ninth Symphony in 1824, which concludes with the timeless “Ode to Joy.”
But when it came to the 10th Symphony, Beethoven didn’t leave much behind, other than some musical notes and a handful of ideas he had jotted down.
A page of Beethoven’s notes for his planned 10th Symphony. Image Credit: Beethoven House Museum, CC BY-SA
There have been some past attempts to reconstruct parts of Beethoven’s 10th Symphony. Most famously, in 1988, musicologist Barry Cooper ventured to complete the first and second movements. He wove together 250 bars of music from the sketches to create what was, in his view, a rendition of the first movement faithful to Beethoven’s vision.
Yet the sparseness of Beethoven’s sketches made it impossible for symphony experts to go beyond that first movement.
Assembling the Team
In early 2019, Dr. Matthias Röder, the director of the Karajan Institute, an organization in Salzburg, Austria, that promotes music technology, contacted me. He explained that he was putting together a team to complete Beethoven’s 10th Symphony in celebration of the composer’s 250th birthday. Aware of my work on AI-generated art, he wanted to know if AI would be able to help fill in the blanks left by Beethoven.
The challenge seemed daunting. To pull it off, AI would need to do something it had never done before. But I said I would give it a shot.
Röder then compiled a team that included Austrian composer Walter Werzowa. Famous for writing Intel’s signature bong jingle, Werzowa was tasked with putting together a new kind of composition that would integrate what Beethoven left behind with what the AI would generate. Mark Gotham, a computational music expert, led the effort to transcribe Beethoven’s sketches and process his entire body of work so the AI could be properly trained.
The team also included Robert Levin, a musicologist at Harvard University who also happens to be an incredible pianist. Levin had previously finished a number of incomplete 18th-century works by Mozart and Johann Sebastian Bach.
The Project Takes Shape
In June 2019, the group gathered for a two-day workshop at Harvard’s music library. In a large room with a piano, a blackboard and a stack of Beethoven’s sketchbooks spanning most of his known works, we talked about how fragments could be turned into a complete piece of music and how AI could help solve this puzzle, while still remaining faithful to Beethoven’s process and vision.
The music experts in the room were eager to learn more about the sort of music AI had created in the past. I told them how AI had successfully generated music in the style of Bach. However, this was only a Bach-like harmonization of an input melody; it didn’t come close to what we needed to do: construct an entire symphony from a handful of phrases.
Meanwhile, the scientists in the room—myself included—wanted to learn about what sort of materials were available, and how the experts envisioned using them to complete the symphony.
The task at hand eventually crystallized. We would need to use notes and completed compositions from Beethoven’s entire body of work—along with the available sketches from the 10th Symphony—to create something that Beethoven himself might have written.
This was a tremendous challenge. We didn’t have a machine that we could feed sketches to, push a button and have it spit out a symphony. Most AI available at the time couldn’t continue an uncompleted piece of music beyond a few additional seconds.
We would need to push the boundaries of what creative AI could do by teaching the machine Beethoven’s creative process—how he would take a few bars of music and painstakingly develop them into stirring symphonies, quartets, and sonatas.
Piecing Together Beethoven’s Creative Process
As the project progressed, the human side and the machine side of the collaboration evolved. Werzowa, Gotham, Levin, and Röder deciphered and transcribed the sketches from the 10th Symphony, trying to understand Beethoven’s intentions. Using his completed symphonies as a template, they attempted to piece together the puzzle of where the fragments of sketches should go—which movement, which part of the movement.
They had to make decisions, like determining whether a sketch indicated the starting point of a scherzo, which is a very lively part of the symphony, typically in the third movement. Or they might determine that a line of music was likely the basis of a fugue, which is a melody created by interweaving parts that all echo a central theme.
The AI side of the project—my side—found itself grappling with a range of challenging tasks.
First, and most fundamentally, we needed to figure out how to take a short phrase, or even just a motif, and use it to develop a longer, more complicated musical structure, just as Beethoven would have done. For example, the machine had to learn how Beethoven constructed the Fifth Symphony out of a basic four-note motif.
Next, because the continuation of a phrase also needs to follow a certain musical form, whether it’s a scherzo, trio, or fugue, the AI needed to learn Beethoven’s process for developing these forms.
The to-do list grew: We had to teach the AI how to take a melodic line and harmonize it. The AI needed to learn how to bridge two sections of music together. And we realized the AI had to be able to compose a coda, which is a segment that brings a section of a piece of music to its conclusion.
Finally, once we had a full composition, the AI was going to have to figure out how to orchestrate it, which involves assigning different instruments for different parts.
And it had to pull off all of these tasks the way Beethoven himself might have.
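This article can’t capture the models themselves, but the core continuation idea can be sketched with something far simpler than what we built: learn which notes tend to follow which short contexts in a corpus, then sample from those statistics to extend a motif. Below is a minimal Python illustration using a Markov chain, with a tiny made-up corpus standing in for Beethoven’s body of work; the names and data are hypothetical, and our actual system was far more sophisticated.

    import random
    from collections import defaultdict

    # Toy sketch: learn note-to-note transition statistics from a corpus,
    # then extend a short motif by sampling from them. Illustrative only.

    def train_markov(corpus, order=2):
        """Build an order-n Markov model from sequences of note names."""
        transitions = defaultdict(list)
        for piece in corpus:
            for i in range(len(piece) - order):
                context = tuple(piece[i:i + order])
                transitions[context].append(piece[i + order])
        return transitions

    def continue_motif(motif, transitions, length=16, order=2):
        """Extend a motif by sampling from the learned transitions."""
        notes = list(motif)
        for _ in range(length):
            candidates = transitions.get(tuple(notes[-order:]))
            if not candidates:  # unseen context: stop extending
                break
            notes.append(random.choice(candidates))
        return notes

    # Hypothetical stand-in corpus; the opening motif echoes the Fifth.
    corpus = [
        ["G", "G", "G", "Eb", "F", "F", "F", "D", "G", "G", "G", "Eb"],
        ["C", "E", "G", "C", "B", "G", "E", "C", "D", "F", "A", "D"],
    ]
    model = train_markov(corpus)
    print(continue_motif(["G", "G", "G", "Eb"], model))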
Passing the First Big Test
In November 2019, the team met in person again—this time, in Bonn, at the Beethoven House Museum, where the composer was born and raised.
This meeting was the litmus test for determining whether AI could complete this project. We printed musical scores that had been developed by AI and built off the sketches from Beethoven’s 10th. A pianist performed in a small concert hall in the museum before a group of journalists, music scholars, and Beethoven experts.
Journalists and musicians gather to hear a pianist perform parts of Beethoven’s 10th Symphony. Image Credit: Ahmed Elgammal, CC BY-SA
We challenged the audience to determine where Beethoven’s phrases ended and where the AI extrapolation began. They couldn’t.
A few days later, one of these AI-generated scores was played by a string quartet at a news conference. Only those who intimately knew Beethoven’s sketches for the 10th Symphony could determine when the AI-generated parts came in.
The success of these tests told us we were on the right track. But these were just a couple of minutes of music. There was still much more work to do.
Ready for the World
At every point, Beethoven’s genius loomed, challenging us to do better. As the project evolved, the AI did as well. Over the ensuing 18 months, we constructed and orchestrated two entire movements of more than 20 minutes apiece.
We anticipate some pushback to this work—those who will say that the arts should be off-limits from AI, and that AI has no business trying to replicate the human creative process. Yet when it comes to the arts, I see AI not as a replacement, but as a tool—one that opens doors for artists to express themselves in new ways.
This project would not have been possible without the expertise of human historians and musicians. It took an immense amount of work—and, yes, creative thinking—to accomplish this goal.
At one point, one of the music experts on the team said that the AI reminded him of an eager music student who practices every day, learns, and becomes better and better.
Now that student, having taken the baton from Beethoven, is ready to present the 10th Symphony to the world.
Audio: a selection from Beethoven’s 10th Symphony. YouTube/Modern Recordings, CC BY-SA
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Circe Denyer
#439816 This Bipedal Drone Robot Can Walk, Fly, ...
Most animals are limited to either walking, flying, or swimming, with a handful of lucky species whose physiology allows them to cross over. A new robot took inspiration from them, and can fly like a bird just as well as it can walk like a (weirdly awkward, metallic, tiny) person. It also happens to be able to skateboard and slackline, two skills most humans will never pick up.
Described in a paper published this week in Science Robotics, the robot’s name is Leo, which is short for Leonardo, which is short for LEgs ONboARD drOne. The name makes it sound like a drone with legs, but it has a somewhat humanoid shape, with multi-joint legs, propeller thrusters that look like arms, a “body” that contains its motors and electronics, and a dome-shaped protection helmet.
Leo was built by a team at Caltech, and they were particularly interested in how the robot would transition between walking and flying. The team notes that they studied the way birds use their legs to generate thrust when they take off, and applied similar principles to the robot. In a video that shows Leo approaching a staircase, taking off, and gliding over the stairs to land near the bottom, the robot’s motions are seamlessly graceful.
“There is a similarity between how a human wearing a jet suit controls their legs and feet when landing or taking off and how LEO uses synchronized control of distributed propeller-based thrusters and leg joints,” said Soon-Jo Chung, one of the paper’s authors and a professor at Caltech. “We wanted to study the interface of walking and flying from the dynamics and control standpoint.”
Leo walks at a speed of 20 centimeters (7.87 inches) per second, but can move faster by mixing in some flying with the walking. How wide our steps are, where we place our feet, and where our torsos are in relation to our legs all help us balance when we walk. The robot uses its propellers to help it balance, while its leg actuators move it forward.
To teach the robot to slackline—which is much harder than walking on a balance beam—the team overrode its foot contact sensors with a fixed virtual foot contact centered just underneath it, because the sensors weren’t able to detect the line. The propellers played a big part as well, helping keep Leo upright and balanced.
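In code, that workaround amounts to substituting a fixed virtual contact point for the sensor reading whenever the robot is on the line. A minimal sketch, with hypothetical function and variable names:

    # Sketch of the sensor override described above. When the slackline is
    # too thin to trigger the foot contact sensors, the balance controller
    # is fed a fixed virtual contact centered just beneath the body.
    def contact_point(sensor_reading, body_xy, on_slackline=False):
        if on_slackline:
            return (body_xy[0], body_xy[1], 0.0)  # virtual contact under the body
        return sensor_reading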
For the robot to ride a skateboard, the team broke the process down into two distinct components: controlling the steering angle and controlling the skateboard’s acceleration and deceleration. Placing Leo’s legs in specific spots on the board made it tilt to enable steering, and forward acceleration was achieved by moving the bot’s center of mass backward while pitching the body forward at the same time.
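A toy version of that two-channel decomposition might look like the sketch below, where the gains and the mappings from error to board tilt and center-of-mass shift are invented for illustration; the controller in the paper is far more involved.

    # Two independent proportional channels, as described above: steering
    # via board tilt, speed via a center-of-mass shift plus body pitch.
    # All gains and state values are hypothetical.

    def steering_command(desired_heading, current_heading, k_tilt=0.8):
        """Tilt the board in proportion to the heading error (radians)."""
        return k_tilt * (desired_heading - current_heading)

    def acceleration_command(desired_speed, current_speed, k_com=0.05, k_pitch=0.5):
        """Shift the center of mass backward and pitch the body forward
        in proportion to the speed error."""
        speed_error = desired_speed - current_speed
        com_shift = -k_com * speed_error    # negative = backward (meters)
        body_pitch = k_pitch * speed_error  # positive = forward (radians)
        return com_shift, body_pitch

    tilt = steering_command(desired_heading=0.2, current_heading=0.05)
    com, pitch = acceleration_command(desired_speed=0.5, current_speed=0.3)
    print(f"tilt={tilt:.3f} rad, com shift={com:.3f} m, pitch={pitch:.3f} rad")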
So besides being cool (and a little creepy), what’s the goal of developing a robot like Leo? The paper authors see robots like Leo enabling a range of robotic missions that couldn’t be carried out by ground or aerial robots.
“Perhaps the most well-suited applications for Leo would be the ones that involve physical interactions with structures at a high altitude, which are usually dangerous for human workers and call for a substitution by robotic workers,” the paper’s authors said. Examples could include high-voltage line inspection, painting tall bridges or other high-up surfaces, inspecting building roofs or oil refinery pipes, or landing sensitive equipment on an extraterrestrial object.
Next up for Leo is an upgrade to its performance via a more rigid leg design, which will help support the robot’s weight and increase the thrust force of its propellers. The team also wants to make Leo more autonomous, and plans to add a drone landing control algorithm to its software, ultimately aiming for the robot to be able to decide where and when to walk versus fly.
Leo hasn’t quite achieved the wow factor of Boston Dynamics’ dancing robots (or its Atlas that can do parkour), but it’s on its way.
Image Credit: Caltech Center for Autonomous Systems and Technologies/Science Robotics
#439815 How to Prepare Your Workforce for AI ...
Image by John Conde from Pixabay
Despite a myriad of articles, research papers, and conversations about artificial intelligence and machine learning, predictions about their impact vary significantly. The vast majority agree that AI is one of the keys to digital transformation and that it will change business and the job market forever. However, it’s …
The post How to Prepare Your Workforce for AI Disruption? appeared first on TFOT.
#439804 How Quantum Computers Can Be Used to ...
Using computer simulations to design new chips played a crucial role in the rapid improvements in processor performance we’ve experienced in recent decades. Now Chinese researchers have extended the approach to the quantum world.
Electronic design automation tools started to become commonplace in the early 1980s as the complexity of processors rose exponentially, and today they are an indispensable tool for chip designers.
More recently, Google has been turbocharging the approach by using artificial intelligence to design the next generation of its AI chips. This holds the promise of setting off a process of recursive self-improvement that could lead to rapid performance gains for AI.
Now, New Scientist has reported on a team from the University of Science and Technology of China in Shanghai that has applied the same ideas to another emerging field of computing: quantum processors. In a paper posted to the arXiv pre-print server, the researchers describe how they used a quantum computer to design a new type of qubit that significantly outperformed their previous design.
“Simulations of high-complexity quantum systems, which are intractable for classical computers, can be efficiently done with quantum computers,” the authors wrote. “Our work opens the way to designing advanced quantum processors using existing quantum computing resources.”
At the heart of the idea is the fact that the complexity of quantum systems grows exponentially as they increase in size. As a result, even the most powerful supercomputers struggle to simulate fairly small quantum systems.
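A quick back-of-envelope calculation shows why. Simulating an n-qubit system classically means storing 2^n complex amplitudes, so memory requirements explode long before a system looks “large”:

    # Back-of-envelope: a full n-qubit state vector holds 2**n complex
    # amplitudes at 16 bytes each (two 64-bit floats).
    for n in (30, 40, 53):
        gigabytes = (2 ** n) * 16 / 1e9
        print(f"{n} qubits: {gigabytes:,.0f} GB")
    # About 17 GB for 30 qubits; roughly 144 million GB for 53 qubits.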
This was the basis for Google’s groundbreaking display of “quantum supremacy” in 2019. The company’s researchers used a 53-qubit processor to run a random quantum circuit a million times and showed that it would take roughly 10,000 years to simulate the experiment on the world’s fastest supercomputer.
This means that using classical computers to help in the design of new quantum computers is likely to hit fundamental limits pretty quickly. Using a quantum computer, however, sidesteps the problem because it can exploit the same oddities of the quantum world that make the problem complex in the first place.
This is exactly what the Chinese researchers did. They used an algorithm called a variational quantum eigensolver to simulate the kind of superconducting electronic circuit found at the heart of a quantum computer. This was used to explore what happens when certain energy levels in the circuit are altered.
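The article doesn’t include the team’s code, but the variational idea itself is easy to illustrate: parameterize a trial state, compute the energy of a Hamiltonian in that state, and tune the parameter to minimize it. The sketch below does this entirely classically for a made-up two-level Hamiltonian; a real variational quantum eigensolver evaluates the energy term on quantum hardware, which is how it sidesteps the exponential cost of classical simulation.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Classical toy of the variational principle behind a VQE. The trial
    # state is |psi(theta)> = [cos(theta/2), sin(theta/2)]; we minimize
    # <psi|H|psi> for an illustrative single-qubit Hamiltonian.
    H = np.array([[1.0, 0.5],
                  [0.5, -1.0]])

    def energy(theta):
        psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
        return psi @ H @ psi

    result = minimize_scalar(energy, bounds=(0, 2 * np.pi), method="bounded")
    print(f"variational estimate: {result.fun:.4f}")
    print(f"exact ground energy:  {np.linalg.eigvalsh(H)[0]:.4f}")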
Normally this kind of experiment would require them to build large numbers of physical prototypes and test them, but instead the team was able to rapidly model the impact of the changes. The upshot was that the researchers discovered a new type of qubit that was more powerful than the one they were already using.
Any two-level quantum system can act as a qubit, but most superconducting quantum computers use transmons, which encode quantum states into the oscillations of electrons. By tweaking the energy levels of their simulated quantum circuit, the researchers were able to discover a new qubit design they dubbed a plasonium.
It is less than half the size of a transmon, and when the researchers fabricated it they found that it holds its quantum state for longer and is less prone to errors. It still works on similar principles to the transmon, so it’s possible to manipulate it using the same control technologies.
The researchers point out that this is only a first prototype. With further optimization, along with the integration of recent advances in superconducting materials and surface treatment methods, they expect performance to increase even more.
But the new qubit the researchers have designed is probably not their most significant contribution. By demonstrating that even today’s rudimentary quantum computers can help design future devices, they’ve opened the door to a virtuous cycle that could significantly speed innovation in this field.
Image Credit: Pete Linforth from Pixabay