Category Archives: Human Robots

Everything about Humanoid Robots and Androids

#439820 How Musicologists and Scientists Used AI ...

When Ludwig van Beethoven died in 1827, he was three years removed from the completion of his Ninth Symphony, a work heralded by many as his magnum opus. He had started work on his 10th Symphony but, due to deteriorating health, wasn’t able to make much headway: All he left behind were some musical sketches.

Ever since then, Beethoven fans and musicologists have puzzled and lamented over what could have been. His notes teased at some magnificent reward, albeit one that seemed forever out of reach.

Now, thanks to the work of a team of music historians, musicologists, composers and computer scientists, Beethoven’s vision will come to life.

I presided over the artificial intelligence side of the project, leading a group of scientists at the creative AI startup Playform AI that taught a machine both Beethoven’s entire body of work and his creative process.

A full recording of Beethoven’s 10th Symphony is set to be released on Oct. 9, 2021, the same day as the world premiere performance scheduled to take place in Bonn, Germany—the culmination of a two-year-plus effort.

Past Attempts Hit a Wall
Around 1817, the Royal Philharmonic Society in London commissioned Beethoven to write his ninth and 10th symphonies. Written for an orchestra, symphonies often contain four movements: the first is performed at a fast tempo, the second at a slower one, the third at a medium or fast tempo, and the last at a fast tempo.

Beethoven completed his Ninth Symphony in 1824, which concludes with the timeless “Ode to Joy.”

But when it came to the 10th Symphony, Beethoven didn’t leave much behind, other than some musical notes and a handful of ideas he had jotted down.

A page of Beethoven’s notes for his planned 10th Symphony. Image Credit: Beethoven House Museum, CC BY-SA

There have been some past attempts to reconstruct parts of Beethoven’s 10th Symphony. Most famously, in 1988, musicologist Barry Cooper ventured to complete the first and second movements. He wove together 250 bars of music from the sketches to create what was, in his view, a production of the first movement that was faithful to Beethoven’s vision.

Yet the sparseness of Beethoven’s sketches made it impossible for symphony experts to go beyond that first movement.

Assembling the Team
In early 2019, Dr. Matthias Röder, the director of the Karajan Institute, an organization in Salzburg, Austria, that promotes music technology, contacted me. He explained that he was putting together a team to complete Beethoven’s 10th Symphony in celebration of the composer’s 250th birthday. Aware of my work on AI-generated art, he wanted to know if AI would be able to help fill in the blanks left by Beethoven.

The challenge seemed daunting. To pull it off, AI would need to do something it had never done before. But I said I would give it a shot.

Röder then compiled a team that included Austrian composer Walter Werzowa. Famous for writing Intel’s signature bong jingle, Werzowa was tasked with putting together a new kind of composition that would integrate what Beethoven left behind with what the AI would generate. Mark Gotham, a computational music expert, led the effort to transcribe Beethoven’s sketches and process his entire body of work so the AI could be properly trained.

The team also included Robert Levin, a musicologist at Harvard University who also happens to be an incredible pianist. Levin had previously finished a number of incomplete 18th-century works by Mozart and Johann Sebastian Bach.

The Project Takes Shape
In June 2019, the group gathered for a two-day workshop at Harvard’s music library. In a large room with a piano, a blackboard and a stack of Beethoven’s sketchbooks spanning most of his known works, we talked about how fragments could be turned into a complete piece of music and how AI could help solve this puzzle, while still remaining faithful to Beethoven’s process and vision.

The music experts in the room were eager to learn more about the sort of music AI had created in the past. I told them how AI had successfully generated music in the style of Bach. However, this was only a harmonization of an inputted melody that sounded like Bach. It didn’t come close to what we needed to do: construct an entire symphony from a handful of phrases.

Meanwhile, the scientists in the room—myself included—wanted to learn about what sort of materials were available, and how the experts envisioned using them to complete the symphony.

The task at hand eventually crystallized. We would need to use notes and completed compositions from Beethoven’s entire body of work—along with the available sketches from the 10th Symphony—to create something that Beethoven himself might have written.

This was a tremendous challenge. We didn’t have a machine that we could feed sketches to, push a button and have it spit out a symphony. Most AI available at the time couldn’t continue an uncompleted piece of music beyond a few additional seconds.

We would need to push the boundaries of what creative AI could do by teaching the machine Beethoven’s creative process—how he would take a few bars of music and painstakingly develop them into stirring symphonies, quartets, and sonatas.

Piecing Together Beethoven’s Creative Process
As the project progressed, the human side and the machine side of the collaboration evolved. Werzowa, Gotham, Levin, and Röder deciphered and transcribed the sketches from the 10th Symphony, trying to understand Beethoven’s intentions. Using his completed symphonies as a template, they attempted to piece together the puzzle of where the fragments of sketches should go—which movement, which part of the movement.

They had to make decisions, like determining whether a sketch indicated the starting point of a scherzo, which is a very lively part of the symphony, typically in the third movement. Or they might determine that a line of music was likely the basis of a fugue, which is a melody created by interweaving parts that all echo a central theme.

The AI side of the project—my side—found itself grappling with a range of challenging tasks.

First, and most fundamentally, we needed to figure out how to take a short phrase, or even just a motif, and use it to develop a longer, more complicated musical structure, just as Beethoven would have done. For example, the machine had to learn how Beethoven constructed the Fifth Symphony out of a basic four-note motif.

Next, because the continuation of a phrase also needs to follow a certain musical form, whether it’s a scherzo, trio, or fugue, the AI needed to learn Beethoven’s process for developing these forms.

The to-do list grew: We had to teach the AI how to take a melodic line and harmonize it. The AI needed to learn how to bridge two sections of music together. And we realized the AI had to be able to compose a coda, which is a segment that brings a section of a piece of music to its conclusion.

Finally, once we had a full composition, the AI was going to have to figure out how to orchestrate it, which involves assigning different instruments for different parts.
And it had to pull off these tasks in the way Beethoven might do so.
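A deliberately simplified sketch can illustrate the first of these tasks — developing a short motif into a longer line. This is not Playform AI's model; it is a toy first-order Markov chain over pitch intervals, with the corpus and all names invented for illustration.

```python
import random

def train_interval_model(melodies):
    """Learn first-order transition counts between successive pitch intervals."""
    model = {}
    for melody in melodies:
        intervals = [b - a for a, b in zip(melody, melody[1:])]
        for prev, nxt in zip(intervals, intervals[1:]):
            model.setdefault(prev, []).append(nxt)
    return model

def develop_motif(motif, model, length, rng):
    """Extend a short motif into a longer line -- a crude stand-in for 'development'."""
    notes = list(motif)
    prev = notes[-1] - notes[-2]
    while len(notes) < length:
        choices = model.get(prev)
        if not choices:  # unseen interval: restate the motif's opening leap
            nxt = motif[1] - motif[0]
        else:
            nxt = rng.choice(choices)
        notes.append(notes[-1] + nxt)
        prev = nxt
    return notes

# Seed with the famous four-note motif of the Fifth (G-G-G-Eb, as MIDI numbers),
# trained on a tiny made-up corpus.
corpus = [[67, 67, 67, 63], [65, 65, 65, 62], [67, 65, 63, 62, 60]]
model = train_interval_model(corpus)
line = develop_motif([67, 67, 67, 63], model, length=16, rng=random.Random(0))
```

The real system had to do far more — respect musical form, harmonize, and orchestrate — but the basic shape of the problem, "given a fragment, produce a plausible continuation in the composer's style," is the same.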

Passing the First Big Test
In November 2019, the team met in person again—this time, in Bonn, at the Beethoven House Museum, where the composer was born and raised.

This meeting was the litmus test for determining whether AI could complete this project. We printed musical scores that had been developed by AI and built off the sketches from Beethoven’s 10th. A pianist performed in a small concert hall in the museum before a group of journalists, music scholars, and Beethoven experts.

Journalists and musicians gather to hear a pianist perform parts of Beethoven’s 10th Symphony. Image Credit: Ahmed Elgammal, CC BY-SA

We challenged the audience to determine where Beethoven’s phrases ended and where the AI extrapolation began. They couldn’t.

A few days later, one of these AI-generated scores was played by a string quartet in a news conference. Only those who intimately knew Beethoven’s sketches for the 10th Symphony could determine when the AI-generated parts came in.

The success of these tests told us we were on the right track. But these were just a couple of minutes of music. There was still much more work to do.

Ready for the World
At every point, Beethoven’s genius loomed, challenging us to do better. As the project evolved, the AI did as well. Over the ensuing 18 months, we constructed and orchestrated two entire movements of more than 20 minutes apiece.

We anticipate some pushback to this work—those who will say that the arts should be off-limits from AI, and that AI has no business trying to replicate the human creative process. Yet when it comes to the arts, I see AI not as a replacement, but as a tool—one that opens doors for artists to express themselves in new ways.

This project would not have been possible without the expertise of human historians and musicians. It took an immense amount of work—and, yes, creative thinking—to accomplish this goal.

At one point, one of the music experts on the team said that the AI reminded him of an eager music student who practices every day, learns, and becomes better and better.

Now that student, having taken the baton from Beethoven, is ready to present the 10th Symphony to the world.

The piece above is a selection from Beethoven’s 10th Symphony. Audio: YouTube/Modern Recordings, CC BY-SA

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Circe Denyer

Posted in Human Robots

#439816 This Bipedal Drone Robot Can Walk, Fly, ...

Most animals are limited to either walking, flying, or swimming, with a handful of lucky species whose physiology allows them to cross over. A new robot took inspiration from them, and can fly like a bird just as well as it can walk like a (weirdly awkward, metallic, tiny) person. It also happens to be able to skateboard and slackline, two skills most humans will never pick up.

Described in a paper published this week in Science Robotics, the robot’s name is Leo, which is short for Leonardo, which is short for LEgs ONboARD drOne. The name makes it sound like a drone with legs, but it has a somewhat humanoid shape, with multi-joint legs, propeller thrusters that look like arms, a “body” that contains its motors and electronics, and a dome-shaped protection helmet.

Leo was built by a team at Caltech, and they were particularly interested in how the robot would transition between walking and flying. The team notes that they studied the way birds use their legs to generate thrust when they take off, and applied similar principles to the robot. In a video that shows Leo approaching a staircase, taking off, and gliding over the stairs to land near the bottom, the robot’s motions are seamlessly graceful.

“There is a similarity between how a human wearing a jet suit controls their legs and feet when landing or taking off and how LEO uses synchronized control of distributed propeller-based thrusters and leg joints,” said Soon-Jo Chung, one of the paper’s authors and a professor at Caltech. “We wanted to study the interface of walking and flying from the dynamics and control standpoint.”

Leo walks at a speed of 20 centimeters (7.87 inches) per second, but can move faster by mixing in some flying with the walking. How wide our steps are, where we place our feet, and where our torsos are in relation to our legs all help us balance when we walk. The robot uses its propellers to help it balance, while its leg actuators move it forward.

To teach the robot to slackline—which is much harder than walking on a balance beam—the team overrode its foot contact sensors with a fixed virtual foot contact centered just underneath it, because the sensors weren’t able to detect the line. The propellers played a big part as well, helping keep Leo upright and balanced.

For the robot to ride a skateboard, the team broke the process down into two distinct components: controlling the steering angle and controlling the skateboard’s acceleration and deceleration. Placing Leo’s legs in specific spots on the board made it tilt to enable steering, and forward acceleration was achieved by moving the bot’s center of mass backward while pitching the body forward at the same time.
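That two-part decomposition can be sketched as two independent controllers, one per component. The gains, limits, and names below are hypothetical, chosen only to make the idea concrete; this is not the Caltech controller.

```python
from dataclasses import dataclass

@dataclass
class SkateboardCommand:
    steer_tilt: float   # board tilt commanded via foot placement (rad)
    lean_shift: float   # center-of-mass offset along the board (m)

def steering_control(heading_error, k_steer=0.8, max_tilt=0.2):
    """Tilt the board toward the desired heading; saturate to stay stable."""
    tilt = k_steer * heading_error
    return max(-max_tilt, min(max_tilt, tilt))

def speed_control(speed_error, k_speed=0.05, max_shift=0.06):
    """Shift the center of mass backward (negative) to accelerate forward."""
    shift = -k_speed * speed_error
    return max(-max_shift, min(max_shift, shift))

def skateboard_step(heading_error, speed_error):
    """One control tick: steering and speed are handled independently."""
    return SkateboardCommand(steering_control(heading_error),
                             speed_control(speed_error))
```

Decoupling the two axes like this is a common trick in control design: each loop can be tuned separately as long as their interactions stay small.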

So besides being cool (and a little creepy), what’s the goal of developing a robot like Leo? The paper authors see robots like Leo enabling a range of robotic missions that couldn’t be carried out by ground or aerial robots.

“Perhaps the most well-suited applications for Leo would be the ones that involve physical interactions with structures at a high altitude, which are usually dangerous for human workers and call for a substitution by robotic workers,” the paper’s authors said. Examples could include high-voltage line inspection, painting tall bridges or other high-up surfaces, inspecting building roofs or oil refinery pipes, or landing sensitive equipment on an extraterrestrial object.

Next up for Leo is an upgrade to its performance via a more rigid leg design, which will help support the robot’s weight and increase the thrust force of its propellers. The team also wants to make Leo more autonomous, and plans to add a drone landing control algorithm to its software, ultimately aiming for the robot to be able to decide where and when to walk versus fly.

Leo hasn’t quite achieved the wow factor of Boston Dynamics’ dancing robots (or its Atlas that can do parkour), but it’s on its way.

Image Credit: Caltech Center for Autonomous Systems and Technologies/Science Robotics


#439815 How to Prepare Your Workforce for AI ...

Image by John Conde from Pixabay

Despite a myriad of articles, research papers, and conversations about artificial intelligence and machine learning, predictions about their impact vary significantly. The vast majority agrees that AI is one of the keys to digital transformation and that it will change business and the job market forever. However, it’s …

The post How to Prepare Your Workforce for AI Disruption? appeared first on TFOT.


#439808 Caltech’s LEO Flying Biped Can ...

Back in February of 2019, we wrote about a sort of humanoid robot thing (?) under development at Caltech, called Leonardo. LEO combines lightweight bipedal legs with torso-mounted thrusters powerful enough to lift the entire robot off the ground, which can handily take care of on-ground dynamic balancing while also enabling some slick aerial maneuvers.

In a paper published today in Science Robotics, the Caltech researchers get us caught up on what they've been doing with LEO for the past several years, and it can now skateboard, slackline, and make dainty airborne hops with exceptionally elegant landings.

Those heels! Seems like a real sponsorship opportunity, right?

The version of LEO you see here is significantly different from the version we first met two years ago. Most importantly, while “Leonardo” used to stand for “LEg ON Aerial Robotic DrOne,” it now stands for “LEgs ONboARD drOne,” which may be the first even moderately successful re-backronym I've ever seen. Otherwise, the robot has been completely redesigned, with the version you see here sharing zero parts in hardware or software with the 2019 version. We're told that the old robot, and I'm quoting from the researchers here, “unfortunately never worked,” in the sense that it was much more limited than the new one—the old design had promise, but it couldn't really walk and the thrusters were only useful for jumping augmentation as opposed to sustained flight.

To enable the new LEO to fly, it now has much lighter legs driven by lightweight servo motors. The thrusters have been changed from two coaxial propellers to four tilted propellers, enabling attitude control in all directions. And everything is now onboard, including computers, batteries, and a new software stack. I particularly love how LEO lands into a walking gait so gently and elegantly. Professor Soon-Jo Chung from Caltech's Aerospace Robotics and Control Lab explains how they did it:

Creatures that have more than two locomotion modes must learn and master how to properly switch between them. Birds, for instance, undergo a complex yet intriguing behavior at the transitional interface of their two locomotion modes of flying and walking. Similarly, the Leonardo robot uses synchronized control of distributed propeller-based thrusters and leg joints to realize smooth transitions between its flying and walking modes. In particular, the LEO robot follows a smooth flying trajectory up to the landing point prior to landing. The forward landing velocity is then matched to the chosen walking speed, and the walking phase is triggered when one foot touches the ground. After the touchdown, the robot continues to walk by tracking its walking trajectory. A state machine is run on-board LEO to allow for these smooth transitions, which are detected using contact sensors embedded in the foot.
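The state machine Chung describes can be sketched in a few lines. The mode names and inputs below are hypothetical simplifications of what runs on LEO, which also handles trajectory tracking and velocity matching inside each mode.

```python
from enum import Enum, auto

class Mode(Enum):
    FLYING = auto()
    LANDING = auto()
    WALKING = auto()

def transition(mode, at_landing_point, foot_contact):
    """Flight-to-walk transitions per the description: follow a smooth flying
    trajectory to the landing point, match forward velocity to the walking
    speed, then switch to walking when a contact sensor detects touchdown."""
    if mode is Mode.FLYING and at_landing_point:
        return Mode.LANDING      # begin matching velocity to the walking speed
    if mode is Mode.LANDING and foot_contact:
        return Mode.WALKING      # first foot contact triggers the walking phase
    return mode
```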

It's very cool how Leo neatly solves some of the most difficult problems with bipedal robotics, including dynamic balancing and traversing large changes in height. And Leo can also do things that no biped (or human) can do, like actually fly short distances. As a multimodal hybrid of a bipedal robot and a drone, though, it's important to note that Leo's design includes some significant compromises as well. The robot has to be very lightweight in order to fly at all, which limits how effective it can be as a biped without using its thrusters for assistance. And because so much of its balancing requires active input from the thrusters, it's very inefficient relative to both drones and other bipedal robots.

When walking on the ground, LEO (which weighs 2.5kg and is 75cm tall) sucks down 544 watts, of which 445 watts go to the propellers and 99 watts are used by the electronics and legs. When flying, LEO's power consumption almost doubles, but it's obviously much faster—the robot has a cost of transport (a measure of efficiency of self-movement) of 108 when walking at a speed of 20 cm/s, dropping to 15.5 when flying at 3 m/s. Compare this to the cost of transport for an average human, which is well under 1, or a typical quadrupedal robot, which is in the low single digits. The most efficient humanoid we've ever seen, SRI's DURUS, has a cost of transport of about 1, whereas the rumor is that the cost of transport for a robot like Atlas is closer to 20.
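Cost of transport is conventionally defined as power divided by weight times speed, CoT = P / (m·g·v). Plugging in the reported numbers roughly reproduces the figures above; the small discrepancies presumably come from measured rather than nominal values.

```python
G = 9.81  # gravitational acceleration, m/s^2

def cost_of_transport(power_w, mass_kg, speed_m_s):
    """Dimensionless cost of transport: P / (m * g * v)."""
    return power_w / (mass_kg * G * speed_m_s)

# Walking: 544 W at 0.20 m/s for a 2.5 kg robot -> ~111, vs. the reported 108.
cot_walk = cost_of_transport(544, 2.5, 0.20)

# Inverting for flight: a CoT of 15.5 at 3 m/s implies on the order of 1.1 kW,
# consistent with flight power being roughly double the walking figure.
flight_power = 15.5 * 2.5 * G * 3.0
```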

Long term, this low efficiency could be a problem for LEO, since its battery life is good for only about 100 seconds of flight or 3.5 minutes of walking. But, explains Soon-Jo Chung, efficiency hasn't yet been a priority, and there's more that can potentially be done to improve LEO's performance, although always with some compromises:

The extreme balancing ability of LEO comes at the cost of continuously running propellers, which leads to higher energy consumption than leg-based ground robots. However, this stabilization with propellers allowed the use of low-power leg servo motors and lightweight legs with flexibility, which was a design choice to minimize the overall weight of LEO to improve its flying performance.
There are possible ways to improve the energy efficiency by making different design tradeoffs. For instance, LEO could walk with the reduced support from the propellers by adopting finite feet for better stability or higher power [leg] motors with torque control for joint actuation that would allow for fast and accurate enough foot position tracking to stabilize the walking gait. In such a case, propellers may need to turn on only when the legs fail to maintain stability on the ground without having to run continuously. These solutions would cause a weight increase and lead to a higher energy consumption during flight maneuvers, but they would lower energy consumption during walking. In the case of LEO, we aimed to achieve balanced aerial and ground locomotion capabilities, and we opted for lightweight legs. Achieving efficient walking with lightweight legs similar to LEO's is still an open challenge in the field of bipedal robots, and it remains to be investigated in future work.

A rendering of a future version of LEO with fancy yellow skins

At this point in its development, the Caltech researchers have been focusing primarily on LEO's mobility systems, but they hope to get LEO doing useful stuff out in the world, and that almost certainly means giving the robot autonomy and manipulation capabilities. At the moment, LEO isn't particularly autonomous, in the sense that it follows predefined paths and doesn't decide on its own whether it should be using walking or flying to traverse a given obstacle. But the researchers are already working on ways in which LEO can make these decisions autonomously through vision and machine learning.

As for manipulation, Chung tells us that “a new version of LEO could be appended with lightweight manipulators that have similar linkage design to its legs and servo motors to expand the range of tasks it can perform,” with the goal of “enabling a wide range of robotic missions that are hard to accomplish by the sole use of ground or aerial robots.”

Perhaps the most well-suited applications for LEO would be the ones that involve physical interactions with structures at a high altitude, which are usually dangerous for human workers and could use robotic workers. For instance, high voltage line inspection or monitoring of tall bridges could be good applications for LEO, and LEO has an onboard camera that can be used for such purposes. In such applications, conventional biped robots have difficulties with reaching the site, and standard multi-rotor drones have an issue with stabilization in high disturbance environments. LEO uses the ground contact to its advantage and, compared to a standard multi-rotor, is more resistant to external disturbances such as wind. This would improve the safety of the robot operation in an outdoor environment where LEO can maintain contact with a rigid surface.
It's also tempting to look at LEO's ability to more or less just bypass so many of the challenges in bipedal robotics and think about ways in which it could be useful in places where bipedal robots tend to struggle. But it's important to remember that because of the compromises inherent in its multimodal design, LEO will likely be best suited for very specific tasks that can most directly leverage what it's particularly good at. High voltage line and bridge inspection is a good start, and you can easily imagine other inspection tasks that require stability combined with vertical agility. Hopefully, improvements in efficiency and autonomy will make this possible, although I'm still holding out for what Caltech's Chung originally promised: “the ultimate form of demonstration for us will be to build two of these Leonardo robots and then have them play tennis or badminton.”


#439804 How Quantum Computers Can Be Used to ...

Using computer simulations to design new chips played a crucial role in the rapid improvements in processor performance we’ve experienced in recent decades. Now Chinese researchers have extended the approach to the quantum world.

Electronic design automation tools started to become commonplace in the early 1980s as the complexity of processors rose exponentially, and today they are an indispensable tool for chip designers.

More recently, Google has been turbocharging the approach by using artificial intelligence to design the next generation of its AI chips. This holds the promise of setting off a process of recursive self-improvement that could lead to rapid performance gains for AI.

Now, New Scientist has reported on a team from the University of Science and Technology of China in Shanghai that has applied the same ideas to another emerging field of computing: quantum processors. In a paper posted to the arXiv pre-print server, the researchers describe how they used a quantum computer to design a new type of qubit that significantly outperformed their previous design.

“Simulations of high-complexity quantum systems, which are intractable for classical computers, can be efficiently done with quantum computers,” the authors wrote. “Our work opens the way to designing advanced quantum processors using existing quantum computing resources.”

At the heart of the idea is the fact that the complexity of quantum systems grows exponentially as they increase in size. As a result, even the most powerful supercomputers struggle to simulate fairly small quantum systems.
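The blow-up is easy to make concrete: a dense n-qubit state vector has 2^n complex amplitudes, so merely storing it quickly exceeds any classical machine's memory.

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory to store a dense n-qubit state vector (complex128 amplitudes)."""
    return (2 ** n_qubits) * bytes_per_amplitude

# 30 qubits fit in a workstation's RAM: 2**30 amplitudes * 16 bytes = 16 GiB.
# 53 qubits (the size of Google's Sycamore chip) need 2**57 bytes,
# i.e. 128 pebibytes -- far beyond any single machine.
laptop_ok = statevector_bytes(30)
sycamore = statevector_bytes(53)
```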

This was the basis for Google’s groundbreaking display of “quantum supremacy” in 2019. The company’s researchers used a 53-qubit processor to run a random quantum circuit a million times and showed that it would take roughly 10,000 years to simulate the experiment on the world’s fastest supercomputer.

This means that using classical computers to help in the design of new quantum computers is likely to hit fundamental limits pretty quickly. Using a quantum computer, however, sidesteps the problem because it can exploit the same oddities of the quantum world that make the problem complex in the first place.

This is exactly what the Chinese researchers did. They used an algorithm called a variational quantum eigensolver to simulate the kind of superconducting electronic circuit found at the heart of a quantum computer. This was used to explore what happens when certain energy levels in the circuit are altered.
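The idea behind a variational quantum eigensolver can be shown with a classical toy: prepare a parameterized trial state, measure the energy of a Hamiltonian, and adjust the parameter to minimize it. The single-qubit Hamiltonian, ansatz, and grid-search "optimizer" below are stand-ins chosen for brevity, nothing like the superconducting-circuit model the researchers actually simulated.

```python
import numpy as np

Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = Z + X  # toy Hamiltonian; its exact ground-state energy is -sqrt(2)

def ansatz(theta):
    """One-parameter trial state: a Y-rotation applied to |0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """Expectation value <psi(theta)| H |psi(theta)>."""
    psi = ansatz(theta)
    return psi @ H @ psi

# Crude classical outer loop: scan the parameter, keep the lowest energy.
thetas = np.linspace(0, 2 * np.pi, 2001)
best = min(thetas, key=energy)  # energy(best) is ~ -sqrt(2)
```

On a real device the energy is estimated from repeated measurements rather than computed exactly, and the optimizer runs on a classical co-processor; that hybrid loop is what makes VQE practical on today's noisy hardware.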

Normally this kind of experiment would require them to build large numbers of physical prototypes and test them, but instead the team was able to rapidly model the impact of the changes. The upshot was that the researchers discovered a new type of qubit that was more powerful than the one they were already using.

Any two-level quantum system can act as a qubit, but most superconducting quantum computers use transmons, which encode quantum states into the oscillations of electrons. By tweaking the energy levels of their simulated quantum circuit, the researchers were able to discover a new qubit design they dubbed a plasonium.

It is less than half the size of a transmon, and when the researchers fabricated it they found that it holds its quantum state for longer and is less prone to errors. It still works on similar principles to the transmon, so it’s possible to manipulate it using the same control technologies.

The researchers point out that this is only a first prototype, so with further optimization and the integration of recent progress in new superconducting materials and surface treatment methods they expect performance to increase even more.

But the new qubit the researchers have designed is probably not their most significant contribution. By demonstrating that even today’s rudimentary quantum computers can help design future devices, they’ve opened the door to a virtuous cycle that could significantly speed innovation in this field.

Image Credit: Pete Linforth from Pixabay
