
#440044 Robotics and artificial intelligence to ...

A Universidad Carlos III de Madrid (UC3M) spin-off, Inrobics Social Robotics, S.L.L., has developed a robotic device that provides an innovative motor and cognitive rehabilitation service that can be used at health centers as well as at home. Inrobics was created using research results from the University's Department of Computer Science and Engineering.

Posted in Human Robots

#439913 A system to control robotic arms based ...

For people with motor impairments or physical disabilities, completing daily tasks and house chores can be incredibly challenging. Recent advancements in robotics, such as brain-controlled robotic limbs, have the potential to significantly improve their quality of life.

Posted in Human Robots

#439857 Video Friday: ANYmals and Animals

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ROSCon 2021 – October 20-21, 2021 – [Online Event]
Silicon Valley Robot Block Party – October 23, 2021 – Oakland, CA, USA
SSRR 2021 – October 25-27, 2021 – New York, NY, USA
Let us know if you have suggestions for next week, and enjoy today's videos.

This project investigates the interaction between robots and animals, in particular, the quadruped ANYmal and wild vervet monkeys. We will test whether robots can be tolerated but also socially accepted in a group of vervets. We will evaluate whether social bonds are created between them and whether vervets trust knowledge from robots.

[ RSL ]

At this year's ACM Symposium on User Interface Software and Technology (UIST), the Student Innovation Contest was based around Sony Toio robots. Here are some of the things that teams came up with:

[ UIST ]

Collecting samples from Mars and bringing them back to Earth will be a historic undertaking that started with the launch of NASA's Perseverance rover on July 30, 2020. Perseverance collected its first rock core samples in September 2021. The rover will leave them on Mars for a future mission to retrieve and return to Earth. NASA and the European Space Agency (ESA) are solidifying concepts for this proposed Mars Sample Return campaign. The current concept includes a lander, a fetch rover, an ascent vehicle to launch the sample container to Martian orbit, and a retrieval spacecraft with a payload for capturing and containing the samples and then sending them back to Earth to land in an unpopulated area.

[ JPL ]

FCSTAR is a minimally actuated flying climbing robot capable of crawling vertically. It is the latest in the family of STAR robots, designed and built at the Bio-Inspired and Medical Robotics Lab at Ben-Gurion University of the Negev by Nitzan Ben David and David Zarrouk.

[ BGU ]

Evidently the novelty of Spot has not quite worn off yet.

[ IRL ]

As much as I like Covariant, it seems weird to call a robot like this “Waldo” when the word waldo already has a specific meaning in robotics, thanks to the short story by Robert A. Heinlein.

Also, kinda looks like it failed that very first pick in the video…?

[ Covariant ]

Thanks, Alice!

Here is how I will be assembling the Digit that I'm sure Agility Robotics will be sending me any day now.

[ Agility Robotics ]

Robotis would like to remind you that ROS World is next week, and also that they make a lot of ROS-friendly robots!

[ ROS World ] via [ Robotis ]

Researchers at the University of Technology Sydney (UTS) School of Architecture in Australia have partnered with construction design firm BVN Architecture to develop a unique 3D-printed air-diffusion system.

[ UTS ]

Team MARBLE, which took third at the DARPA SubT Challenge, has put together this video combining DARPA's footage with the team's own to tell the whole story, with some behind-the-scenes material thrown in.

[ MARBLE ]

You probably don't need to watch all 10 minutes of the first public flight of Volocopter's cargo drone, but it's fun to see the propellers spin up for the takeoff.

[ Volocopter ]

Nothing new in this video about Boston Dynamics from CNBC, but it's always fun to get a little wander around their headquarters.

[ CNBC ]

The number of transistors on a chip doubles roughly every two years, an observation known as Moore's Law. Prof. Maarten Steinbuch, a high-tech systems scientist, entrepreneur, and communicator from Eindhoven University of Technology, discussed how this exponential rate of change enables accelerating developments in sensor technology, AI computing, and automotive machines, and how the products of modern factories will soon be smart and self-learning.

[ ESA ]

On episode three of The Robot Brains Podcast, we have deep learning pioneer Yann LeCun. Yann is a winner of the Turing Award (often called the Nobel Prize of Computer Science) who in 2013 was handpicked by Mark Zuckerberg to bring AI to Facebook. He also offers his predictions for the future of artificial general intelligence, talks about his life straddling the worlds of academia and business, and explains why he likes to picture AI as a chocolate layer cake with a cherry on top.

[ Robot Brains ]

This week's CMU RI seminar is from Tom Howard at the University of Rochester, on “Enabling Grounded Language Communication for Human-Robot Teaming.”

[ CMU RI ]

A pair of talks from the Maryland Robotics Center, featuring Maggie Wigness from ARL and Dieter Fox from UW and NVIDIA.

[ Maryland Robotics ]

Posted in Human Robots

#439847 Tiny hand-shaped gripper can grasp and ...

A team of researchers affiliated with a host of institutions in the Republic of Korea has developed a tiny, soft robotic hand that can grasp small objects and measure their temperature. They have published their results in the journal Science Robotics.

Posted in Human Robots

#439820 How Musicologists and Scientists Used AI ...

When Ludwig van Beethoven died in 1827, he was three years removed from the completion of his Ninth Symphony, a work heralded by many as his magnum opus. He had started work on his 10th Symphony but, due to deteriorating health, wasn’t able to make much headway: All he left behind were some musical sketches.

Ever since then, Beethoven fans and musicologists have puzzled and lamented over what could have been. His notes teased at some magnificent reward, albeit one that seemed forever out of reach.

Now, thanks to the work of a team of music historians, musicologists, composers and computer scientists, Beethoven’s vision will come to life.

I presided over the artificial intelligence side of the project, leading a group of scientists at the creative AI startup Playform AI that taught a machine both Beethoven’s entire body of work and his creative process.

A full recording of Beethoven’s 10th Symphony is set to be released on Oct. 9, 2021, the same day as the world premiere performance scheduled to take place in Bonn, Germany—the culmination of a two-year-plus effort.

Past Attempts Hit a Wall
Around 1817, the Royal Philharmonic Society in London commissioned Beethoven to write his ninth and 10th symphonies. Written for an orchestra, symphonies often contain four movements: the first is performed at a fast tempo, the second at a slower one, the third at a medium or fast tempo, and the last at a fast tempo.

Beethoven completed his Ninth Symphony, which concludes with the timeless “Ode to Joy,” in 1824.

But when it came to the 10th Symphony, Beethoven didn’t leave much behind, other than some musical notes and a handful of ideas he had jotted down.

A page of Beethoven’s notes for his planned 10th Symphony. Image Credit: Beethoven House Museum, CC BY-SA

There have been some past attempts to reconstruct parts of Beethoven’s 10th Symphony. Most famously, in 1988, musicologist Barry Cooper ventured to complete the first and second movements. He wove together 250 bars of music from the sketches to create what was, in his view, a rendition of the first movement that was faithful to Beethoven’s vision.

Yet the sparseness of Beethoven’s sketches made it impossible for symphony experts to go beyond that first movement.

Assembling the Team
In early 2019, Dr. Matthias Röder, the director of the Karajan Institute, an organization in Salzburg, Austria, that promotes music technology, contacted me. He explained that he was putting together a team to complete Beethoven’s 10th Symphony in celebration of the composer’s 250th birthday. Aware of my work on AI-generated art, he wanted to know if AI would be able to help fill in the blanks left by Beethoven.

The challenge seemed daunting. To pull it off, AI would need to do something it had never done before. But I said I would give it a shot.

Röder then compiled a team that included Austrian composer Walter Werzowa. Famous for writing Intel’s signature bong jingle, Werzowa was tasked with putting together a new kind of composition that would integrate what Beethoven left behind with what the AI would generate. Mark Gotham, a computational music expert, led the effort to transcribe Beethoven’s sketches and process his entire body of work so the AI could be properly trained.

The team also included Robert Levin, a musicologist at Harvard University who also happens to be an incredible pianist. Levin had previously finished a number of incomplete 18th-century works by Mozart and Johann Sebastian Bach.

The Project Takes Shape
In June 2019, the group gathered for a two-day workshop at Harvard’s music library. In a large room with a piano, a blackboard and a stack of Beethoven’s sketchbooks spanning most of his known works, we talked about how fragments could be turned into a complete piece of music and how AI could help solve this puzzle, while still remaining faithful to Beethoven’s process and vision.

The music experts in the room were eager to learn more about the sort of music AI had created in the past. I told them how AI had successfully generated music in the style of Bach. However, this was only a harmonization of an inputted melody that sounded like Bach. It didn’t come close to what we needed to do: construct an entire symphony from a handful of phrases.

Meanwhile, the scientists in the room—myself included—wanted to learn about what sort of materials were available, and how the experts envisioned using them to complete the symphony.

The task at hand eventually crystallized. We would need to use notes and completed compositions from Beethoven’s entire body of work—along with the available sketches from the 10th Symphony—to create something that Beethoven himself might have written.

This was a tremendous challenge. We didn’t have a machine that we could feed sketches to, push a button and have it spit out a symphony. Most AI available at the time couldn’t continue an uncompleted piece of music beyond a few additional seconds.

We would need to push the boundaries of what creative AI could do by teaching the machine Beethoven’s creative process—how he would take a few bars of music and painstakingly develop them into stirring symphonies, quartets, and sonatas.

Piecing Together Beethoven’s Creative Process
As the project progressed, the human side and the machine side of the collaboration evolved. Werzowa, Gotham, Levin, and Röder deciphered and transcribed the sketches from the 10th Symphony, trying to understand Beethoven’s intentions. Using his completed symphonies as a template, they attempted to piece together the puzzle of where the fragments of sketches should go—which movement, which part of the movement.

They had to make decisions, like determining whether a sketch indicated the starting point of a scherzo, which is a very lively part of the symphony, typically in the third movement. Or they might determine that a line of music was likely the basis of a fugue, which is a melody created by interweaving parts that all echo a central theme.

The AI side of the project—my side—found itself grappling with a range of challenging tasks.

First, and most fundamentally, we needed to figure out how to take a short phrase, or even just a motif, and use it to develop a longer, more complicated musical structure, just as Beethoven would have done. For example, the machine had to learn how Beethoven constructed the Fifth Symphony out of a basic four-note motif.
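
Framed in software terms, that motif-development task is sequence continuation: learn from a corpus which notes tend to follow a given context, then extend a seed motif one step at a time. Below is a minimal Python sketch of the idea using a first-order Markov chain; the toy corpus, the MIDI pitch numbers, and the sample_continuation helper are illustrative assumptions, not the far more sophisticated model the project actually used.

    import random
    from collections import defaultdict

    def train_transitions(corpus):
        """Count which pitch follows which across a corpus of melodies."""
        transitions = defaultdict(list)
        for melody in corpus:
            for prev, nxt in zip(melody, melody[1:]):
                transitions[prev].append(nxt)
        return transitions

    def sample_continuation(motif, transitions, length, seed=0):
        """Extend a seed motif note by note, sampling from the learned transitions."""
        rng = random.Random(seed)
        notes = list(motif)
        for _ in range(length):
            options = transitions.get(notes[-1]) or [notes[-1]]  # unseen context: repeat
            notes.append(rng.choice(options))
        return notes

    # Toy corpus as MIDI pitch numbers (illustrative only).
    corpus = [
        [67, 67, 67, 63, 65, 65, 65, 62],  # loosely echoes the Fifth's opening contour
        [60, 62, 64, 65, 67, 65, 64, 62],
    ]
    motif = [67, 67, 67, 63]  # a four-note seed
    print(sample_continuation(motif, train_transitions(corpus), length=8))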

Next, because the continuation of a phrase also needs to follow a certain musical form, whether it’s a scherzo, trio, or fugue, the AI needed to learn Beethoven’s process for developing these forms.

The to-do list grew: We had to teach the AI how to take a melodic line and harmonize it. The AI needed to learn how to bridge two sections of music together. And we realized the AI had to be able to compose a coda, which is a segment that brings a section of a piece of music to its conclusion.
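
As a rough picture of the simplest of those subtasks, harmonization can be thought of as attaching a chord to each note of a melody. The Python sketch below pairs every melody note with a triad an octave below it; the note names, the fixed major/minor choice, and the harmonize helper are assumptions for illustration only, not the project's actual harmonization model.

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def triad_on(root_midi, quality="major"):
        """Build a root-position triad (root, third, fifth) above a MIDI root."""
        third = 4 if quality == "major" else 3
        return [root_midi, root_midi + third, root_midi + 7]

    def harmonize(melody, quality="major"):
        """Attach a block chord, an octave below, to every note of a melody."""
        return [(pitch, triad_on(pitch - 12, quality)) for pitch in melody]

    melody = [60, 64, 67, 72]  # C, E, G, C as MIDI pitches (illustrative only)
    for pitch, chord in harmonize(melody):
        chord_names = "-".join(NOTE_NAMES[p % 12] for p in chord)
        print(NOTE_NAMES[pitch % 12], "over", chord_names)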

Finally, once we had a full composition, the AI was going to have to figure out how to orchestrate it, which involves assigning different instruments to different parts.

And it had to pull off these tasks the way Beethoven himself might have.
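
In software terms, that orchestration step amounts to distributing the lines of a score across instruments that can actually play them. The toy Python sketch below assigns each part to the narrowest-ranged instrument that still covers the part's pitches; the instrument list, the approximate ranges, and the assign_instruments helper are illustrative assumptions, not the method the project used.

    # Approximate playable ranges as (lowest, highest) MIDI pitches; assumptions only.
    INSTRUMENT_RANGES = {
        "contrabass": (28, 67),
        "cello": (36, 76),
        "viola": (48, 84),
        "violin": (55, 103),
        "flute": (60, 96),
    }

    def assign_instruments(parts):
        """Give each part the narrowest-ranged instrument that covers all its pitches."""
        assignments = {}
        for name, pitches in parts.items():
            lo, hi = min(pitches), max(pitches)
            candidates = [inst for inst, (rlo, rhi) in INSTRUMENT_RANGES.items()
                          if rlo <= lo and hi <= rhi]
            candidates.sort(key=lambda inst: INSTRUMENT_RANGES[inst][1] - INSTRUMENT_RANGES[inst][0])
            assignments[name] = candidates[0] if candidates else "unassigned"
        return assignments

    parts = {  # illustrative lines of a score, as MIDI pitches
        "bass line": [36, 43, 40, 38],
        "inner voice": [55, 59, 62, 60],
        "melody": [72, 76, 79, 84],
    }
    print(assign_instruments(parts))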

Passing the First Big Test
In November 2019, the team met in person again—this time, in Bonn, at the Beethoven House Museum, where the composer was born and raised.

This meeting was the litmus test for determining whether AI could complete this project. We printed musical scores that had been developed by AI and built off the sketches from Beethoven’s 10th. A pianist performed in a small concert hall in the museum before a group of journalists, music scholars, and Beethoven experts.

Journalists and musicians gather to hear a pianist perform parts of Beethoven’s 10th Symphony. Image Credit: Ahmed Elgammal, CC BY-SA

We challenged the audience to determine where Beethoven’s phrases ended and where the AI extrapolation began. They couldn’t.

A few days later, one of these AI-generated scores was played by a string quartet in a news conference. Only those who intimately knew Beethoven’s sketches for the 10th Symphony could determine when the AI-generated parts came in.

The success of these tests told us we were on the right track. But these were just a couple of minutes of music. There was still much more work to do.

Ready for the World
At every point, Beethoven’s genius loomed, challenging us to do better. As the project evolved, the AI did as well. Over the ensuing 18 months, we constructed and orchestrated two entire movements of more than 20 minutes apiece.

We anticipate some pushback to this work—those who will say that the arts should be off-limits from AI, and that AI has no business trying to replicate the human creative process. Yet when it comes to the arts, I see AI not as a replacement, but as a tool—one that opens doors for artists to express themselves in new ways.

This project would not have been possible without the expertise of human historians and musicians. It took an immense amount of work—and, yes, creative thinking—to accomplish this goal.

At one point, one of the music experts on the team said that the AI reminded him of an eager music student who practices every day, learns, and becomes better and better.

Now that student, having taken the baton from Beethoven, is ready to present the 10th Symphony to the world.

A selection from Beethoven’s 10th Symphony. Audio Credit: YouTube/Modern Recordings, CC BY-SA

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Circe Denyer

Posted in Human Robots