Tag Archives: work
#439073 There’s a ‘New’ Nirvana Song Out, ...
One of the primary capabilities separating human intelligence from artificial intelligence is our ability to be creative—to use nothing but the world around us, our experiences, and our brains to create art. At present, AI needs to be extensively trained on human-made works of art in order to produce new work, so we’ve still got a leg up. That said, neural networks like OpenAI’s GPT-3 and Nikolay Ironov, the AI designer built by the Russian design studio Art. Lebedev, have been able to create content indistinguishable from human-made work.
Now there’s another example of AI artistry that’s hard to tell apart from the real thing, and it’s sure to excite 90s alternative rock fans the world over: a brand-new, never-heard-before Nirvana song. Or, more accurately, a song written by a neural network that was trained on Nirvana’s music.
The song is called “Drowned in the Sun,” and it does have a pretty Nirvana-esque ring to it. The neural network that wrote it is Magenta, which was launched by Google in 2016 with the goal of training machines to create art—or as the tool’s website puts it, exploring the role of machine learning as a tool in the creative process. Magenta was built using TensorFlow, Google’s massive open-source software library focused on deep learning applications.
The song was written as part of an album called Lost Tapes of the 27 Club, a project carried out by a Toronto-based organization called Over the Bridge focused on mental health in the music industry.
Here’s how a computer was able to write a song in the unique style of a deceased musician. Twenty to thirty Nirvana tracks were fed into Magenta’s neural network in the form of MIDI files. MIDI stands for Musical Instrument Digital Interface, and the format encodes a song’s details as data representing musical parameters like pitch and tempo. Components of each song, like the vocal melody or rhythm guitar, were fed in one at a time.
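As a rough illustration of what that kind of preprocessing looks like, here is a minimal sketch that reads a MIDI file and prints per-track pitch and timing information. The file name and the choice of the open-source pretty_midi library are assumptions for illustration only; the Lost Tapes team hasn’t published its exact pipeline.

```python
# Minimal sketch: inspect the note data inside a MIDI file.
# "nirvana_track.mid" is a placeholder file name; the actual files and
# preprocessing used for the Lost Tapes project are not public.
import pretty_midi

midi = pretty_midi.PrettyMIDI("nirvana_track.mid")

# Each instrument (e.g., vocal melody, rhythm guitar) is its own track,
# which is what allows song components to be fed to a model one at a time.
for instrument in midi.instruments:
    print(instrument.name or "unnamed track", "-", len(instrument.notes), "notes")
    for note in instrument.notes[:5]:
        print(f"  pitch={note.pitch}  start={note.start:.2f}s  end={note.end:.2f}s")

# Tempo information is stored separately in the file.
times, tempi = midi.get_tempo_changes()
print("tempo changes:", list(zip(times, tempi)))
```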
The neural network found patterns in these different components, and got enough of a handle on them that when given a few notes to start from, it could use those patterns to predict what would come next; in this case, chords and melodies that sound like they could’ve been written by Kurt Cobain.
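Magenta’s models are neural sequence models trained on far more data, but the idea of “learn the patterns, then continue from a seed” can be shown with a deliberately simplified stand-in. The sketch below uses a first-order Markov chain over made-up MIDI pitch numbers; it is not Magenta’s method, only a toy version of next-note prediction.

```python
# Toy stand-in for "learn patterns, then predict what comes next."
# A first-order Markov chain over pitches, not Magenta's neural model.
import random
from collections import defaultdict

training_melody = [64, 67, 69, 67, 64, 62, 64, 67, 69, 71, 69, 67, 64]  # made-up pitches

# Count which pitch tends to follow which.
transitions = defaultdict(list)
for prev, nxt in zip(training_melody, training_melody[1:]):
    transitions[prev].append(nxt)

def continue_melody(seed, length=8):
    """Given a few starting notes, sample a continuation from learned transitions."""
    melody = list(seed)
    for _ in range(length):
        options = transitions.get(melody[-1]) or [melody[-1]]  # fall back on repetition
        melody.append(random.choice(options))
    return melody

print(continue_melody([64, 67], length=8))
```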
To be clear, Magenta didn’t spit out a ready-to-go song complete with lyrics. The AI wrote the music, but a different neural network wrote the lyrics (using essentially the same process as Magenta), and the team then sifted through “pages and pages” of output to find lyrics that fit the melodies Magenta created.
Eric Hogan, a singer for a Nirvana tribute band whom the Over the Bridge team hired to sing “Drowned in the Sun,” felt that the lyrics were spot-on. “The song is saying, ‘I’m a weirdo, but I like it,’” he said. “That is total Kurt Cobain right there. The sentiment is exactly what he would have said.”
Cobain isn’t the only musician the Lost Tapes project tried to emulate; songs in the styles of Jimi Hendrix, Jim Morrison, and Amy Winehouse were also included. What all these artists have in common is that they died at the age of 27, the grim coincidence that gives the “27 Club” its name.
The project is meant to raise awareness around mental health, particularly among music industry professionals. It’s not hard to think of great artists of all persuasions—musicians, painters, writers, actors—whose lives were cut short by severe depression and other mental health issues for which it can be hard to get help. These issues are sometimes romanticized, as suffering does tend to create art that’s meaningful, relatable, and timeless. But according to the Lost Tapes website, the rate of suicide attempts among music industry workers is more than double that of the general population.
How many more hit songs would these artists have written if they were still alive? We’ll never know, but hopefully Lost Tapes of the 27 Club and projects like it will raise awareness of mental health issues, both in the music industry and in general, and help people in need find the right resources. Because no matter how good computers eventually get at creating music, writing, or other art, as Lost Tapes’ website pointedly says, “Even AI will never replace the real thing.”
Image Credit: Edward Xu on Unsplash
#439062 Xenobots 2.0: These Living Robots ...
The line between animals and machines was already getting blurry after a team of scientists and roboticists unveiled the first living robots last year. Now the same team has released version 2.0 of their so-called xenobots, and they’re faster, stronger, and more capable than ever.
In January 2020, researchers from Tufts University and the University of Vermont laid out a method for building tiny biological machines out of the eggs of the African clawed frog Xenopus laevis. Dubbed xenobots after their animal forebear, they could move independently, push objects, and even team up to create swarms.
Remarkably, building them involved no genetic engineering. Instead, the team used an evolutionary algorithm running on a supercomputer to test out thousands of potential designs made up of different configurations of cells.
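The actual search scored each candidate design in a physics simulator running on a supercomputer; the sketch below shows only the bare shape of such an evolutionary loop, with a made-up fitness function standing in for the simulation and a one-dimensional “design” standing in for a 3D arrangement of cells.

```python
# Bare-bones evolutionary search over simplified xenobot "designs".
# The real pipeline evaluates designs in a physics simulator; the fitness
# function here is an arbitrary placeholder for that simulation step.
import random

GRID = 16                 # a design is GRID cells: 0 = skin, 1 = heart muscle
POP, GENERATIONS = 50, 30

def random_design():
    return [random.randint(0, 1) for _ in range(GRID)]

def fitness(design):
    # Placeholder: reward muscle concentrated at one end, a crude proxy
    # for "this configuration propels itself."
    return sum(design[:GRID // 2]) - sum(design[GRID // 2:])

def mutate(design, rate=0.1):
    return [1 - cell if random.random() < rate else cell for cell in design]

population = [random_design() for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 5]                    # keep the top 20%
    children = [mutate(random.choice(parents)) for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best design:", best, "fitness:", fitness(best))
```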
Once they’d found some promising candidates that could solve the tasks they were interested in, they used microsurgical tools to build real-world versions out of living cells. The most promising design was built by combining heart muscle cells (which could contract to propel the xenobots) and skin cells (which provided rigid support).
Impressive as that might sound, having to build each individual xenobot by hand is obviously tedious. But now the team has devised a new approach that works from the bottom up by getting the xenobots to self-assemble their bodies from single cells. Not only is the approach more scalable, the new xenobots are faster, live longer, and even have a rudimentary memory.
In a paper in Science Robotics, the researchers describe how they took stem cells from frog embryos and allowed them to grow into clumps of several thousand cells called spheroids. After a few days, the stem cells had turned into skin cells covered in small hair-like projections called cilia, which wriggle back and forth.
Normally, these structures are used to spread mucus around on the frog’s skin. But when divorced from their normal context, they took on a function more like the one seen in microorganisms, which use their cilia like tiny paddles to move about.
“We are witnessing the remarkable plasticity of cellular collectives, which build a rudimentary new ‘body’ that is quite distinct from their default—in this case, a frog—despite having a completely normal genome,” corresponding author Michael Levin from Tufts University said in a press release.
“We see that cells can re-purpose their genetically encoded hardware, like cilia, for new functions such as locomotion. It is amazing that cells can spontaneously take on new roles and create new body plans and behaviors without long periods of evolutionary selection for those features,” he said.
Not only were the new xenobots faster and longer-lived, they were also much better at tasks like working together as a swarm to gather piles of iron oxide particles. And while the form and function of the xenobots were achieved without any genetic engineering, in an extra experiment the team injected them with RNA that caused them to produce a fluorescent protein that changes color when exposed to a particular color of light.
This allowed the xenobots to record whether they had come into contact with a specific light source while traveling about. The researchers say this is a proof of principle that the xenobots can be imbued with a molecular memory, and future work could allow them to record multiple stimuli and potentially even react to them.
What exactly these xenobots could eventually be used for is still speculative, but they have features that make them a promising alternative to robots built from non-organic materials. For a start, robots made of stem cells are completely biodegradable and carry their own power source in the form of “yolk platelets” found in all amphibian embryos. They are also able to self-heal in as little as five minutes if cut, and can take advantage of cells’ ability to process all kinds of chemicals.
That suggests they could have applications in everything from therapeutics to environmental engineering. But the researchers also hope to use them to better understand the processes that allow individual cells to combine and work together to create a larger organism, and how these processes might be harnessed and guided for regenerative medicine.
As these animal-machine hybrids advance, they are sure to raise ethical concerns and question marks over the potential risks. But it looks like the future of robotics could be a lot more wet and squishy than we imagined.
Image Credit: Doug Blackiston/Tufts University
#439053 Bipedal Robots Are Learning To Move With ...
Most humans are bipeds, but even the best of us are really only bipeds until things get tricky. While our legs may be our primary mobility system, there are lots of situations in which we leverage our arms as well, either passively to keep balance or actively when we put out a hand to steady ourselves on a nearby object. And despite how unstable bipedal robots tend to be, using anything besides legs for mobility has been a challenge in both software and hardware, a significant limitation in highly unstructured environments.
Roboticists from TUM in Germany (with support from the German Research Foundation) have recently given their humanoid robot LOLA some major upgrades to make this kind of multi-contact locomotion possible. While it’s still in the early stages, it’s already some of the most human-like bipedal locomotion we’ve seen.
It’s certainly possible for bipedal robots to walk over challenging terrain without using limbs for support, but I’m sure you can think of lots of times where using your arms to assist with your own bipedal mobility was a requirement. That’s not necessarily because your leg strength, coordination, or sense of balance is lacking. It’s just that sometimes you might find yourself walking across something that’s highly unstable, or in a situation where the consequences of a stumble are exceptionally high. And it may not even matter how much sensing you do or how careful you are with your footstep planning: there are limits to how much you can know about your environment beforehand, and that can result in having a really bad time of it. This is why using multi-contact locomotion, whether it’s planned in advance or not, is a useful skill for humans, and should be for robots, too.
As the video notes (and props for being explicit up front about it), this isn’t yet fully autonomous behavior, with foot positions and arm contact points set by hand in advance. But it’s not much of a stretch to see how everything could be done autonomously, since one of the really hard parts (using multiple contact points to dynamically balance a moving robot) is being done onboard and in real time.
Getting LOLA to be able to do this required a major overhaul in hardware as well as software. And Philipp Seiwald, who works with LOLA at TUM, was able to tell us more about it.
IEEE Spectrum: Can you summarize the changes to LOLA’s hardware that are required for multi-contact locomotion?
Philipp Seiwald: The original version of LOLA was designed for fast biped walking. Although it had two arms, they were not meant to get into contact with the environment but rather to compensate for the dynamic effects of the feet during fast walking. Also, the torso had a relatively simple design that was fine for its original purpose; however, it was not conceived to withstand the high loads coming from the hands during multi-contact maneuvers. Thus, we redesigned the complete upper body of LOLA from scratch. Starting from the pelvis, the strength and stiffness of the torso have been increased. We used the finite element method to optimize critical parts to obtain maximum strength at minimum weight. Moreover, we added additional degrees of freedom to the arms to increase the hands' reachable workspace. The kinematic topology of the arms, i.e., the arrangement of joints and link lengths, was obtained from an optimization that takes typical multi-contact scenarios into account.
Why is this an important problem for bipedal humanoid robots?
Maintaining balance during locomotion can be considered the primary goal of legged robots. Naturally, this task is more challenging for bipeds when compared to robots with four or even more legs. Although current high-end prototypes show impressive progress, humanoid robots still do not have the robustness and versatility they need for most real-world applications. With our research, we try to contribute to this field and help to push the limits further. Recently, we showed our latest work on walking over uneven terrain without multi-contact support. Although the robustness is already high, there still exist scenarios, such as walking on loose objects, where the robot's stabilization fails when using only foot contacts. The use of additional hand-environment support during this (comparatively) fast walking allows a further significant increase in robustness, i.e., the robot's capability to compensate disturbances, modeling errors, or inaccurate sensor input. Besides stabilization on uneven terrain, multi-contact locomotion also enables more complex motions, e.g., stepping over a tall obstacle or toe-only contacts, as shown in our latest multi-contact video.
How can LOLA decide whether a surface is suitable for multi-contact locomotion?
LOLA’s visual perception system is currently developed by our project partners from the Chair for Computer Aided Medical Procedures & Augmented Reality at the TUM. This system relies on a novel semantic Simultaneous Localization and Mapping (SLAM) pipeline that can robustly extract the scene's semantic components (like floor, walls, and objects therein) by merging multiple observations from different viewpoints and by inferring therefrom the underlying scene graph. This provides a reliable estimate of which scene parts can be used to support the locomotion, based on the assumption that certain structural elements such as walls are fixed, while chairs, for example, are not.
Also, the team plans to develop a specific dataset with annotations further describing the attributes of objects (such as surface roughness or softness), which will be used to master multi-contact locomotion in even more complex scenes. As of today, the vision and navigation system is not finished yet; thus, in our latest video, we used pre-defined footholds and contact points for the hands. However, within our collaboration, we are working towards a fully integrated and autonomous system.
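To make the idea in the answer above concrete, here is a purely illustrative sketch of the kind of rule described: keep only scene elements assumed to be fixed (walls, the floor) as candidate hand supports, and discard movable objects like chairs. The class names, fields, and threshold are hypothetical; this is not TUM’s code, and the real system infers its scene graph from a semantic SLAM pipeline rather than a hand-written list.

```python
# Illustrative only: filter a hand-written "scene graph" down to elements
# assumed fixed enough to lean on. Not TUM's code; the real scene graph
# comes from a semantic SLAM pipeline, not a hard-coded list.
FIXED_CLASSES = {"wall", "floor", "railing"}   # assumed structural, immovable
                                               # (chairs, boxes, etc. are excluded)
scene_graph = [
    {"id": 1, "class": "wall",  "height_m": 2.5},
    {"id": 2, "class": "chair", "height_m": 0.9},
    {"id": 3, "class": "floor", "height_m": 0.0},
]

def candidate_hand_supports(nodes, min_height=0.5):
    """Return scene elements treated as safe contact points for the hands."""
    return [n for n in nodes
            if n["class"] in FIXED_CLASSES and n["height_m"] >= min_height]

print(candidate_hand_supports(scene_graph))    # only the wall qualifies
```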
Is LOLA capable of both proactive and reactive multi-contact locomotion?
The software framework of LOLA has a hierarchical structure. On the highest level, the vision system generates an environment model and estimates the 6D-pose of the robot in the scene. The walking pattern generator then uses this information to plan a dynamically feasible future motion that will lead LOLA to a target position defined by the user. On a lower level, the stabilization module modifies this plan to compensate for model errors or any kind of disturbance and keep overall balance. So our approach currently focuses on proactive multi-contact locomotion. However, we also plan to work on a more reactive behavior such that additional hand support can also be triggered by an unexpected disturbance instead of being planned in advance.
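The answer describes a three-layer hierarchy: a vision layer that builds an environment model and estimates the robot’s pose, a walking pattern generator that plans a feasible motion toward a user-defined target, and a stabilization module that corrects that plan online. The sketch below mirrors only that control-flow shape; every function body is a stub, and none of it is LOLA’s actual software.

```python
# Shape of the hierarchy described above. Every function is a stub;
# this is not LOLA's software, only the layering being described.
def perceive():
    """Vision layer: environment model plus the robot's 6D pose (stubbed)."""
    return {"scene": "uneven terrain", "pose": (0.0, 0.0, 0.0, 0.0, 0.0, 0.0)}

def plan_walking_pattern(world, target):
    """Pattern generator: a dynamically feasible plan toward the target (stubbed)."""
    return {"footsteps": [target], "hand_contacts": []}

def stabilize(plan, sensors):
    """Stabilization module: modify the plan to compensate for disturbances (stubbed)."""
    plan["correction"] = -0.1 * sensors.get("tilt", 0.0)
    return plan

def control_step(target, sensors):
    world = perceive()
    plan = plan_walking_pattern(world, target)
    return stabilize(plan, sensors)

print(control_step(target=(1.0, 0.0), sensors={"tilt": 0.05}))
```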
What are some examples of unique capabilities that you are working towards with LOLA?
One of the main goals for the research with LOLA remains fast, autonomous, and robust locomotion on complex, uneven terrain. We aim to reach a walking speed similar to humans. Currently, LOLA can do multi-contact locomotion and cross uneven terrain at a speed of 1.8 km/h, which is comparatively fast for a bipedal robot but still slow for a human. On flat ground, LOLA's high-end hardware allows it to walk at a relatively high maximum speed of 3.38 km/h.
Fully autonomous multi-contact locomotion for a life-sized humanoid robot is a tough task. As algorithms get more complex, computation time increases, which often results in offline motion planning methods. For LOLA, we restrict ourselves to gaited multi-contact locomotion, which means that we try to preserve the core characteristics of bipedal gait and use the arms only for assistance. This allows us to use simplified models of the robot which lead to very efficient algorithms running in real-time and fully onboard.
A long-term scientific goal with LOLA is to understand essential components and control policies of human walking. LOLA's leg kinematics are relatively similar to those of the human body. Together with scientists from kinesiology, we try to identify similarities and differences between observed human walking and LOLA’s “engineered” walking gait. We hope this research leads, on the one hand, to new ideas for the control of bipeds and, on the other hand, shows via experiments on bipeds whether biomechanical models of human gait are correctly understood. For a comparison of control policies on uneven terrain, LOLA must be able to walk at comparable speeds, which also motivates our research on fast and robust walking.
While it makes sense why the researchers are using LOLA’s arms primarily to assist with a conventional biped gait, looking ahead a bit it’s interesting to think about how robots that we typically consider to be bipeds could potentially leverage their limbs for mobility in decidedly non-human ways.
We’re used to legged robots being one particular morphology, I guess because associating them with either humans or dogs or whatever is just a comfortable way to do it, but there’s no particular reason why a robot with four limbs has to choose between being a quadruped and being a biped with arms, or some hybrid between the two, depending on what its task is. The research being done with LOLA could be a step in that direction, and maybe a hand on the wall in that direction, too.
#439051 ‘Neutrobots’ smuggle drugs ...
A team of researchers from the Harbin Institute of Technology along with partners at the First Affiliated Hospital of Harbin Medical University, both in China, has developed a tiny robot that can ferry cancer drugs through the blood-brain barrier (BBB) without setting off an immune reaction. In their paper published in the journal Science Robotics, the group describes their robot and tests with mice. Junsun Hwang and Hongsoo Choi, with the Daegu Gyeongbuk Institute of Science and Technology in Korea, have published a Focus piece in the same journal issue on the work done by the team in China.