How Fukushima Changed Japanese Robotics
In March 2011, Japan was hit by a catastrophic earthquake that triggered a devastating tsunami. Thousands were killed and billions of dollars of damage was done in one of the worst disasters of modern times. For a few perilous weeks, though, the eyes of the world were focused on the Fukushima Daiichi nuclear power plant. Its safety systems were unable to cope with the tsunami damage, and there were widespread fears of a catastrophic meltdown that could spread radiation across several countries, as the Chernobyl disaster did in the 1980s. A heroic effort that included dumping seawater into the reactor core prevented an even bigger catastrophe. Even so, around a hundred thousand people remain evacuated from the area, and it will likely take many years and hundreds of billions of dollars before the region is safe.
Because radiation is so dangerous to humans, the natural solution to the Fukushima disaster was to send in robots to monitor levels of radiation and attempt to begin the clean-up process. The techno-optimists in Japan had discovered a challenge, deep in the heart of that reactor core, that even their optimism could not solve. The radiation fried the circuits of the robots that were sent in, even those specifically designed and built to deal with the Fukushima catastrophe. The power plant slowly became a vast robot graveyard. While some robots initially saw success in measuring radiation levels around the plant—and, recently, a robot was able to identify the melted uranium fuel at the heart of the disaster—hopes of them playing a substantial role in the clean-up are starting to diminish.
In Tokyo’s neon Shibuya district, it can sometimes seem like it’s brighter at night than it is during the daytime. In karaoke booths on the twelfth floor—because everything is on the twelfth floor—overlooking the brightly-lit streets, businessmen unwind by blasting out pop hits. It can feel like the most artificial place on Earth; your senses are dazzled by the futuristic techno-optimism. Stock footage of the area has become symbolic of futurism and modernity.
Japan has had a reputation for being a nation of futurists for a long time. We’ve already described how tech giant Softbank, headed by visionary founder Masayoshi Son, is investing billions in a technological future, including plans for the world’s largest solar farm.
When Google sold pioneering robotics company Boston Dynamics in 2017, Softbank added it to its portfolio, alongside the famous Nao and Pepper robots. Some may think Son is taking a gamble by pursuing a robotics project that even Google couldn't make a success of, but this is a man who lost nearly everything in the dot-com crash of 2000. That even this reversal didn't dent his optimism and faith in technology is telling. But how long can it last?
The failure of Japan’s robots to deal with the immense challenge of Fukushima has sparked something of a crisis of conscience within the industry. Disaster response is an obvious stepping-stone technology for robots. Initially, producing a humanoid robot will be very costly, and the robot will be less capable than a human; building a robot to wait tables might not be particularly economical yet. Building a robot to do jobs that are too dangerous for humans is far more viable. Yet, at Fukushima, in one of the most advanced nations in the world, many of the robots weren’t up to the task.
Nowhere was this crisis felt more keenly than at Honda; the company had developed ASIMO, which stunned the world in 2000 and remains an iconic humanoid robot. Despite all this technological advancement, however, Honda knew that ASIMO was still too unreliable for the real world.
It was Fukushima that triggered a sea-change in Honda’s approach to robotics. Two years after the disaster, there were rumblings that Honda was developing a disaster robot, and in October 2017, the prototype was revealed to the public for the first time. It’s not yet ready for deployment in disaster zones, however. Interestingly, the creators chose not to give it dexterous hands but instead to assume that remotely-operated tools fitted to the robot would be a better solution for the range of circumstances it might encounter.
This shift in focus for humanoid robots, away from entertainment and amusement in the vein of ASIMO and towards practical usefulness, has been mirrored across the world.
In 2015, also inspired by the Fukushima disaster and the lack of disaster-ready robots, the DARPA Robotics Challenge tested humanoid robots with a range of tasks that might be needed in emergency response, such as driving cars, opening doors, and climbing stairs. The Terminator-like ATLAS robot from Boston Dynamics, alongside Korean robot HUBO, took many of the plaudits, and CHIMP also put in an impressive display by being able to right itself after falling.
Yet the DARPA Robotics Challenge showed us just how far the robots are from truly being as useful as we'd like, or maybe even as we imagine. Many robots took hours to complete the tasks, which were highly idealized to suit them. Climbing stairs proved a particular challenge. Those who watched were more likely to see a robot that had fallen over, struggling to get up, than heroic superbots striding in to save the day. The “striding” proved a particular problem: HUBO, the fastest robot, managed it by dropping onto wheels built into its knees whenever walking wasn't strictly necessary.
Fukushima may have brought a sea-change to futuristic Japan, but before robots really begin to enter our everyday lives, they will need to prove their worth. In the interim, aerial drones designed to examine infrastructure damage after disasters may well see earlier deployment and more success.
It’s a considerable challenge.
Building a humanoid robot is expensive; if these multi-million-dollar machines can't help in a crisis, people may begin to question whether they're worth investing in at all (unless your aim is just to make viral videos). This could lead to a further crisis of confidence in Japan, which is starting to rely on humanoid robotics as an answer to its aging population. The Japanese government, as part of its robot strategy, has already invested $44 million in their development.
But if they continue to fail when put to the test, that will raise serious concerns. In Tokyo’s Akihabara district, you can see all kinds of flash robotic toys for sale in the neon-lit superstores, and dancing, acting robots like Robothespian can entertain crowds all over the world. But if we want these machines to be anything more than toys—partners, helpers, even saviors—more work needs to be done.
At the same time, those who participated in the DARPA Robotics Challenge in 2015 won't be too concerned that people were underwhelmed by the performance of their disaster relief robots. Back in 2004, nearly every participant in the DARPA Grand Challenge crashed, caught fire, or failed on the starting line. To an outside observer, the whole thing would have seemed like an unmitigated disaster and a pointless investment. What was the task in 2004? Developing a self-driving car. A lot can change in a decade.
Image Credit: MARCUSZ2527 / Shutterstock.com
Can We Make a Musical Turing Test?
As artificial intelligence advances, we’re encountering the same old questions. How much of what we consider to be fundamentally human can be reduced to an algorithm? Can we create something sufficiently advanced that people can no longer distinguish between the two? This, after all, is the idea behind the Turing Test, which has yet to be passed.
At first glance, you might think music is beyond the realm of algorithms. Birds can sing, and people can compose symphonies. Music is evocative; it makes us feel. Very often, our intense personal and emotional attachments to music are because it reminds us of our shared humanity. We are told that creative jobs are the least likely to be automated. Creativity seems fundamentally human.
But above all, I think, we view it as reductionist sacrilege to dissect beautiful things: “If you try to strangle a skylark / to cut it up, see how it works / you will stop its heart from beating / you will stop its mouth from singing.” A human musician wrote that. A machine might be able to string together words that are happy or sad; it might even be able to conjure up a decent metaphor from the depths of some neural network. But could it understand humanity enough to produce art that speaks to humans?
Then, of course, there's the other side of the debate. Music, after all, has a deeply mathematical structure; you can train a machine to produce harmonics. “In the teachings of Pythagoras and his followers, music was inseparable from numbers, which were thought to be the key to the whole spiritual and physical universe,” according to Grout in A History of Western Music. You might argue that the process of musical composition cannot be reduced to a simple algorithm, yet musicians have often done exactly that. Mozart, with his “Dice Music,” used rolls of the dice to decide how to order pre-written musical fragments: creativity through an 18th-century random number generator. Algorithmic music goes back a very long way, with the first papers on the subject dating from the 1960s.
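To make the dice-game idea concrete, here's a minimal Python sketch of that kind of fragment shuffling. The table of one-bar fragments is invented purely for illustration (Mozart's actual game used a larger table of pre-composed bars indexed by two-dice rolls); the point is simply that the “composition” step is nothing more than a random number generator choosing the order.

```python
import random

# Hypothetical table of interchangeable one-bar fragments, keyed by the sum of
# two dice (2 through 12). These note strings are invented for illustration only.
FRAGMENTS = {
    2:  "C4 E4 G4 C5",
    3:  "D4 F4 A4 D5",
    4:  "E4 G4 B4 E5",
    5:  "F4 A4 C5 F5",
    6:  "G4 B4 D5 G5",
    7:  "A4 C5 E5 A5",
    8:  "B4 D5 F5 B5",
    9:  "C5 E5 G5 C6",
    10: "D5 F5 A5 D6",
    11: "E5 G5 B5 E6",
    12: "F5 A5 C6 F6",
}

def roll_two_dice():
    """Roll two six-sided dice and return their sum (2 through 12)."""
    return random.randint(1, 6) + random.randint(1, 6)

def compose(bars=8):
    """Pick one pre-composed fragment per bar, ordered purely by dice rolls."""
    return [FRAGMENTS[roll_two_dice()] for _ in range(bars)]

if __name__ == "__main__":
    for i, bar in enumerate(compose(), start=1):
        print(f"bar {i}: {bar}")
```

Every run produces a different piece, yet every piece stays “in style,” because all of the actual musicality lives in the pre-composed fragments rather than in the random process that orders them.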
Then there’s the techno-enthusiast side of the argument. iTunes has 26 million songs, easily more than a century of music. A human could never listen to and learn from them all, but a machine could. It could also memorize every note of Beethoven. Music can be converted into MIDI files, a nice chewable data format that allows even a character-by-character neural net you can run on your computer to generate music. (Seriously, even I could get this thing working.)
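As a toy illustration of what character-by-character generation looks like, here's a short sketch that works on a melody encoded as plain text. It uses a simple Markov chain rather than the neural network mentioned above, and the corpus string is a made-up stand-in for real training data, but the generation loop is the same idea: predict the next character from the ones that came before it.

```python
import random
from collections import defaultdict

# Stand-in "training data": a melody written as plain text (ABC-style note
# letters and bar lines). A real experiment would use a large corpus instead.
CORPUS = "CDEFGABcdefgab|CEGc|DFAd|EGBe|" * 8

def build_model(text, order=2):
    """Map each `order`-character context to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=40):
    """Emit one character at a time, sampling from what followed each context."""
    out = seed
    for _ in range(length):
        context = out[-len(seed):]
        choices = model.get(context) or list("CDEFGAB")  # unseen context: fall back
        out += random.choice(choices)
    return out

if __name__ == "__main__":
    model = build_model(CORPUS, order=2)
    print(generate(model, seed="CE"))
```

A character-level neural net replaces the lookup table with a learned model that can generalize to contexts it has never seen, but the output loop, one character at a time, is essentially the same.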
Indeed, generating music in the style of Bach has long been a test for AI, and you can see neural networks gradually learn to imitate classical composers while trying to avoid overfitting. When an algorithm overfits, it essentially starts copying the existing music rather than taking inspiration from it to create something similar but new: a tightrope the best human artists learn to walk. Creativity doesn't spring from nowhere; even maverick musical geniuses have their influences.
Does a machine have to be truly ‘creative’ to produce something that someone would find valuable? To what extent would listeners’ attitudes change if they thought they were hearing a human vs. an AI composition? This all suggests a musical Turing Test. Of course, it already exists. In fact, it’s run out of Dartmouth, the school that hosted that first, seminal AI summer conference. This year, the contest is bigger than ever: alongside the PoetiX, LimeriX and LyriX competitions for poetry and lyrics, there’s a DigiKidLit competition for children’s literature (although you may have reservations about exposing your children to neural-net generated content… it can get a bit surreal).
There’s also a pair of musical competitions, including one for original compositions in different genres. Key genres and styles are represented by Charlie Parker for Jazz and the Bach chorales for classical music. There’s also a free composition, and a contest where a human and an AI try to improvise together—the AI must respond to a human spontaneously, in real time, and in a musically pleasing way. Quite a challenge! In all cases, if any of the generated work is indistinguishable from human performers, the neural net has passed the Turing Test.
Did they? Here’s part of 2017’s winning sonnet from Charese Smiley and Hiroko Bretz:
The large cabin was in total darkness.
Come marching up the eastern hill afar.
When is the clock on the stairs dangerous?
Everything seemed so near and yet so far.
Behind the wall silence alone replied.
Was, then, even the staircase occupied?
Generating the rhymes is easy enough, and the sentence structure only a little trickier; what's impressive about this sonnet is that it sticks to a single topic and reads as a coherent whole. I'd guess they used associated “lexical fields” of similar words to help generate something coherent. In a similar way, most of the more famous examples of AI-generated music still involve some amount of human control, even if it's editorial; a human will build a song around an AI-generated riff, or select the most convincing Bach chorale from among many different samples.
We are seeing strides forward in the ability of AI to generate human voices and human likenesses. As the latter example shows, in the fake news era people have focused on the dangers of this tech. But might it also be possible to create a virtual performer, trained on a dataset of their original music? Did you ever want to hear another Beatles album, or jam with Miles Davis? Of course, these things are impossible. But could we create a similar experience that people would genuinely value? Even, to the untrained eye, something indistinguishable from the real thing?
And if it did measure up to the real thing, what would this mean? Jaron Lanier is a fascinating technology writer, a critic of strong AI, and a believer in the power of virtual reality to change the world and provide truly meaningful experiences. He’s also a composer and a musical aficionado. He pointed out in a recent interview that translation algorithms, by reducing the amount of work translators are commissioned to do, have, in some sense, profited from stolen expertise. They were trained on huge datasets purloined from human linguists and translators. If you can train an AI on someone’s creative output and it produces new music, who “owns” it?
Although companies that offer AI music tools are starting to proliferate, and some groups will argue that the musical Turing test has been passed already, AI-generated music is hardly racing to the top of the pop charts just yet. Even as the line between human-composed and AI-generated music starts to blur, there’s still a gulf between the average human and musical genius. In the next few years, we’ll see how far the current techniques can take us. It may be the case that there’s something in the skylark’s song that can’t be generated by machines. But maybe not, and then this song might need an extra verse.
Image Credit: d1sk / Shutterstock.com