Tag Archives: creators

#432646 How Fukushima Changed Japanese Robotics ...

In March 2011, Japan was hit by a catastrophic earthquake that triggered a devastating tsunami. Thousands were killed, and billions of dollars of damage was done in one of the worst disasters of modern times. For a few perilous weeks, though, the eyes of the world were focused on the Fukushima Daiichi nuclear power plant. Its safety systems were unable to cope with the tsunami damage, and there were widespread fears of a catastrophic meltdown that could spread radiation across several countries, as the Chernobyl disaster did in 1986. A heroic effort that included injecting seawater into the reactor cores prevented an even bigger catastrophe. Even so, around a hundred thousand people remain evacuated from the area, and it will likely take many years and hundreds of billions of dollars before the region is safe again.

Because radiation is so dangerous to humans, the natural solution to the Fukushima disaster was to send in robots to monitor levels of radiation and attempt to begin the clean-up process. The techno-optimists in Japan had discovered a challenge, deep in the heart of that reactor core, that even their optimism could not solve. The radiation fried the circuits of the robots that were sent in, even those specifically designed and built to deal with the Fukushima catastrophe. The power plant slowly became a vast robot graveyard. While some robots initially saw success in measuring radiation levels around the plant—and, recently, a robot was able to identify the melted uranium fuel at the heart of the disaster—hopes of them playing a substantial role in the clean-up are starting to diminish.



In Tokyo’s neon Shibuya district, it can sometimes seem like it’s brighter at night than it is during the daytime. In karaoke booths on the twelfth floor—because everything is on the twelfth floor—overlooking the brightly-lit streets, businessmen unwind by blasting out pop hits. It can feel like the most artificial place on Earth; your senses are dazzled by the futuristic techno-optimism. Stock footage of the area has become symbolic of futurism and modernity.

Japan has had a reputation for being a nation of futurists for a long time. We’ve already described how tech giant Softbank, headed by visionary founder Masayoshi Son, is investing billions in a technological future, including plans for the world’s largest solar farm.

When Google sold pioneering robotics company Boston Dynamics in 2017, Softbank added it to its portfolio, alongside the famous Nao and Pepper robots. Some may think that Son is taking a gamble in pursuing a robotics venture in which even Google couldn't succeed, but this is a man who lost nearly everything in the dot-com crash of 2000. The fact that even this reversal didn't dent his optimism and faith in technology is telling. But how long can it last?

The failure of Japan’s robots to deal with the immense challenge of Fukushima has sparked something of a crisis of conscience within the industry. Disaster response is an obvious stepping-stone technology for robots. Initially, producing a humanoid robot will be very costly, and the robot will be less capable than a human; building a robot to wait tables might not be particularly economical yet. Building a robot to do jobs that are too dangerous for humans is far more viable. Yet, at Fukushima, in one of the most advanced nations in the world, many of the robots weren’t up to the task.

Nowhere was this crisis felt more keenly than at Honda; the company had developed ASIMO, which stunned the world in 2000 and remains an iconic humanoid robot. Despite all this technological advancement, however, Honda knew that ASIMO was still too unreliable for the real world.

It was Fukushima that triggered a sea-change in Honda’s approach to robotics. Two years after the disaster, there were rumblings that Honda was developing a disaster robot, and in October 2017, the prototype was revealed to the public for the first time. It’s not yet ready for deployment in disaster zones, however. Interestingly, the creators chose not to give it dexterous hands but instead to assume that remotely-operated tools fitted to the robot would be a better solution for the range of circumstances it might encounter.

This shift in focus for humanoid robots away from entertainment and amusement like ASIMO, and towards being practically useful, has been mirrored across the world.

In 2015, also inspired by the Fukushima disaster and the lack of disaster-ready robots, the DARPA Robotics Challenge tested humanoid robots with a range of tasks that might be needed in emergency response, such as driving cars, opening doors, and climbing stairs. The Terminator-like ATLAS robot from Boston Dynamics, alongside Korean robot HUBO, took many of the plaudits, and CHIMP also put in an impressive display by being able to right itself after falling.

Yet the DARPA Robotics Challenge showed just how far robots are from being as useful as we'd like, or even as we imagine. Many robots took hours to complete tasks that had been highly idealized to suit them. Climbing stairs proved a particular challenge. Those who watched were more likely to see a robot that had fallen over and was struggling to get up than heroic superbots striding in to save the day. The striding itself proved a particular problem: the fastest robot, HUBO, got around it by dropping onto wheels in its knees whenever walking wasn't strictly necessary.

Fukushima may have brought a sea-change over futuristic Japan, but before robots will really begin to enter our everyday lives, they will need to prove their worth. In the interim, aerial drone robots designed to examine infrastructure damage after disasters may well see earlier deployment and more success.

It’s a considerable challenge.

Building a humanoid robot is expensive; if these multi-million-dollar machines can't help in a crisis, people may begin to question the worth of investing in them in the first place (unless your aim is just to make viral videos). This could lead to a further crisis of confidence in Japan, which is starting to rely on humanoid robotics as a solution to its aging population. The Japanese government, as part of its robot strategy, has already invested $44 million in their development.

But if they continue to fail when put to the test, that will raise serious concerns. In Tokyo's Akihabara district, you can see all kinds of flashy robotic toys for sale in the neon-lit superstores, and dancing, acting robots like RoboThespian can entertain crowds all over the world. But if we want these machines to be anything more than toys—partners, helpers, even saviors—more work needs to be done.

At the same time, those who participated in the DARPA Robotics Challenge in 2015 won't be too concerned that people were underwhelmed by the performance of their disaster relief robots. Back in 2004, nearly every participant in the DARPA Grand Challenge crashed, caught fire, or failed on the starting line. To an outside observer, the whole thing would have seemed like an unmitigated disaster and a pointless investment. What was the task in 2004? Developing a self-driving car. A lot can change in a decade.

Image Credit: MARCUSZ2527 / Shutterstock.com

Posted in Human Robots

#432352 Watch This Lifelike Robot Fish Swim ...

Earth’s oceans are having a rough go of it these days. On top of being the repository for millions of tons of plastic waste, global warming is affecting the oceans and upsetting marine ecosystems in potentially irreversible ways.

Coral bleaching, for example, occurs when warming water temperatures or other stress factors cause coral to cast off the algae that live on them. The coral goes from lush and colorful to white and bare, and sometimes dies off altogether. This has a ripple effect on the surrounding ecosystem.

Warmer water temperatures have also prompted many species of fish to move closer to the north or south poles, disrupting fisheries and altering undersea environments.

To keep these issues in check or, better yet, try to address and improve them, it’s crucial for scientists to monitor what’s going on in the water. A paper released last week by a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new tool for studying marine life: a biomimetic soft robotic fish, dubbed SoFi, that can swim with, observe, and interact with real fish.

SoFi isn’t the first robotic fish to hit the water, but it is the most advanced robot of its kind. Here’s what sets it apart.

It swims in three dimensions
Up until now, most robotic fish could only swim forward at a given water depth, advancing at a steady speed. SoFi blows older models out of the water. It’s equipped with side fins called dive planes, which move to adjust its angle and allow it to turn, dive downward, or head closer to the surface. Its density and thus its buoyancy can also be adjusted by compressing or decompressing air in an inner compartment.
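The density trick is just Archimedes' principle at work. A minimal sketch (with made-up numbers; these are illustrative, not SoFi's actual specifications) shows how a small change in an internal air chamber flips the robot between rising and sinking:

```python
# Illustrative buoyancy calculation for a robot that trims its depth by
# compressing or decompressing air in an internal chamber.
# All figures are invented for illustration; they are not SoFi's specs.

RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def net_buoyant_force(mass_kg, hull_volume_m3, air_chamber_m3):
    """Upward buoyant force minus weight, in newtons.

    Expanding the air chamber increases displaced volume (more lift);
    compressing it reduces displaced volume (the robot sinks).
    """
    displaced = hull_volume_m3 + air_chamber_m3
    return RHO_WATER * displaced * G - mass_kg * G

mass = 1.6    # kg, hypothetical robot mass
hull = 0.0015 # m^3, fixed hull volume
print(net_buoyant_force(mass, hull, 0.00012))  # expanded chamber: positive, robot rises
print(net_buoyant_force(mass, hull, 0.00008))  # compressed chamber: negative, robot sinks
```

A difference of a few tens of milliliters of displaced volume is enough to swing the net force either way, which is why a small onboard chamber can control depth.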

“To our knowledge, this is the first robotic fish that can swim untethered in three dimensions for extended periods of time,” said CSAIL PhD candidate Robert Katzschmann, lead author of the study. “We are excited about the possibility of being able to use a system like this to get closer to marine life than humans can get on their own.”

The team took SoFi to the Rainbow Reef in Fiji to test out its swimming skills, and the robo fish didn’t disappoint—it was able to swim at depths of over 50 feet for 40 continuous minutes. What keeps it swimming? A lithium polymer battery just like the one that powers our smartphones.

It’s remote-controlled… by Super Nintendo
SoFi has sensors to help it see what’s around it, but it doesn’t have a mind of its own yet. Rather, it’s controlled by a nearby scuba-diving human, who can send it commands related to speed, diving, and turning. The best part? The commands come from an actual repurposed (and waterproofed) Super Nintendo controller. What’s not to love?

Image Credit: MIT CSAIL
Previous robotic fish built by this team had to be tethered to a boat, so the fact that SoFi can swim independently is a pretty big deal. Communication between the fish and the diver was most successful when the two were less than 10 meters apart.

It looks real, sort of
SoFi’s side fins are a bit stiff, and its camera may not pass for natural—but otherwise, it looks a lot like a real fish. This is mostly thanks to the way its tail moves; a motor pumps water between two chambers in the tail, and as one chamber fills, the tail bends towards that side, then towards the other side as water is pumped into the other chamber. The result is a motion that closely mimics the way fish swim. Not only that, the hydraulic system can change the water flow to get different tail movements that let SoFi swim at varying speeds; its average speed is around half a body length (21.7 centimeters) per second.
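As a rough illustration of that alternating-chamber motion, the tail angle can be modeled as a sinusoid whose frequency and amplitude the pump controls (a toy model with invented parameters, not figures from the CSAIL paper):

```python
# Toy model of the two-chamber hydraulic tail: pumping water into one
# chamber bends the tail toward it, then the flow reverses, so the tail
# angle over time is roughly sinusoidal, like a real fish's tail beat.
# The frequency and amplitude below are invented for illustration.
import math

def tail_angle(t, freq_hz=1.4, max_angle_deg=30.0):
    """Tail deflection in degrees at time t for a given beat frequency."""
    return max_angle_deg * math.sin(2 * math.pi * freq_hz * t)

# Sample one full beat cycle: the tail swings from one side to the other.
samples = [round(tail_angle(t / 10, freq_hz=1.0), 1) for t in range(11)]
print(samples)
```

Changing `freq_hz` in a model like this is the software analogue of the pump changing its flow rate to swim at different speeds.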

Besides looking neat, it's important that SoFi look lifelike so it can blend in with marine life rather than scaring real fish away, letting it get close enough to observe them.

“A robot like this can help explore the reef more closely than current robots, both because it can get closer more safely for the reef and because it can be better accepted by the marine species,” said Cecilia Laschi, a biorobotics professor at the Sant’Anna School of Advanced Studies in Pisa, Italy.

Just keep swimming
It sounds like this fish is nothing short of a regular Nemo. But its creators aren’t quite finished yet.

They’d like SoFi to be able to swim faster, so they’ll work on improving the robo fish’s pump system and streamlining its body and tail design. They also plan to tweak SoFi’s camera to help it follow real fish.

“We view SoFi as a first step toward developing almost an underwater observatory of sorts,” said CSAIL director Daniela Rus. “It has the potential to be a new type of tool for ocean exploration and to open up new avenues for uncovering the mysteries of marine life.”

The CSAIL team plans to make a whole school of SoFis to help biologists learn more about how marine life is reacting to environmental changes.

Image Credit: MIT CSAIL


#432236 Why Hasn’t AI Mastered Language ...

In the myth about the Tower of Babel, people conspired to build a city and tower that would reach heaven. Their creator observed, “And now nothing will be restrained from them, which they have imagined to do.” According to the myth, God thwarted this effort by creating diverse languages so that they could no longer collaborate.

In our modern times, we’re experiencing a state of unprecedented connectivity thanks to technology. However, we’re still living under the shadow of the Tower of Babel. Language remains a barrier in business and marketing. Even though technological devices can quickly and easily connect, humans from different parts of the world often can’t.

Translation agencies step in, making presentations, contracts, outsourcing instructions, and advertisements comprehensible to all intended recipients. Some agencies also offer “localization” expertise. For instance, if a company is marketing in Quebec, the advertisements need to be in Québécois French, not European French. Risk-averse companies may be reluctant to invest in these translations, and consequently their ventures haven't achieved full market penetration.

Global markets are waiting, but AI-powered language translation isn’t ready yet, despite recent advancements in natural language processing and sentiment analysis. AI still has difficulties processing requests in one language, without the additional complications of translation. In November 2016, Google added a neural network to its translation tool. However, some of its translations are still socially and grammatically odd. I spoke to technologists and a language professor to find out why.

“To Google’s credit, they made a pretty massive improvement that appeared almost overnight. You know, I don’t use it as much. I will say this. Language is hard,” said Michael Housman, chief data science officer at RapportBoost.AI and faculty member of Singularity University.

He explained that the ideal scenario for machine learning and artificial intelligence is something with fixed rules and a clear-cut measure of success or failure. He named chess as an obvious example, and noted that machines were also able to beat the best human Go player, faster than anyone anticipated, because of the game's very clear rules and limited set of moves.

Housman elaborated, “Language is almost the opposite of that. There aren’t as clearly-cut and defined rules. The conversation can go in an infinite number of different directions. And then of course, you need labeled data. You need to tell the machine to do it right or wrong.”

Housman noted that it’s inherently difficult to assign these informative labels. “Two translators won’t even agree on whether it was translated properly or not,” he said. “Language is kind of the wild west, in terms of data.”
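This disagreement between annotators is itself a measurable quantity. One standard metric is Cohen's kappa, which scores agreement between two raters after discounting agreement expected by chance; a minimal sketch (with invented translator judgments) looks like this:

```python
# Minimal Cohen's kappa: agreement between two annotators, corrected
# for agreement expected by chance. The label lists are invented examples
# of two translators judging the same sentences "good" or "bad".
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Fraction of items the two annotators actually agree on.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Agreement expected if each labeled at random with their own frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / n**2
    return (observed - expected) / (1 - expected)

a = ["good", "good", "bad", "good", "bad", "good"]
b = ["good", "bad",  "bad", "good", "good", "good"]
print(round(cohens_kappa(a, b), 2))  # 0.25: barely better than chance
```

A kappa of 1 means perfect agreement and 0 means no better than chance; low kappa on translation-quality labels is exactly the “wild west” of data Housman describes.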

Google’s technology is now able to consider the entirety of a sentence, as opposed to merely translating individual words. Still, the glitches linger. I asked Dr. Jorge Majfud, Associate Professor of Spanish, Latin American Literature, and International Studies at Jacksonville University, to explain why consistently accurate language translation has thus far eluded AI.

He replied, “The problem is that considering the ‘entire’ sentence is still not enough. The same way the meaning of a word depends on the rest of the sentence (more in English than in Spanish), the meaning of a sentence depends on the rest of the paragraph and the rest of the text, as the meaning of a text depends on a larger context called culture, speaker intentions, etc.”

He noted that sarcasm and irony only make sense within this widened context. Similarly, idioms can be problematic for automated translations.

“Google translation is a good tool if you use it as a tool, that is, not to substitute human learning or understanding,” he said, before offering examples of mistranslations that could occur.

“Months ago, I went to buy a drill at Home Depot and I read a sign under a machine: ‘Saw machine.’ Right below it, the Spanish translation: ‘La máquina vió,’ which means, ‘The machine did see it.’ Saw, not as a noun but as a verb in the preterit form,” he explained.
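That mistranslation is easy to reproduce with a naive word-for-word lookup, which has no way to tell the noun “saw” from the past tense of “see” (the tiny dictionary below is invented for illustration and is not how any real translation system works internally):

```python
# A deliberately naive word-by-word "translator": each English word maps
# to a single Spanish entry, so "saw" the cutting tool and "saw" the past
# tense of "see" collapse into one translation. The lexicon is a toy.
TOY_LEXICON = {
    "saw": "vio",        # only the verb sense is stored
    "machine": "máquina",
    "the": "la",
}

def word_by_word(sentence):
    return " ".join(TOY_LEXICON.get(w, w) for w in sentence.lower().split())

print(word_by_word("Saw machine"))  # "vio máquina": the verb sense, wrongly
# A correct translation needs the noun sense ("sierra", the tool), which
# word-level lookup cannot recover without context.
```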

Dr. Majfud warned, “We should be aware of the fragility of their ‘interpretation.’ Because to translate is basically to interpret, not just an idea but a feeling. Human feelings and ideas that only humans can understand—and sometimes not even we, humans, understand other humans.”

He noted that cultures, gender, and even age can pose barriers to this understanding and also contended that an over-reliance on technology is leading to our cultural and political decline. Dr. Majfud mentioned that Argentinean writer Julio Cortázar used to refer to dictionaries as “cemeteries.” He suggested that automatic translators could be called “zombies.”

Erik Cambria is an academic AI researcher and assistant professor at Nanyang Technological University in Singapore. He mostly focuses on natural language processing, which is at the core of AI-powered language translation. Like Dr. Majfud, he sees the complexity and associated risks. “There are so many things that we unconsciously do when we read a piece of text,” he told me. Reading comprehension requires multiple interrelated tasks, which haven’t been accounted for in past attempts to automate translation.

Cambria continued, “The biggest issue with machine translation today is that we tend to go from the syntactic form of a sentence in the input language to the syntactic form of that sentence in the target language. That’s not what we humans do. We first decode the meaning of the sentence in the input language and then we encode that meaning into the target language.”

Additionally, there are cultural risks involved with these translations. Dr. Ramesh Srinivasan, Director of UCLA’s Digital Cultures Lab, said that new technological tools sometimes reflect underlying biases.

“There tend to be two parameters that shape how we design ‘intelligent systems.’ One is the values and you might say biases of those that create the systems. And the second is the world if you will that they learn from,” he told me. “If you build AI systems that reflect the biases of their creators and of the world more largely, you get some, occasionally, spectacular failures.”

Dr. Srinivasan said translation tools should be transparent about their capabilities and limitations. He said, “You know, the idea that a single system can take languages that I believe are very diverse semantically and syntactically from one another and claim to unite them or universalize them, or essentially make them sort of a singular entity, it’s a misnomer, right?”

Mary Cochran, co-founder of Launching Labs Marketing, sees the commercial upside. She mentioned that listings in online marketplaces such as Amazon could potentially be auto-translated and optimized for buyers in other countries.

She said, “I believe that we’re just at the tip of the iceberg, so to speak, with what AI can do with marketing. And with better translation, and more globalization around the world, AI can’t help but lead to exploding markets.”

Image Credit: igor kisselev / Shutterstock.com


#431908 CES 2018: Misty Robotics Introduces ...

From the creators of Sphero, this robot is designed to allow non-robotics programmers to create useful applications.


#431203 Could We Build a Blade Runner-Style ...

The new Blade Runner sequel will return us to a world where sophisticated androids made with organic body parts can match the strength and emotions of their human creators. As someone who builds biologically inspired robots, I’m interested in whether our own technology will ever come close to matching the “replicants” of Blade Runner 2049.
The reality is that we’re a very long way from building robots with human-like abilities. But advances in so-called soft robotics show a promising way forward for technology that could be a new basis for the androids of the future.
From a scientific point of view, the real challenge is replicating the complexity of the human body, which is made up of trillions of cells; we have no idea how to build such a complex machine, one indistinguishable from a human. The most complex machines today, such as the world's largest airliner, the Airbus A380, are composed of millions of parts. To match the complexity of a human, we would need to scale this up about a million times.
There are currently three different ways that engineering is making the border between humans and robots more ambiguous. Unfortunately, these approaches are only starting points and are not yet even close to the world of Blade Runner.
First, there are human-like robots built from scratch by assembling artificial sensors, motors, and computers to resemble the human body and its motion. However, extending current human-like robots would not bring Blade Runner-style androids much closer to humans, because every artificial component, from sensors to motors, is still hopelessly primitive compared to its biological counterpart.
There is also cyborg technology, where the human body is enhanced with machines such as robotic limbs and wearable and implantable devices. This technology is similarly very far away from matching our own body parts.
Finally, there is the technology of genetic manipulation, where an organism’s genetic code is altered to modify that organism’s body. Although we have been able to identify and manipulate individual genes, we still have a limited understanding of how an entire human emerges from genetic code. As such, we don’t know the degree to which we can actually program code to design everything we wish.
Soft robotics: a way forward?
But we might be able to move robotics closer to the world of Blade Runner by pursuing other technologies and, in particular, by turning to nature for inspiration. The field of soft robotics is a good example. In the last decade or so, robotics researchers have been making considerable efforts to make robots soft, deformable, squishable, and flexible.
This technology is inspired by the fact that 90% of the human body is made from soft substances such as skin, hair, and tissues. This is because most of the fundamental functions in our body rely on soft parts that can change shape, from the heart and lungs pumping fluid around the body to the lenses of our eyes changing shape to focus. Cells even change shape to trigger division, self-healing and, ultimately, the evolution of the body.
The softness of our bodies is the origin of all their functionality needed to stay alive. So being able to build soft machines would at least bring us a step closer to the robotic world of Blade Runner. Some of the recent technological advances include artificial hearts made out of soft functional materials that are pumping fluid through deformation. Similarly, soft, wearable gloves can help make hand grasping stronger. And “epidermal electronics” has enabled us to tattoo electronic circuits onto our biological skins.
Softness is the keyword that brings humans and technologies closer together. Sensors, motors, and computers can suddenly be integrated into human bodies once they become soft, and the border between us and external devices grows ambiguous, just as soft contact lenses have become part of our eyes.
Nevertheless, the hardest challenge is making the individual parts of a soft robot body physically adaptable: able to self-heal, grow, and differentiate. After all, in biological systems every part of an organism is itself alive, which is what makes our bodies so adaptable and evolvable; replicating that property could make machines truly indistinguishable from ourselves.
It is impossible to predict when the robotic world of Blade Runner might arrive, and if it does, it will probably be very far in the future. But as long as the desire to build machines indistinguishable from humans is there, the current trends of robotic revolution could make it possible to achieve that dream.
This article was originally published on The Conversation. Read the original article.
Image Credit: Dariush M / Shutterstock.com
