Tag Archives: robotic

#432519 Robot Cities: Three Urban Prototypes for ...

Before I started working on real-world robots, I wrote about their fictional and historical ancestors. This isn’t so far removed from what I do now. In factories, labs, and of course science fiction, imaginary robots keep fueling our imagination about artificial humans and autonomous machines.

Real-world robots remain surprisingly dysfunctional, although they are steadily infiltrating urban areas across the globe. This fourth industrial revolution driven by robots is shaping urban spaces and urban life in response to opportunities and challenges in economic, social, political, and healthcare domains. Our cities are becoming too big for humans to manage.

Good city governance enables and maintains the smooth flow of things, data, and people. These flows include public services, traffic, and deliveries. Long queues in hospitals and banks imply poor management. Traffic congestion shows that roads and traffic systems are inadequate. Goods that we increasingly order online don’t arrive fast enough. And the WiFi often fails our 24/7 digital needs. In sum, urban life, characterized by environmental pollution, a relentless pace, traffic congestion, constant connectivity, and rising consumption, needs robotic solutions—or so we are led to believe.

Is this what the future holds? Image Credit: Photobank gallery / Shutterstock.com
In the past five years, national governments have started to see automation as the key to (better) urban futures. Many cities are becoming test beds where national and local governments experiment with robots in social spaces, places where robots serve both a practical purpose (facilitating everyday life) and a symbolic one (demonstrating good city governance). Whether through autonomous cars, automated pharmacists, service robots in local stores, or autonomous drones delivering Amazon parcels, cities are being automated at a steady pace.

Many large cities (Seoul, Tokyo, Shenzhen, Singapore, Dubai, London, San Francisco) serve as test beds for autonomous vehicle trials in a competitive race to develop “self-driving” cars. Ports and warehouses are also increasingly automated and robotized. Testing of delivery robots and drones is gathering pace beyond the warehouse gates. Automated control systems are monitoring, regulating, and optimizing traffic flows. Automated vertical farms are innovating food production in “non-agricultural” urban areas around the world. New mobile health technologies carry the promise of healthcare “beyond the hospital.” Social robots in many guises—from police officers to restaurant waiters—are appearing in urban public and commercial spaces.

Vertical indoor farm. Image Credit: Aisyaqilumaranas / Shutterstock.com
As these examples show, urban automation is advancing in fits and starts, ignoring some areas and racing ahead in others. As yet, no one seems to be taking account of all these varied and interconnected developments. So how are we to forecast our cities of the future? Only a broad view allows us to do this. To give a sense of it, here are three examples: Tokyo, Dubai, and Singapore.

Tokyo
As Tokyo prepares to host the 2020 Olympics, Japan’s government plans to use the event to showcase many new robotic technologies, turning the city into an urban living lab. The institution in charge is the Robot Revolution Realization Council, established in 2014 by the government of Japan.

Tokyo: city of the future. Image Credit: ESB Professional / Shutterstock.com
The main objectives of Japan’s robotization are economic reinvigoration, cultural branding, and international demonstration. In line with this, the Olympics will be used to introduce and influence global technology trajectories. In the government’s vision for the games, robot taxis transport tourists across the city, smart wheelchairs greet Paralympians at the airport, ubiquitous service robots serve customers in 20-plus languages, and foreigners, augmented with interactive translation technology, speak with the local population in Japanese.

Tokyo shows us what the process of state-controlled creation of a robotic city looks like.

Singapore
Singapore, on the other hand, is a “smart city.” Its government is experimenting with robots with a different objective: as physical extensions of existing systems to improve management and control of the city.

In Singapore, the techno-futuristic national narrative sees robots and automated systems as a “natural” extension of the existing smart urban ecosystem. This vision is unfolding through autonomous delivery robots (Singapore Post’s delivery drone trials in partnership with Airbus Helicopters) and EasyMile’s EZ10 driverless shuttle buses.

Meanwhile, Singapore hotels are employing state-subsidized service robots to clean rooms and deliver linen and supplies, and robots for early childhood education have been piloted to explore how robots might be used in pre-schools in the future. Health and social care is one of the fastest-growing industries for robots and automation, in Singapore and globally.

Dubai
Dubai is another emerging prototype of a state-controlled smart city. But rather than seeing robotization simply as a way to improve the running of systems, Dubai is intensively robotizing public services with the aim of creating the “happiest city on Earth.” Urban robot experimentation in Dubai reveals that authoritarian state regimes are finding innovative ways to use robots in public services, transportation, policing, and surveillance.

National governments compete through robotics to position themselves on the global politico-economic landscape, and to establish themselves as regional leaders. This was the thinking behind the city’s September 2017 test flight of a flying taxi developed by the German drone firm Volocopter—staged to “lead the Arab world in innovation.” Dubai’s objective is to automate 25% of its transport system by 2030.

It is also currently experimenting with Barcelona-based PAL Robotics’ humanoid police officer and an autonomous patrol vehicle from Singapore-based OTSAW. If these experiments are successful, the government has announced, it will robotize 25% of the police force by 2030.

While imaginary robots are fueling our imagination more than ever—from Ghost in the Shell to Blade Runner 2049—real-world robots make us rethink our urban lives.

These three urban robotic living labs—Tokyo, Singapore, Dubai—help us gauge what kind of future is being created, and by whom. From hyper-robotized Tokyo to ever-smarter Singapore and happy, crime-free Dubai, the three cases show that, no matter the context, robots are perceived as a means to achieve global futures rooted in a specific national imagination. Just like the films, they demonstrate the role of the state in envisioning and creating that future.

This article was originally published on The Conversation. Read the original article.

Image Credit: 3000ad / Shutterstock.com


#432421 Cheetah III robot preps for a role as a ...

If you were to ask someone to name a new technology that emerged from MIT in the 21st century, there's a good chance they would name the robotic cheetah. Developed by the MIT Department of Mechanical Engineering's Biomimetic Robotics Lab under the direction of Associate Professor Sangbae Kim, the quadruped MIT Cheetah has made headlines for its dynamic legged gait, speed, jumping ability, and biomimetic design.


#432352 Watch This Lifelike Robot Fish Swim ...

Earth’s oceans are having a rough go of it these days. On top of serving as a repository for millions of tons of plastic waste, they are being warmed by climate change, which is upsetting marine ecosystems in potentially irreversible ways.

Coral bleaching, for example, occurs when warming water temperatures or other stress factors cause coral to expel the algae that live in their tissues. The coral goes from lush and colorful to white and bare, and sometimes dies off altogether. This has a ripple effect on the surrounding ecosystem.

Warmer water temperatures have also prompted many species of fish to move closer to the north or south poles, disrupting fisheries and altering undersea environments.

To keep these issues in check or, better yet, try to address and improve them, it’s crucial for scientists to monitor what’s going on in the water. A paper released last week by a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new tool for studying marine life: a biomimetic soft robotic fish, dubbed SoFi, that can swim with, observe, and interact with real fish.

SoFi isn’t the first robotic fish to hit the water, but it is the most advanced robot of its kind. Here’s what sets it apart.

It swims in three dimensions
Up until now, most robotic fish could only swim forward at a given water depth, advancing at a steady speed. SoFi blows older models out of the water. It’s equipped with side fins called dive planes, which move to adjust its angle and allow it to turn, dive downward, or head closer to the surface. Its density and thus its buoyancy can also be adjusted by compressing or decompressing air in an inner compartment.
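To make the diving mechanism concrete, here is a minimal back-of-envelope sketch of buoyancy control in Python. Every number in it is an illustrative assumption, not a specification from the CSAIL paper; the point is only that a small change in displaced volume flips the robot between sinking, hovering, and rising.

```python
# Minimal sketch of depth control via an adjustable air chamber.
# All values are illustrative assumptions, not SoFi's actual specs.

RHO_SEAWATER = 1025.0  # kg/m^3
G = 9.81               # m/s^2

def net_buoyant_force(body_volume_m3, air_volume_m3, mass_kg):
    """Positive force pushes the robot up; negative makes it sink.

    Displaced volume is the rigid body plus the current expansion of
    the air chamber, so compressing the chamber makes the robot denser.
    """
    displaced = body_volume_m3 + air_volume_m3
    return (RHO_SEAWATER * displaced - mass_kg) * G

# Hypothetical robot: 2.5 L rigid hull, 2.665 kg, chamber 0.05 to 0.15 L.
for air_l in (0.05, 0.10, 0.15):
    force = net_buoyant_force(2.5e-3, air_l * 1e-3, 2.665)
    print(f"air chamber {air_l:.2f} L -> net force {force:+.2f} N")
# 0.05 L -> -0.50 N (dives); 0.10 L -> +0.00 N (hovers); 0.15 L -> +0.50 N (rises)
```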

“To our knowledge, this is the first robotic fish that can swim untethered in three dimensions for extended periods of time,” said CSAIL PhD candidate Robert Katzschmann, lead author of the study. “We are excited about the possibility of being able to use a system like this to get closer to marine life than humans can get on their own.”

The team took SoFi to the Rainbow Reef in Fiji to test out its swimming skills, and the robo fish didn’t disappoint—it was able to swim at depths of over 50 feet for 40 continuous minutes. What keeps it swimming? A lithium polymer battery, just like the ones that power our smartphones.

It’s remote-controlled… by Super Nintendo
SoFi has sensors to help it see what’s around it, but it doesn’t have a mind of its own yet. Rather, it’s controlled by a nearby scuba-diving human, who can send it commands related to speed, diving, and turning. The best part? The commands come from an actual repurposed (and waterproofed) Super Nintendo controller. What’s not to love?

Image Credit: MIT CSAIL
Previous robotic fish built by this team had to be tethered to a boat, so the fact that SoFi can swim independently is a pretty big deal. Communication between the fish and the diver was most successful when the two were less than 10 meters apart.

It looks real, sort of
SoFi’s side fins are a bit stiff, and its camera may not pass for natural—but otherwise, it looks a lot like a real fish. This is mostly thanks to the way its tail moves; a motor pumps water between two chambers in the tail, and as one chamber fills, the tail bends towards that side, then towards the other side as water is pumped into the other chamber. The result is a motion that closely mimics the way fish swim. Not only that, the hydraulic system can change the water flow to get different tail movements that let SoFi swim at varying speeds; its average speed is around half a body length (21.7 centimeters) per second.
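As a toy model of that mechanism (with assumed parameters, not values from the paper), the alternating chamber pressure can be treated as a sinusoidal tail deflection whose beat frequency sets the cruising speed:

```python
import math

# Toy model of SoFi's hydraulic tail: water pumped between two chambers
# bends the tail back and forth; pumping faster raises the beat
# frequency and, in this crude model, the speed. The deflection
# amplitude and distance-per-beat figures are assumptions.

MAX_DEFLECTION_DEG = 30.0  # assumed peak tail angle

def tail_angle_deg(t_s, beat_hz):
    """Tail deflection at time t_s; the sign says which chamber is fuller."""
    return MAX_DEFLECTION_DEG * math.sin(2 * math.pi * beat_hz * t_s)

def cruise_speed_cm_s(beat_hz, cm_per_beat=15.5):
    """Crude kinematics: speed = distance covered per beat x beat frequency."""
    return beat_hz * cm_per_beat

# At an assumed 1.4 Hz beat this reproduces the reported average of
# ~21.7 cm/s, i.e. half a body length per second.
print(f"{cruise_speed_cm_s(1.4):.1f} cm/s")
```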

Besides looking neat, SoFi’s lifelike appearance serves a purpose: it helps the robot blend in with marine life rather than scare real fish away, so it can get close enough to observe them.

“A robot like this can help explore the reef more closely than current robots, both because it can get closer more safely for the reef and because it can be better accepted by the marine species,” said Cecilia Laschi, a biorobotics professor at the Sant’Anna School of Advanced Studies in Pisa, Italy.

Just keep swimming
It sounds like this fish is nothing short of a regular Nemo. But its creators aren’t quite finished yet.

They’d like SoFi to be able to swim faster, so they’ll work on improving the robo fish’s pump system and streamlining its body and tail design. They also plan to tweak SoFi’s camera to help it follow real fish.

“We view SoFi as a first step toward developing almost an underwater observatory of sorts,” said CSAIL director Daniela Rus. “It has the potential to be a new type of tool for ocean exploration and to open up new avenues for uncovering the mysteries of marine life.”

The CSAIL team plans to make a whole school of SoFis to help biologists learn more about how marine life is reacting to environmental changes.

Image Credit: MIT CSAIL


#432331 $10 million XPRIZE Aims for Robot ...

Ever wished you could be in two places at the same time? The XPRIZE Foundation wants to make that a reality with a $10 million competition to build robot avatars that can be controlled from at least 100 kilometers away.

The competition was announced by XPRIZE founder Peter Diamandis at the SXSW conference in Austin last week, with an ambitious timeline of awarding the grand prize by October 2021. Teams have until October 31st to sign up, and they need to submit detailed plans to a panel of judges by the end of next January.

The prize, sponsored by Japanese airline ANA, gives contestants little guidance on how to solve the challenge, other than saying that solutions must let users see, hear, feel, and interact with the robot’s environment, as well as with the people in it.

XPRIZE has also not revealed details of what kind of tasks the robots will be expected to complete, though they’ve said tasks will range from “simple” to “complex,” and it should be possible for an untrained operator to use them.

That’s a hugely ambitious goal, and it’s likely to require teams to combine multiple emerging technologies, from humanoid robotics to virtual reality, high-bandwidth communications, and high-resolution haptics.

If any of the teams succeed, the technology could have myriad applications, from letting emergency responders enter areas too hazardous for humans to helping people care for relatives who live far away or even just allowing tourists to visit other parts of the world without the jet lag.

“Our ability to physically experience another geographic location, or to provide on-the-ground assistance where needed, is limited by cost and the simple availability of time,” Diamandis said in a statement.

“The ANA Avatar XPRIZE can enable creation of an audacious alternative that could bypass these limitations, allowing us to more rapidly and efficiently distribute skill and hands-on expertise to distant geographic locations where they are needed, bridging the gap between distance, time, and cultures,” he added.

Interestingly, the technology may help bypass an enduring handbrake on the widespread use of robotics: autonomy. By having a human in the loop, you don’t need nearly as much artificial intelligence analyzing sensory input and making decisions.

Robotics software still has to do a lot more than high-level planning and strategizing, though. While a human moves their limbs instinctively, without consciously thinking about which muscles to activate, controlling and coordinating a robot’s components requires sophisticated algorithms.

The DARPA Robotics Challenge demonstrated just how hard it was to get human-shaped robots to do tasks humans would find simple, such as opening doors, climbing steps, and even just walking. These robots were supposedly semi-autonomous, but on many tasks they were essentially tele-operated, and the results suggested autonomy isn’t the only problem.

There’s also the issue of powering these devices. You may have noticed that in a lot of the slick web videos of humanoid robots doing cool things, the machine is attached to the roof by a large cable. That’s because they suck up huge amounts of power.

Possibly the most advanced humanoid robot—Boston Dynamics’ Atlas—has a battery, but it can only run for about an hour. That might be fine for some applications, but you don’t want it running out of juice halfway through rescuing someone from a mine shaft.
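A quick sanity check on that one-hour figure, using assumed numbers rather than Boston Dynamics’ official specifications:

```python
# Back-of-envelope runtime estimate. Both values are assumptions chosen
# for illustration, not published specs.
battery_wh = 3700    # assume a roughly 3.7 kWh onboard pack
avg_draw_w = 3500    # assume ~3.5 kW average draw for dynamic locomotion
print(f"runtime ~ {battery_wh / avg_draw_w:.1f} hours")  # ~1.1 hours
```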

When it comes to the link between the robot and its human user, some of the technology is probably not that much of a stretch. Virtual reality headsets can create immersive audio-visual environments, and a number of companies are working on advanced haptic suits that will let people “feel” virtual environments.

Motion tracking technology may be more complicated. While even consumer-grade devices can track people’s movements with high accuracy, the user will probably need to don something more like an exoskeleton that can both pick up motion and provide mechanical resistance, so that when the robot bumps into an immovable object, the user stops dead too.
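A minimal sketch of what that coupling might look like in software, assuming a simple proportional force-reflection scheme (real exoskeleton controllers run kilohertz loops with far richer dynamics; every name and number here is hypothetical):

```python
# Toy force-reflection loop: forces sensed at the robot's hand are
# mirrored as resistance in the operator's exoskeleton, capped for safety.

def reflected_force_n(contact_force_n: float,
                      gain: float = 1.0,
                      safety_cap_n: float = 80.0) -> float:
    """Resistance, in newtons, that the exoskeleton applies to the user."""
    return min(gain * contact_force_n, safety_cap_n)

# As the robot arm pushes into a wall, the sensed force ramps up and the
# user feels a correspondingly harder stop, up to the safety cap.
for sensed in (0.0, 20.0, 60.0, 150.0):
    print(f"robot senses {sensed:5.1f} N -> user feels "
          f"{reflected_force_n(sensed):5.1f} N")
```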

How hard all of this will be is also dependent on how the competition ultimately defines subjective terms like “feel” and “interact.” Will the user need to be able to feel a gentle breeze on the robot’s cheek or be able to paint a watercolor? Or will simply having the ability to distinguish a hard object from a soft one or shake someone’s hand be enough?

Whatever fidelity they decide on, the approach will require huge amounts of sensory and control data to be transmitted over large distances, most likely wirelessly, fast and reliably enough to avoid lag or interruptions. Fortunately, 5G networks begin launching this year, promising speeds up to 10 gigabits per second and very low latency, so this part of the problem may well be solved by 2021.
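A crude budget (using assumed stream parameters, not anything specified by XPRIZE) suggests that raw bandwidth is the easier half of the problem:

```python
# Back-of-envelope bandwidth budget for an avatar link. Every number
# below is an assumption chosen for illustration.
stereo_video_raw = 2 * 1920 * 1080 * 24 * 30  # two 1080p eyes, 24-bit, 30 fps
video = stereo_video_raw / 100                # assume ~100:1 codec compression
haptics = 1000 * 32 * 1000                    # 1,000 sensels, 32-bit, 1 kHz
audio = 2 * 48000 * 16                        # stereo 48 kHz, 16-bit PCM
total_mbit_s = (video + haptics + audio) / 1e6
print(f"~{total_mbit_s:.0f} Mbit/s")          # ~63 Mbit/s, far below 10 Gbit/s
```

Even with generous assumptions the data rate is modest; the harder requirement is keeping round-trip latency low enough for stable, convincing touch feedback.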

And it’s worth remembering there have already been some tentative attempts at building robotic avatars. Telepresence robots have solved the seeing, hearing, and some of the interacting problems, and MIT has already used virtual reality to control robots to carry out complex manipulation tasks.

South Korean company Hankook Mirae Technology has also unveiled a 13-foot-tall robotic suit straight out of a sci-fi movie that appears to have made some headway with the motion tracking problem, albeit with a human inside the robot. Toyota’s T-HR3 does the same, but with the human controlling the robot from a “Master Maneuvering System” that marries motion tracking with VR.

Combining all of these capabilities into a single machine will certainly prove challenging. But if one of the teams pulls it off, you may be able to tick off trips to the Seven Wonders of the World without ever leaving your house.

Image Credit: ANA Avatar XPRIZE


#432051 What Roboticists Are Learning From Early ...

You might not have heard of Hanson Robotics, but if you’re reading this, you’ve probably seen their work. They were the company behind Sophia, the lifelike humanoid avatar that’s made dozens of high-profile media appearances. Before that, they were the company behind that strange-looking robot that seemed a bit like Asimo with Albert Einstein’s head—or maybe you saw BINA48, who was interviewed for the New York Times in 2010 and featured in Jon Ronson’s books. For the sci-fi aficionados amongst you, they even made a replica of legendary author Philip K. Dick, best remembered for having books with titles like Do Androids Dream of Electric Sheep? turned into films with titles like Blade Runner.

Hanson Robotics, in other words, with their proprietary brand of life-like humanoid robots, have been playing the same game for a while. Sometimes it can be a frustrating game to watch. Anyone who gives the robot the slightest bit of thought will realize that this is essentially a chatbot, with all the limitations this implies. Indeed, even in that New York Times interview with BINA48, author Amy Harmon describes it as a frustrating experience—with “rare (but invariably thrilling) moments of coherence.” This sensation will be familiar to anyone who’s conversed with a chatbot that has a few clever responses.

The glossy surface belies the lack of real intelligence underneath; it seems, at first glance, like a much more advanced machine than it is. Peeling back that surface layer—at least for a Hanson robot—means you’re peeling back Frubber. This proprietary substance—short for “Flesh Rubber,” which is slightly nightmarish—is surprisingly complicated. Up to thirty motors are required just to control the face; they manipulate liquid cells in order to make the skin soft, malleable, and capable of a range of different emotional expressions.

A quick combinatorial glance at the 30-plus motors suggests millions of possible expression combinations; researchers identify 62 that they consider “human-like” in Sophia, although not everyone agrees with this assessment. Arguably, the technical expertise that went into reconstructing the range of human facial expressions far exceeds that behind the more simplistic chat engine the robots use—although it’s the chat engine that inflates punters’ expectations, via a few pre-programmed questions, in an interview.
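The arithmetic behind that combinatorial glance is easy to sketch. The two-position quantization below is an assumption (the motors are really continuous), but it shows that “millions” is, if anything, conservative:

```python
# If each of ~30 face motors could take just 2 distinguishable
# positions, the expression space would already exceed a billion
# configurations, of which researchers flag only 62 as "human-like."
motors = 30
positions_per_motor = 2  # assumed coarse quantization
print(positions_per_motor ** motors)  # 1073741824
```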

Hanson Robotics’ belief is that, ultimately, a lot of how humans will eventually relate to robots is going to depend on their faces and voices, as well as on what they’re saying. “The perception of identity is so intimately bound up with the perception of the human form,” says David Hanson, company founder.

Yet anyone attempting to design a robot that won’t terrify people has to contend with the uncanny valley—that strange blend of unease and revulsion we feel when something appears almost, but not quite, human. Between cartoonish humanoids and genuine humans lies what has often been a no-go zone in robotic aesthetics.

The uncanny valley concept originated with roboticist Masahiro Mori, who argued that roboticists should avoid trying to replicate humans exactly: anything that wasn’t perfect, but merely very good, would elicit an eerie feeling, so shirking the challenge entirely was the only way to avoid the valley. The task is probably made harder by endless streams of articles about AI taking over the world that inexplicably conflate AI with killer humanoid Terminators—which aren’t particularly likely to exist (although maybe it’s best not to push robots around too much).

The idea behind this realm of psychological horror is fairly simple, cognitively speaking.

We know how to categorize things that are unambiguously human or non-human, even when they’re designed to interact with us; consider the popularity of Aibo, Jibo, and other robots that don’t try to resemble humans. But something that resembles a human without being quite right is bound to evoke a fear response, in the same way slightly distorted music or slightly rearranged furniture in your home would. The creature simply doesn’t fit.

You may well reject the idea of the uncanny valley entirely. David Hanson, naturally, is not a fan. In the paper “Upending the Uncanny Valley,” he argues that great art forms have often resembled humans, but the ultimate goal for humanoid roboticists is probably to create robots we can relate to as something closer to humans than to works of art.

Meanwhile, Hanson and other scientists produce competing experiments to either demonstrate that the uncanny valley is overhyped, or to confirm it exists and probe its edges.

The classic experiment involves gradually morphing a cartoon face into a human face, via some robotic-seeming intermediaries—yet it’s in movement that the real horror of the almost-human often lies. Hanson has argued that incorporating cartoonish features may help—and, sometimes, that the uncanny valley is a generational thing which will melt away when new generations grow used to the quirks of robots. Although Hanson might dispute the severity of this effect, it’s clearly what he’s trying to avoid with each new iteration.
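For a sense of how such stimuli are built, here is a minimal sketch of a cartoon-to-human morph as a simple cross-dissolve. Real studies use feature-aligned morphs rather than a plain pixel blend, and the file names are placeholders:

```python
from PIL import Image

# Generate a 9-step morph sequence from a cartoon face to a human face.
cartoon = Image.open("cartoon_face.png").convert("RGB")
human = Image.open("human_face.png").convert("RGB").resize(cartoon.size)

steps = 9
for i in range(steps):
    alpha = i / (steps - 1)  # 0.0 = pure cartoon, 1.0 = pure human
    Image.blend(cartoon, human, alpha).save(f"morph_{i:02d}.png")
```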

Hiroshi Ishiguro is the latest of the roboticists to have dived headlong into the valley.

Building on the work of pioneers like Hanson, those who study human-robot interaction are pushing at the boundaries of robotics—but also of social science. It’s usually difficult to simulate what you don’t understand, and there’s still an awful lot we don’t understand about how we interpret the constant streams of non-verbal information that flow when you interact with people in the flesh.

Ishiguro has taken this imitation of human forms to extreme levels. Not only has he monitored and logged on videotape the physical movements people make, but some of his robots are based on replicas of real people; the Repliee series began with a ‘replicant’ of his daughter, which involved making a rubber replica—a silicone cast—of her entire body. Later experiments focused on creating Geminoid, a replica of Ishiguro himself.

As Ishiguro aged, he realized that it would be more effective to resemble his replica through cosmetic surgery rather than by continually creating new casts of his face, each with more lines than the last. “I decided not to get old anymore,” Ishiguro said.

We love to throw around abstract concepts and ideas: humans being replaced by machines, cared for by machines, getting intimate with machines, or even merging themselves with machines. You can take an idea like that, hold it in your hand, and examine it—dispassionately, if not without interest. But there’s a gulf between thinking about it and living in a world where human-robot interaction is not a field of academic research, but a day-to-day reality.

As the scientists studying human-robot interaction develop their robots, their replicas, and their experiments, they are making some of the first forays into that world. We might all be living there someday. Understanding ourselves—decrypting the origins of empathy and love—may be the greatest challenge we face. That is, if we want to avoid the valley.

Image Credit: Anton Gvozdikov / Shutterstock.com
