#437905 New Deep Learning Method Helps Robots ...
One of the biggest things standing in the way of the robot revolution is robots' inability to adapt. That may be about to change, though, thanks to a new approach that blends pre-learned skills on the fly to tackle new challenges.
Put a robot in a tightly-controlled environment and it can quickly surpass human performance at complex tasks, from building cars to playing table tennis. But throw these machines a curve ball and they’re in trouble—just check out this compilation of some of the world’s most advanced robots coming unstuck in the face of notoriously challenging obstacles like sand, steps, and doorways.
The reason robots tend to be so fragile is that the algorithms that control them are often manually designed. If they encounter a situation the designer didn’t think of, which is almost inevitable in the chaotic real world, then they simply don’t have the tools to react.
Rapid advances in AI have provided a potential workaround by letting robots learn how to carry out tasks instead of relying on hand-coded instructions. A particularly promising approach is deep reinforcement learning, where the robot interacts with its environment through a process of trial-and-error and is rewarded for carrying out the correct actions. Over many repetitions it can use this feedback to learn how to accomplish the task at hand.
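To make that trial-and-error loop concrete, here is a deliberately simplified sketch on a toy one-step task. It is not the setup used in the research described here: real deep reinforcement learning for robots replaces the single Gaussian "policy" below with a neural network and the toy reward with a physics simulation, and every name and number in this snippet is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
target = 0.7                 # hypothetical "correct" joint command for a toy task
mean, sigma, lr = 0.0, 0.2, 0.05
baseline = 0.0               # running average of reward, used to reduce variance

for episode in range(3000):
    action = rng.normal(mean, sigma)     # trial: sample an action from the policy
    reward = -(action - target) ** 2     # feedback: the environment scores the action
    baseline += 0.05 * (reward - baseline)
    # REINFORCE-style update: nudge the policy toward better-rewarded actions.
    mean += lr * (reward - baseline) * (action - mean) / sigma**2

print(f"learned action: {mean:.2f} (target {target})")
```

Over many repetitions the sampled actions that earn higher reward pull the policy toward the target, which is the same feedback principle, scaled up enormously, that lets a robot learn a motor skill.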
But the approach requires huge amounts of data to solve even simple tasks. And most of the things we would want a robot to do are actually composed of many smaller ones: delivering a parcel, for instance, involves learning how to pick an object up, how to walk, how to navigate, and how to pass an object to someone else, among other things.
Training all these sub-tasks simultaneously is hugely complex and far beyond the capabilities of most current AI systems, so many experiments so far have focused on narrow skills. Some have tried to train AI on multiple skills separately and then use an overarching system to flip between these expert sub-systems, but these approaches still can’t adapt to completely new challenges.
Building off this research, though, scientists have now created a new AI system that can blend together expert sub-systems, each specialized for a specific task. In a paper in Science Robotics, they explain how this allows a four-legged robot to improvise new skills and adapt to unfamiliar challenges in real time.
The technique, dubbed multi-expert learning architecture (MELA), relies on a two-stage training approach. First the researchers used a computer simulation to train two neural networks to carry out two separate tasks: trotting and recovering from a fall.
They then used the models these two networks learned as seeds for eight other neural networks specialized for more specific motor skills, like rolling over or turning left or right. The eight “expert networks” were trained simultaneously along with a “gating network,” which learns how to combine these experts to solve challenges.
Because the gating network synthesizes the expert networks rather than switching them on sequentially, MELA is able to come up with blends of different experts that allow it to tackle problems none could solve alone.
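Conceptually, this blending works like a mixture of experts. The numpy sketch below is only a schematic of that idea: the "networks" are plain linear maps, the dimensions are made up, and in the actual paper the experts and the gating network are deep networks trained jointly with reinforcement learning.

```python
import numpy as np

rng = np.random.default_rng(42)
STATE_DIM, ACTION_DIM, NUM_EXPERTS = 12, 8, 8   # illustrative sizes, not from the paper

# Stand-in "expert networks": each maps the robot's state to joint commands.
experts = [rng.standard_normal((STATE_DIM, ACTION_DIM)) * 0.1 for _ in range(NUM_EXPERTS)]

# Stand-in "gating network": maps the state to one mixing coefficient per expert.
gate = rng.standard_normal((STATE_DIM, NUM_EXPERTS)) * 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def blended_step(state):
    """Synthesize a controller by blending the experts, then apply it to the state."""
    coeffs = softmax(state @ gate)                          # how much to trust each expert now
    blended = sum(c * w for c, w in zip(coeffs, experts))   # weighted blend, not a hard switch
    return state @ blended                                  # joint commands for this time step

state = rng.standard_normal(STATE_DIM)   # e.g. joint angles, velocities, body orientation
print(blended_step(state))
```

Because the mixing coefficients are recomputed from the state at every step, the blend can shift smoothly as conditions change, which is what lets the combined controller handle situations no single expert was trained for.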
The authors liken the approach to teaching people to play soccer. You start out by getting them to do drills on individual skills like dribbling, passing, or shooting. Once they've mastered those, they can then intelligently combine them to deal with more dynamic situations in a real game.
After training the algorithm in simulation, the researchers uploaded it to a four-legged robot and subjected it to a battery of tests, both indoors and outdoors. The robot adapted quickly to tricky surfaces like gravel and pebbles, and recovered from being repeatedly pushed over before continuing on its way.
There’s still some way to go before the approach could be adapted for real-world commercially useful robots. For a start, MELA currently isn’t able to integrate visual perception or a sense of touch; it simply relies on feedback from the robot’s joints to tell it what’s going on around it. The more tasks you ask the robot to master, the more complex and time-consuming the training will get.
Nonetheless, the new approach points towards a promising way to make multi-skilled robots become more than the sum of their parts. As much fun as it is, it seems like laughing at compilations of clumsy robots may soon be a thing of the past.
Image Credit: Yang et al., Science Robotics
#437901 How computer simulation will accelerate ...
Jeffrey C. Trinkle has always had a keen interest in robot hands. And, though it may be a long way off, Trinkle, who has studied robotics for more than thirty years, says he's most compelled by the prospect of robots performing “dexterous manipulation” at the level of a human “or beyond.”
#437896 Solar-based Electronic Skin Generates ...
Replicating the human sense of touch is complicated: electronic skins need to be flexible, stretchable, and sensitive to temperature, pressure, and texture, and they need to be able to read biological data and provide electronic readouts. Powering electronic skin for continuous, real-time use is therefore a major challenge.
To address this, researchers from the University of Glasgow have developed an energy-generating e-skin made of miniaturized solar cells, without dedicated touch sensors. The solar cells not only generate their own power (and some surplus) but also provide tactile capabilities for touch and proximity sensing. An early-view paper of their findings was published in IEEE Transactions on Robotics.
When exposed to a light source, the solar cells on the e-skin generate energy. If a cell is shadowed by an approaching object, the intensity of the light, and therefore the energy generated, decreases, dropping to zero when the cell makes contact with the object, confirming touch. In proximity mode, the light intensity indicates how far the object is from the cell. “In real time, you can then compare the light intensity…and after calibration find out the distances,” says Ravinder Dahiya of the Bendable Electronics and Sensing Technologies (BEST) Group, James Watt School of Engineering, University of Glasgow, where the study was carried out. For better results, the team paired the solar cells with infrared LEDs for proximity sensing.
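The calibrate-then-look-up logic described above can be sketched in a few lines. Everything in this snippet is an illustrative assumption rather than data from the paper: the calibration readings are invented, and a real system would use the team's own calibration procedure and sensor electronics.

```python
import numpy as np

# Hypothetical calibration: record the cell's output (arbitrary units) at known distances (cm).
cal_dist = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
cal_out  = np.array([0.05, 0.18, 0.42, 0.71, 0.93])   # made-up readings for illustration

def estimate_distance(measured):
    if measured <= 0.0:                  # output drops to zero: the object is touching the cell
        return 0.0
    # Map the measured intensity back to a distance using the calibration curve.
    return float(np.interp(measured, cal_out, cal_dist))

print(estimate_distance(0.30))  # object somewhere between 2 and 4 cm away
print(estimate_distance(0.0))   # contact confirmed
```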
To demonstrate their concept, the researchers wrapped a generic 3D-printed robotic hand in their solar skin, which was then recorded interacting with its environment. The proof-of-concept tests showed an energy surplus of 383.3 mW from the palm of the robotic hand. “The eSkin could generate more than 100 W if present over the whole body area,” they reported in their paper.
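As a rough sanity check on how the reported palm surplus relates to the whole-body figure, one can scale by surface area. The palm and body areas below are assumptions chosen for illustration, not values taken from the paper.

```python
# Back-of-the-envelope scaling of the reported figures; the areas are assumptions.
palm_surplus_w = 0.3833          # reported surplus from the palm: 383.3 mW
palm_area_m2   = 0.007           # assume roughly 70 cm^2 of covered palm area
body_area_m2   = 1.8             # typical adult body surface area (assumption)

power_density = palm_surplus_w / palm_area_m2        # W per m^2 of e-skin
whole_body_w  = power_density * body_area_m2
print(f"{whole_body_w:.0f} W")   # on the order of 100 W, consistent with the claim
```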
“If you look at autonomous, battery-powered robots, putting an electronic skin [that] is consuming energy is a big problem because then it leads to reduced operational time,” says Dahiya. “On the other hand, if you have a skin which generates energy, then…it improves the operational time because you can continue to charge [during operation].” In essence, he says, they turned a challenge (how to power the skin's large surface area) into an opportunity, by making that surface an energy-generating resource.
Dahiya envisages numerous applications for BEST’s innovative e-skin, given its material-integrated sensing capabilities, apart from the obvious use in robotics. For instance, in prosthetics: “[As] we are using [a] solar cell as a touch sensor itself…we are also [making it] less bulky than other electronic skins.” This, he adds, will help create prosthetics of optimal weight and size, making life easier for prosthetics users. “If you look at electronic skin research, the real action starts after it makes contact… Solar skin is a step ahead, because it will start to work when the object is approaching…[and] have more time to prepare for action.” This could effectively reduce the time lag that is often seen in brain–computer interfaces.
There are also possibilities in the automotive sector, particularly in electric and interactive vehicles. A car covered with solar e-skin, because of its proximity-sensing capabilities, would be able to “see” an approaching obstacle or a person. It isn’t “seeing” in the biological sense, Dahiya clarifies, but from the point of view of a machine. This can be integrated with other objects, not just cars, for a variety of uses. “Gestures can be recognized as well…[which] could be used for gesture-based control…in gaming or in other sectors.”
In the lab, tests were conducted with a single source of white light at 650 lux, but Dahiya sees interesting possibilities in working with multiple light sources that the e-skin could differentiate between. “We are exploring different AI techniques [for that],” he says, “processing the data in an innovative way [so] that we can identify the directions of the light sources as well as the object.”
The BEST team’s achievement brings us closer to a flexible, self-powered, cost-effective electronic skin that can touch as well as “see.” At the moment, however, there are still some challenges. One of them is flexibility. In their prototype, they used commercial solar cells made of amorphous silicon, each 1 cm × 1 cm. “They are not flexible, but they are integrated on a flexible substrate,” Dahiya says. “We are currently exploring nanowire-based solar cells…[with which] we hope to achieve good performance in terms of energy as well as sensing functionality.” Another shortcoming is what Dahiya calls “the integration challenge”: how to make the solar skin work with different materials.