#436526 Not Bot, Not Beast: Scientists Create ...
A remarkable combination of artificial intelligence (AI) and biology has produced the world’s first “living robots.”
This week, a research team of roboticists and scientists published their recipe for making a new lifeform called xenobots from stem cells. The term “xeno” comes from the frog cells (Xenopus laevis) used to make them.
One of the researchers described the creation as “neither a traditional robot nor a known species of animal,” but a “new class of artifact: a living, programmable organism.”
Xenobots are less than 1 millimeter long and made of 500-1,000 living cells. They have various simple shapes, including some with squat “legs.” They can propel themselves in linear or circular directions, join together to act collectively, and move small objects. Using their own cellular energy, they can live up to 10 days.
While these “reconfigurable biomachines” could vastly improve human, animal, and environmental health, they raise legal and ethical concerns.
Strange New ‘Creature’
To make xenobots, the research team used a supercomputer to test thousands of random designs of simple living things that could perform certain tasks.
The computer was programmed with an AI “evolutionary algorithm” to predict which organisms would likely perform useful behaviors, such as moving towards a target.
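The article doesn’t spell out how such an algorithm works, but an evolutionary design loop of this general shape can be sketched in a few lines. Everything below is illustrative: the voxel-grid body encoding, the toy fitness function (rewarding contractile cells on the underside, as a crude stand-in for locomotion), and all parameters are assumptions, not the team’s actual pipeline, which evaluated designs in a physics simulation.

```python
import random

random.seed(0)

GRID = 4               # candidate bodies are 4x4x4 voxel grids (illustrative)
CELL_TYPES = (0, 1, 2) # 0 = empty, 1 = passive skin cell, 2 = contractile heart cell

def random_design():
    return [random.choice(CELL_TYPES) for _ in range(GRID ** 3)]

def fitness(design):
    # Toy stand-in for the physics simulation: reward designs whose
    # contractile cells sit in the bottom layer (crude proxy for locomotion).
    bottom = design[: GRID * GRID]
    return sum(1 for c in bottom if c == 2)

def mutate(design, rate=0.05):
    # Randomly re-roll a small fraction of voxels.
    return [random.choice(CELL_TYPES) if random.random() < rate else c
            for c in design]

def evolve(pop_size=50, generations=30):
    population = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half unchanged (elitism), refill with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

In the real system, the surviving virtual designs were the ones handed to the biologists for assembly from living cells.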
After the selection of the most promising designs, the scientists attempted to replicate the virtual models with frog skin or heart cells, which were manually joined using microsurgery tools. The heart cells in these bespoke assemblies contract and relax, giving the organisms motion.
The creation of xenobots is groundbreaking. Despite being described as “programmable living robots,” they are actually completely organic and made of living tissue. The term “robot” has been used because xenobots can be configured into different forms and shapes, and “programmed” to target certain objects, which they then unwittingly seek. They can also repair themselves after being damaged.
Possible Applications
Xenobots may have great value. Some speculate they could be used to clean our polluted oceans by collecting microplastics. Similarly, they may be used to enter confined or dangerous areas to scavenge toxins or radioactive materials. Xenobots designed with carefully shaped “pouches” might be able to carry drugs into human bodies.
Future versions may be built from a patient’s own cells to repair tissue or target cancers. Being biodegradable, xenobots would have an edge on technologies made of plastic or metal.
Further development of biological “robots” could accelerate our understanding of living and robotic systems. Life is incredibly complex, so manipulating living things could reveal some of life’s mysteries—and improve our use of AI.
Legal and Ethical Questions
Conversely, xenobots raise legal and ethical concerns. In the same way they could help target cancers, they could also be used to hijack life functions for malevolent purposes.
Some argue artificially making living things is unnatural, hubristic, or involves “playing God.” A more compelling concern is that of unintended or malicious use, as we have seen with technologies in fields including nuclear physics, chemistry, biology and AI. For instance, xenobots might be used for hostile biological purposes prohibited under international law.
More advanced future xenobots, especially ones that live longer and reproduce, could potentially “malfunction,” going rogue and out-competing other species.
For complex tasks, xenobots may need sensory and nervous systems, possibly resulting in their sentience. A sentient programmed organism would raise additional ethical questions. Last year, the revival of a disembodied pig brain elicited concerns about different species’ suffering.
Managing Risks
The xenobot’s creators have rightly acknowledged the need for discussion around the ethics of their creation. The 2018 scandal over using CRISPR (which allows genes to be edited or introduced into an organism) may provide an instructive lesson here. While the experiment’s goal was to reduce the twin baby girls’ susceptibility to HIV, the associated risks caused ethical dismay. The scientist in question is now in prison.
When CRISPR became widely available, some experts called for a moratorium on heritable genome editing. Others argued the benefits outweighed the risks.
While each new technology should be considered impartially and based on its merits, giving life to xenobots raises certain significant questions:
Should xenobots have biological kill-switches in case they go rogue?
Who should decide who can access and control them?
What if “homemade” xenobots become possible? Should there be a moratorium until regulatory frameworks are established? How much regulation is required?
Lessons learned in the past from advances in other areas of science could help manage future risks, while reaping the possible benefits.
Long Road Here, Long Road Ahead
The creation of xenobots had various biological and robotic precedents. Genetic engineering has created genetically modified mice that fluoresce under UV light.
Designer microbes can produce drugs and food ingredients that may eventually replace animal agriculture. In 2012, scientists created an artificial jellyfish called a “medusoid” from rat cells.
Robotics is also flourishing. Nanobots can monitor people’s blood sugar levels and may eventually be able to clear clogged arteries. Robots can incorporate living matter, which we witnessed when engineers and biologists created a sting-ray robot powered by light-activated cells.
In the coming years, we are sure to see more creations like xenobots that evoke both wonder and due concern. And when we do, it is important we remain both open-minded and critical.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Photo by Joel Filipe on Unsplash
#436186 Video Friday: Invasion of the Mini ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.
There will be a Mini-Cheetah Workshop (sponsored by Naver Labs) a year from now at IROS 2020 in Las Vegas. Mini-Cheetahs for everyone!
That’s just a rendering, of course, but this isn’t:
[ MCW ]
I was like 95 percent sure that the Urban Circuit of the DARPA SubT Challenge was going to be in something very subway station-y. Oops!
In the Subterranean (SubT) Challenge, teams deploy autonomous ground and aerial systems to attempt to map, identify, and report artifacts along competition courses in underground environments. The artifacts represent items a first responder or service member may encounter in unknown underground sites. This video provides a preview of the Urban Circuit event location. The Urban Circuit is scheduled for February 18-27, 2020, at Satsop Business Park west of Olympia, Washington.
[ SubT ]
Researchers at SEAS and the Wyss Institute for Biologically Inspired Engineering have developed a resilient RoboBee powered by soft artificial muscles that can crash into walls, fall onto the floor, and collide with other RoboBees without being damaged. It is the first microrobot powered by soft actuators to achieve controlled flight.
To solve the problem of power density, the researchers built upon the electrically-driven soft actuators developed in the lab of David Clarke, the Extended Tarr Family Professor of Materials. These soft actuators are made using dielectric elastomers, soft materials with good insulating properties, that deform when an electric field is applied. By improving the electrode conductivity, the researchers were able to operate the actuator at 500 Hertz, on par with the rigid actuators used previously in similar robots.
Next, the researchers aim to increase the efficiency of the soft-powered robot, which still lags far behind more traditional flying robots.
[ Harvard ]
We present a system for fast and robust handovers with a robot character, together with a user study investigating the effect of robot speed and reaction time on perceived interaction quality. The system can match and exceed human speeds and confirms that users prefer human-level timing.
In a 3×3 user study, we vary the speed of the robot and add variable sensorimotor delays. We evaluate the social perception of the robot using the Robot Social Attribute Scale (RoSAS). Inclusion of a small delay, mimicking the delay of the human sensorimotor system, leads to an improvement in perceived qualities over both no delay and long delay conditions. Specifically, with no delay the robot is perceived as more discomforting and with a long delay, it is perceived as less warm.
[ Disney Research ]
When cars are autonomous, they’re not going to be able to pump themselves full of gas. Or, more likely, electrons. Kuka has the solution.
[ Kuka ]
This looks like fun, right?
[ Robocoaster ]
NASA is leading the way in the use of On-orbit Servicing, Assembly, and Manufacturing to enable large, persistent, upgradable, and maintainable spacecraft. This video was developed by the Advanced Concepts Lab (ACL) at NASA Langley Research Center.
[ NASA ]
The noisiest workshop by far at Humanoids last month was Musical Interactions With Humanoids, the end result of which was this:
[ Workshop ]
IROS is an IEEE event, and in furthering the IEEE mission to benefit humanity through technological innovation, IROS is doing a great job. But don’t take it from us – we are joined by IEEE President-Elect Professor Toshio Fukuda to find out a bit more about the impact events like IROS can have, as well as examine some of the issues around intelligent robotics and systems – from privacy to transparency of the systems at play.
[ IROS ]
Speaking of IROS, we hope you’ve been enjoying our coverage. We have already featured Harvard’s strange sea-urchin-inspired robot and a Japanese quadruped that can climb vertical ladders, with more stories to come over the next several weeks.
In the meantime, enjoy these 10 videos from the conference (as usual, we’re including the title, authors, and abstract for each—if you’d like more details about any of these projects, let us know and we’ll find out more for you).
“A Passive Closing, Tendon Driven, Adaptive Robot Hand for Ultra-Fast, Aerial Grasping and Perching,” by Andrew McLaren, Zak Fitzgerald, Geng Gao, and Minas Liarokapis from the University of Auckland, New Zealand.
Current grasping methods for aerial vehicles are slow, inaccurate and they cannot adapt to any target object. Thus, they do not allow for on-the-fly, ultra-fast grasping. In this paper, we present a passive closing, adaptive robot hand design that offers ultra-fast, aerial grasping for a wide range of everyday objects. We investigate alternative uses of structural compliance for the development of simple, adaptive robot grippers and hands and we propose an appropriate quick release mechanism that facilitates an instantaneous grasping execution. The quick release mechanism is triggered by a simple distance sensor. The proposed hand utilizes only two actuators to control multiple degrees of freedom over three fingers and it retains the superior grasping capabilities of adaptive grasping mechanisms, even under significant object pose or other environmental uncertainties. The hand achieves a grasping time of 96 ms, a maximum grasping force of 56 N and it is able to secure objects of various shapes at high speeds. The proposed hand can serve as the end-effector of grasping capable Unmanned Aerial Vehicle (UAV) platforms and it can offer perching capabilities, facilitating autonomous docking.
“Unstructured Terrain Navigation and Topographic Mapping With a Low-Cost Mobile Cuboid Robot,” by Andrew S. Morgan, Robert L. Baines, Hayley McClintock, and Brian Scassellati from Yale University, USA.
Current robotic terrain mapping techniques require expensive sensor suites to construct an environmental representation. In this work, we present a cube-shaped robot that can roll through unstructured terrain and construct a detailed topographic map of the surface that it traverses in real time with low computational and monetary expense. Our approach devolves many of the complexities of locomotion and mapping to passive mechanical features. Namely, rolling movement is achieved by sequentially inflating latex bladders that are located on four sides of the robot to destabilize and tip it. Sensing is achieved via arrays of fine plastic pins that passively conform to the geometry of underlying terrain, retracting into the cube. We developed a topography by shade algorithm to process images of the displaced pins to reconstruct terrain contours and elevation. We experimentally validated the efficacy of the proposed robot through object mapping and terrain locomotion tasks.
“Toward a Ballbot for Physically Leading People: A Human-Centered Approach,” by Zhongyu Li and Ralph Hollis from Carnegie Mellon University, USA.
This work presents a new human-centered method for indoor service robots to provide people with physical assistance and active guidance while traveling through congested and narrow spaces. As most previous work is robot-centered, this paper develops an end-to-end framework which includes a feedback path of the measured human positions. The framework combines a planning algorithm and a human-robot interaction module to guide the led person to a specified planned position. The approach is deployed on a person-size dynamically stable mobile robot, the CMU ballbot. Trials were conducted where the ballbot physically led a blindfolded person to safely navigate in a cluttered environment.
“Achievement of Online Agile Manipulation Task for Aerial Transformable Multilink Robot,” by Fan Shi, Moju Zhao, Tomoki Anzai, Keita Ito, Xiangyu Chen, Kei Okada, and Masayuki Inaba from the University of Tokyo, Japan.
Transformable aerial robots are favorable in aerial manipulation tasks for their flexible ability to change configuration during flight. By assuming the robot remains in mild motion, previous research sacrificed aerial agility to simplify the complex non-linear system into a single rigid body with a linear controller. In this paper, we present a framework for agile swing motion for transformable multilink aerial robots. We introduce a computationally efficient non-linear model predictive controller and a joint motion primitive framework to achieve agile transforming motions, and validate them with a novel robot named HYRURS-X. Finally, we implement our framework on a table tennis task to validate its online and agile performance.
“Small-Scale Compliant Dual Arm With Tail for Winged Aerial Robots,” by Alejandro Suarez, Manuel Perez, Guillermo Heredia, and Anibal Ollero from the University of Seville, Spain.
Winged aerial robots represent an evolution of aerial manipulation robots, replacing the multirotor vehicles by fixed or flapping wing platforms. The development of this morphology is motivated in terms of efficiency, endurance and safety in some inspection operations where multirotor platforms may not be suitable. This paper presents a first prototype of compliant dual arm as preliminary step towards the realization of a winged aerial robot capable of perching and manipulating with the wings folded. The dual arm provides 6 DOF (degrees of freedom) for end effector positioning in a human-like kinematic configuration, with a reach of 25 cm (half-scale w.r.t. the human arm), and 0.2 kg weight. The prototype is built with micro metal gear motors, measuring the joint angles and the deflection with small potentiometers. The paper covers the design, electronics, modeling and control of the arms. Experimental results in test-bench validate the developed prototype and its functionalities, including joint position and torque control, bimanual grasping, the dynamic equilibrium with the tail, and the generation of 3D maps with laser sensors attached at the arms.
“A Novel Small-Scale Turtle-inspired Amphibious Spherical Robot,” by Huiming Xing, Shuxiang Guo, Liwei Shi, Xihuan Hou, Yu Liu, Huikang Liu, Yao Hu, Debin Xia, and Zan Li from Beijing Institute of Technology, China.
This paper describes a novel small-scale turtle-inspired Amphibious Spherical Robot (ASRobot) for exploration tasks in restricted environments, such as amphibious areas and narrow underwater caves. A Legged, Multi-Vectored Water-Jet Composite Propulsion Mechanism (LMVWCPM) is designed with four legs, one of which contains three connecting rod parts, one water-jet thruster and three joints driven by digital servos. Using this mechanism, the robot is able to walk like amphibious turtles on various terrains and swim flexibly in submarine environments. A simplified kinematic model is established to analyze crawling gaits. Simulation of the crawling gait yielded the driving torques of the different joints, informing the choice of servos and the sizing of the leg links. We also modeled the robot in water and proposed several underwater locomotion modes. To assess the performance of the proposed robot, a series of experiments was carried out in the lab pool and on flat ground using the prototype robot. Experimental results verified the effectiveness of LMVWCPM and the amphibious control approaches.
“Advanced Autonomy on a Low-Cost Educational Drone Platform,” by Luke Eller, Theo Guerin, Baichuan Huang, Garrett Warren, Sophie Yang, Josh Roy, and Stefanie Tellex from Brown University, USA.
PiDrone is a quadrotor platform created to accompany an introductory robotics course. Students build an autonomous flying robot from scratch and learn to program it through assignments and projects. Existing educational robots do not have significant autonomous capabilities, such as high-level planning and mapping. We present a hardware and software framework for an autonomous aerial robot, in which all software for autonomy can run onboard the drone, implemented in Python. We present an Unscented Kalman Filter (UKF) for accurate state estimation. Next, we present an implementation of Monte Carlo (MC) Localization and Fast-SLAM for Simultaneous Localization and Mapping (SLAM). The performance of UKF, localization, and SLAM is tested and compared to ground truth, provided by a motion-capture system. Our evaluation demonstrates that our autonomous educational framework runs quickly and accurately on a Raspberry Pi in Python, making it ideal for use in educational settings.
“FlightGoggles: Photorealistic Sensor Simulation for Perception-driven Robotics using Photogrammetry and Virtual Reality,” by Winter Guerra, Ezra Tal, Varun Murali, Gilhyun Ryou and Sertac Karaman from the Massachusetts Institute of Technology, USA.
FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in flight in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s) in flight. While a vehicle is in flight in the FlightGoggles virtual reality environment, exteroceptive sensors are rendered synthetically in real time while all complex dynamics are generated organically through natural interactions of the vehicle. The FlightGoggles framework allows for researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest. FlightGoggles is distributed as open-source software along with the photorealistic graphics assets for several simulation environments, under the MIT license at http://flightgoggles.mit.edu.
“An Autonomous Quadrotor System for Robust High-Speed Flight Through Cluttered Environments Without GPS,” by Marc Rigter, Benjamin Morrell, Robert G. Reid, Gene B. Merewether, Theodore Tzanetos, Vinay Rajur, KC Wong, and Larry H. Matthies from University of Sydney, Australia; NASA Jet Propulsion Laboratory, California Institute of Technology, USA; and Georgia Institute of Technology, USA.
Robust autonomous flight without GPS is key to many emerging drone applications, such as delivery, search and rescue, and warehouse inspection. These and other applications require accurate trajectory tracking through cluttered static environments, where GPS can be unreliable, while high-speed, agile flight can increase efficiency. We describe the hardware and software of a quadrotor system that meets these requirements with onboard processing: a custom 300 mm wide quadrotor that uses two wide-field-of-view cameras for visual-inertial motion tracking and relocalization to a prior map. Collision-free trajectories are planned offline and tracked online with a custom tracking controller. This controller includes compensation for drag and variability in propeller performance, enabling accurate trajectory tracking, even at high speeds where aerodynamic effects are significant. We describe a system identification approach that identifies quadrotor-specific parameters via maximum likelihood estimation from flight data. Results from flight experiments are presented, which 1) validate the system identification method, 2) show that our controller with aerodynamic compensation reduces tracking error by more than 50% in both horizontal flights at up to 8.5 m/s and vertical flights at up to 3.1 m/s compared to the state-of-the-art, and 3) demonstrate our system tracking complex, aggressive trajectories.
“Morphing Structure for Changing Hydrodynamic Characteristics of a Soft Underwater Walking Robot,” by Michael Ishida, Dylan Drotman, Benjamin Shih, Mark Hermes, Mitul Luhar, and Michael T. Tolley from the University of California, San Diego (UCSD) and University of Southern California, USA.
Existing platforms for underwater exploration and inspection are often limited to traversing open water and must expend large amounts of energy to maintain a position in flow for long periods of time. Many benthic animals overcome these limitations using legged locomotion and have different hydrodynamic profiles dictated by different body morphologies. This work presents an underwater legged robot with soft legs and a soft inflatable morphing body that can change shape to influence its hydrodynamic characteristics. Flow over the morphing body separates behind the trailing edge of the inflated shape, so whether the protrusion is at the front, center, or back of the robot influences the amount of drag and lift. When the legged robot (2.87 N underwater weight) needs to remain stationary in flow, an asymmetrically inflated body resists sliding by reducing lift on the body by 40% (from 0.52 N to 0.31 N) at the highest flow rate tested while only increasing drag by 5.5% (from 1.75 N to 1.85 N). When the legged robot needs to walk with flow, a large inflated body is pushed along by the flow, causing the robot to walk 16% faster than it would with an uninflated body. The body shape significantly affects the ability of the robot to walk against flow as it is able to walk against 0.09 m/s flow with the uninflated body, but is pushed backwards with a large inflated body. We demonstrate that the robot can detect changes in flow velocity with a commercial force sensor and respond by morphing into a hydrodynamically preferable shape.
#435816 This Light-based Nervous System Helps ...
Last night, way past midnight, I stumbled onto my porch blindly grasping for my keys after a hellish day of international travel. Lights were low, I was half-asleep, yet my hand grabbed the keychain, found the lock, and opened the door.
If you’re rolling your eyes—yeah, it’s not exactly an epic feat for a human. Thanks to the intricate wiring between our brain and millions of sensors dotted on—and inside—our skin, we know exactly where our hand is in space and what it’s touching without needing visual confirmation. But this combined sense of the internal and the external is completely lost to robots, which generally rely on computer vision or surface mechanosensors to track their movements and their interaction with the outside world. It’s not always a winning strategy.
What if, instead, we could give robots an artificial nervous system?
This month, a team led by Dr. Rob Shepherd at Cornell University did just that, with a seriously clever twist. Rather than mimicking the electric signals in our nervous system, his team turned to light. By embedding optical fibers inside a 3D printed stretchable material, the team engineered an “optical lace” that can detect changes in pressure less than a fraction of a pound, and pinpoint the location to a spot half the width of a tiny needle.
The invention isn’t just an artificial skin. Instead, the delicate fibers can be distributed both inside a robot and on its surface, giving it both a sense of tactile touch and—most importantly—an idea of its own body position in space. Optical lace isn’t a superficial coating of mechanical sensors; it’s an entire platform that may finally endow robots with nerve-like networks throughout the body.
Eventually, engineers hope to use this fleshy, washable material to coat the sharp, cold metal interior of current robots, transforming C-3PO into something closer to the human-like hosts of Westworld. Robots with a “bodily” sense could act as better caretakers for the elderly, said Shepherd, because they could assist fragile people without inadvertently bruising or otherwise harming them. The results were published in Science Robotics.
An Unconventional Marriage
The optical lace is especially creative because it marries two contrasting ideas: one biologically inspired, the other wholly alien.
The overarching idea for optical lace is based on the animal kingdom. Through sight, hearing, smell, taste, touch, and other senses, we’re able to interpret the outside world—something scientists call exteroception. Thanks to our nervous system, we perform these computations subconsciously, allowing us to constantly “perceive” what’s going on around us.
Our other perception is purely internal. Proprioception (sorry, it’s not called “inception” though it should be) is how we know where our body parts are in space without having to look at them, which lets us perform complex tasks when blind. Although less intuitive than exteroception, proprioception also relies on stretching and other deformations within the muscles and tendons and receptors under the skin, which generate electrical currents that shoot up into the brain for further interpretation.
In other words, in theory it’s possible to recreate both perceptions with a single information-carrying system.
Here’s where the alien factor comes in. Rather than using electrical properties, the team turned to light as their data carrier. They had good reason. “Compared with electricity, light carries information faster and with higher data densities,” the team explained. Light can also transmit in multiple directions simultaneously, and is less susceptible to electromagnetic interference. Although optical nervous systems don’t exist in the biological world, the team decided to improve on Mother Nature and give it a shot.
Optical Lace
The construction starts with engineering a “sheath” for the optical nerve fibers. The team first used an elastic polyurethane—a synthetic material used in foam cushioning, for example—to make a lattice structure filled with large pores, somewhat like a lattice pie crust. Thanks to rapid, high-resolution 3D printing, the scaffold can vary in stiffness from top to bottom. To increase sensitivity to the outside world, the team made the top of the lattice soft and pliable, to better transfer force to mechanical sensors. In contrast, the “deeper” regions were stiffer, holding their shape under pressure.
Now the fun part. The team next threaded stretchable “light guides” into the scaffold. These fibers transmit photons, and are illuminated with a blue LED light. One, the input light guide, ran horizontally across the soft top part of the scaffold. Others ran perpendicular to the input in a “U” shape, going from more surface regions to deeper ones. These are the output guides. The architecture loosely resembles the wiring in our skin and flesh.
Normally, the output guides are separated from the input by a small air gap. When pressed down, the input light fiber distorts slightly, and if the pressure is high enough, it contacts one of the output guides. This causes light from the input fiber to “leak” to the output one, so that it lights up—the stronger the pressure, the brighter the output.
“When the structure deforms, you have contact between the input line and the output lines, and the light jumps into these output loops in the structure, so you can tell where the contact is happening,” said study author Patricia Xu. “The intensity of this determines the intensity of the deformation itself.”
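To make that mapping concrete: from the brightness of each output guide, one can estimate both where the lace was pressed and how hard. The sketch below uses made-up readings and a simple weighted-centroid estimate; it illustrates the principle only, not the authors’ actual calibration, and the guide spacing is an assumed value.

```python
# Hypothetical readings from output light guides spaced 5 mm apart along the lace.
# Brightness rises on the guides nearest the contact point (arbitrary units).
readings = [0.0, 0.1, 0.8, 1.0, 0.3, 0.0]
spacing_mm = 5.0

total = sum(readings)

# Weighted centroid: each guide "votes" for its position with its brightness,
# so the estimate falls between guides rather than snapping to the nearest one.
position_mm = sum(i * spacing_mm * r for i, r in enumerate(readings)) / total

# Peak brightness serves as a crude proxy for contact pressure.
pressure_proxy = max(readings)

print(round(position_mm, 2), pressure_proxy)
```

Here the brightest guides are the third and fourth, so the centroid lands between them, at roughly 13.4 mm along the lace.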
Double Perception
As a proof-of-concept for proprioception, the team made a cylindrical lace with one input and 12 output channels. They varied the stiffness of the scaffold along the cylinder, and by pressing down at different points, were able to calculate how much each part stretched and deformed—a necessary precursor to knowing where different regions of the structure are moving in space. It’s a very rudimentary sort of proprioception, but one that will become more sophisticated with increasing numbers of strategically placed mechanosensors.
The test for exteroception was a whole lot stranger. Here, the team engineered another optical lace with 15 output channels and turned it into a squishy piano. When pressed down, an Arduino microcontroller translated light output signals into sound based on the position of each touch. The stronger the pressure, the louder the volume. While not a musical masterpiece, the demo proved their point: the optical lace faithfully reported the strength and location of each touch.
A More Efficient Robot
Although remarkably novel, the optical lace isn’t yet ready for prime time. One problem is scalability: because of light loss, the material is limited to a certain size. However, rather than coating an entire robot, it may help to add optical lace to body parts where perception is critical—for example, fingertips and hands.
The team sees plenty of potential to keep developing the artificial flesh. Depending on particular needs, both the light guides and scaffold can be modified for sensitivity, spatial resolution, and accuracy. Multiple optical fibers that measure for different aspects—pressure, pain, temperature—can potentially be embedded in the same region, giving robots a multitude of senses.
“In this way, we hope to reduce the number of electronics and combine signals from multiple sensors without losing information,” the authors said. By taking inspiration from biological networks, it may even be possible to use various inputs through an optical lace to control how the robot behaves, closing the loop from sensation to action.
Image Credit: Cornell Organic Robotics Lab. A flexible, porous lattice structure is threaded with stretchable optical fibers containing more than a dozen mechanosensors and attached to an LED light. When the lattice structure is pressed, the sensors pinpoint changes in the photon flow.
#435023 Inflatable Robot Astronauts and How to ...
The typical cultural image of a robot (a steel, chrome, humanoid bucket of bolts) is often far from the reality of cutting-edge robotics research. There are difficulties, both social and technological, in realizing the science fiction image of a robot, let alone one that can actually help around the house. Often, the great expense of building a humanoid robot that performs dozens of tasks quite badly makes less sense than building some other design optimized for one specific situation.
A team of scientists from Brigham Young University has received funding from NASA to investigate an inflatable robot called, improbably, King Louie. The robot was developed by Pneubotics, who have a long track record in the world of soft robotics.
In space, weight is at a premium. The world watched in awe and amusement when Commander Chris Hadfield sang “Space Oddity” from the International Space Station—but launching that guitar into space likely cost around $100,000. A good price for launching payload into outer space is on the order of $10,000 per pound ($22,000/kg).
For that price, it would cost a cool $1.7 million to launch Boston Dynamics’ famous ATLAS robot to the International Space Station, and its bulk would be inconvenient in the cramped living quarters available. By contrast, an inflatable robot like King Louie is substantially lighter and can simply be deflated and folded away when not in use. The robot can be manufactured from cheap, lightweight, and flexible materials, and minor damage is easy to repair.
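These figures can be sanity-checked with quick arithmetic using the article’s cost estimate. The ATLAS mass below is a rough public estimate, not a number from the article:

```python
# Back-of-the-envelope launch costs at the article's ~$10,000/lb
# ($22,000/kg) figure. Masses are rough estimates, not exact specs.
COST_PER_KG = 22_000  # USD per kilogram to orbit

atlas_kg = 80  # Boston Dynamics ATLAS, roughly (assumed mass)
launch_cost = atlas_kg * COST_PER_KG
print(f"${launch_cost:,}")  # on the order of the article's $1.7 million
```

A deflated soft robot weighing a tenth as much would cut that bill proportionally, which is the whole appeal.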
Inflatable Robots Under Pressure
The concept of inflatable robots is not new: indeed, earlier prototypes of King Louie were exhibited back in 2013 at Google I/O’s After Hours, flailing away at each other in a boxing ring. Sparks might fly in fights between traditional robots, but the aim here was to demonstrate that the robots are passively safe: the soft, inflatable figures won’t accidentally smash delicate items when moving around.
Health and safety regulations form part of the reason why robots don’t work alongside humans more often, but soft robots would be far safer to use in healthcare or around children (whose first instinct, according to BYU’s promotional video, is either to hug or punch King Louie). It’s also much harder to have nightmarish fantasies about robotic domination with these friendlier softbots: Terminator would’ve been a much shorter franchise if Skynet’s droids were inflatable.
Robotic exoskeletons are increasingly used for physical rehabilitation therapies, as well as for industrial purposes. As countries like Japan seek to care for their aging populations with robots and alleviate the burden on nurses, who suffer from some of the highest rates of back injuries of any profession, soft robots will become increasingly attractive for use in healthcare.
Precision and Proprioception
The main issue is one of control. Rigid, metallic robots may be more expensive and more dangerous, but the simple fact of their rigidity makes it easier to map out and control the precise motions of each of the robot’s limbs, digits, and actuators. Individual motors attached to these rigid robots can allow for a great many degrees of freedom—individual directions in which parts of the robot can move—and precision control.
For example, ATLAS has 28 degrees of freedom, while Shadow’s dexterous robot hand alone has 20. This is much harder to do with an inflatable robot, for precisely the same reasons that make it safer. Without hard and rigid bones, other methods of control must be used.
In the case of King Louie, the robot is made up of many expandable air chambers. An air-compressor changes the pressure levels in these air chambers, allowing them to expand and contract. This harks back to some of the earliest pneumatic automata. Pairs of chambers act antagonistically, like muscles, such that when one chamber “tenses,” another relaxes—allowing King Louie to have, for example, four degrees of freedom in each of its arms.
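The antagonistic-pair idea can be illustrated with a toy model. The pressures, chamber area, and moment arm below are invented for illustration and are not King Louie’s actual parameters:

```python
# Illustrative model of one antagonistic chamber pair (not the actual
# King Louie controller). Net joint torque is proportional to the
# pressure difference across the pair, like a flexor/extensor muscle.

def joint_torque(p_flexor, p_extensor, area=0.002, moment_arm=0.05):
    """Torque (N*m) from a pressure difference (Pa) across a pair.

    area       -- effective chamber cross-section (m^2), assumed
    moment_arm -- lever arm about the joint (m), assumed
    """
    return (p_flexor - p_extensor) * area * moment_arm

# Inflating the flexor while venting the extensor bends the joint:
torque = joint_torque(150_000, 100_000)
print(torque)  # positive torque -> flexion
```

Reversing which chamber holds the higher pressure reverses the sign of the torque, which is how one pair yields one controllable degree of freedom.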
The robot is also surprisingly strong. Professor Killpack, who works at BYU on the project, estimates that its payload capacity is comparable to that of other humanoid robots on the market, like Rethink Robotics’ Baxter (RIP).
Proprioception, that sixth sense that allows us to map out and control our own bodies and muscles in fine detail, is being enhanced for a wider range of soft, flexible robots with the use of machine learning algorithms connected to input from a whole host of sensors on the robot’s body.
Part of the reason this is so complicated with soft, flexible robots is that the shape and “map” of the robot’s body can change; that’s the whole point. But this means that every time King Louie is inflated, its body is a slightly different shape; when it becomes deformed, for example after picking up objects, the shape changes again, and the complex ways in which the fabric can twist and bend are far more difficult to model and sense than the behavior of the rigid metal of King Louie’s hard counterparts. When you’re looking for precision, seemingly small changes can be the difference between successfully holding an object and dropping it.
Learning to Move
Researchers at BYU are therefore spending a great deal of time on how to control the soft-bot well enough to make it comparably useful. One method involves the commercial tracking technology used in the Vive VR system: by moving the game controller, which provides constant feedback to the robot’s arm, you can control its position. Since the tracking software provides an estimate of the robot’s joint angles and continues to provide feedback until the arm is correctly aligned, this type of feedback method is likely to work regardless of small changes to the robot’s shape.
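The closed-loop idea (command, re-estimate, correct) can be sketched as simple proportional feedback. The gain, tolerance, and convergence model below are invented for illustration; the real system uses Vive tracking estimates rather than this toy update rule:

```python
# Minimal sketch of feedback control: keep nudging the arm until the
# tracker-estimated joint angle matches the target. Because the loop
# corrects the *measured* error, it tolerates an imperfect body model.

def servo_to_target(target, estimate, gain=0.5, tol=0.01, max_steps=100):
    """Proportional feedback: move toward target until within tol."""
    for _ in range(max_steps):
        error = target - estimate
        if abs(error) < tol:
            break
        estimate += gain * error  # command a correction, then re-estimate
    return estimate

final = servo_to_target(target=1.2, estimate=0.0)
print(abs(final - 1.2) < 0.01)  # True: converged without an exact model
```

The point of the sketch is that nothing in the loop depends on knowing the robot’s exact shape, only on measuring where it currently is.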
The other technologies the researchers are looking into for their softbot include arrays of flexible, tactile sensors to place on the softbot’s skin, and minimizing the complex cross-talk between these arrays to get coherent information about the robot’s environment. As with some of the new proprioception research, the project is looking into neural networks as a means of modeling the complicated dynamics—the motion and response to forces—of the softbot. This method relies on large amounts of observational data, mapping how the robot is inflated and how it moves, rather than explicitly understanding and solving the equations that govern its motion—which hopefully means the methods can work even as the robot changes.
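The data-driven modeling idea, learning input-to-motion behavior from observations instead of solving the governing equations, can be shown with a deliberately tiny stand-in: a linear fit where the researchers would use a neural network. All numbers below are invented:

```python
# Toy "learned dynamics": fit an input->motion model purely from
# observed (pressure, displacement) pairs, with no physics equations.
# A linear least-squares fit stands in for the project's neural nets.

def fit_linear(xs, ys):
    """Least-squares slope and intercept from observations alone."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical observations of an inflated limb:
pressures = [10, 20, 30, 40]
displacements = [1.1, 2.0, 3.1, 3.9]
slope, intercept = fit_linear(pressures, displacements)
predicted = slope * 25 + intercept  # predict an unseen command
print(round(predicted, 2))
```

Because the model comes entirely from data, re-fitting on fresh observations after the robot deforms updates the model without re-deriving any equations.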
There’s still a long way to go before soft and inflatable robots can be controlled sufficiently well to perform all the tasks they might be used for. Ultimately, no one robotic design is likely to be perfect for every situation.
Nevertheless, research like this gives us hope that one day, inflatable robots could be useful tools, or even companions, at which point the advertising slogans write themselves: Don’t let them down, and they won’t let you down!
Image Credit: Brigham Young University.
#434837 In Defense of Black Box AI
Deep learning is powering some amazing new capabilities, but we find it hard to scrutinize the workings of these algorithms. Lack of interpretability in AI is a common concern and many are trying to fix it, but is it really always necessary to know what’s going on inside these “black boxes”?
In a recent perspective piece for Science, Elizabeth Holm, a professor of materials science and engineering at Carnegie Mellon University, argued in defense of the black box algorithm. I caught up with her last week to find out more.
Edd Gent: What’s your experience with black box algorithms?
Elizabeth Holm: I got a dual PhD in materials science and engineering and scientific computing. I came to academia about six years ago and part of what I wanted to do in making this career change was to refresh and revitalize my computer science side.
I realized that computer science had changed completely. It used to be about algorithms and making codes run fast, but now it’s about data and artificial intelligence. There are the interpretable methods like random forest algorithms, where we can tell how the machine is making its decisions. And then there are the black box methods, like convolutional neural networks.
Once in a while we can find some information about their inner workings, but most of the time we have to accept their answers and kind of probe around the edges to figure out the space in which we can use them and how reliable and accurate they are.
EG: What made you feel like you had to mount a defense of these black box algorithms?
EH: When I started talking with my colleagues, I found that the black box nature of many of these algorithms was a real problem for them. I could understand that because we’re scientists, we always want to know why and how.
It got me thinking as a bit of a contrarian, “Are black boxes all bad? Must we reject them?” Surely not, because human thought processes are fairly black box. We often rely on human thought processes that the thinker can’t necessarily explain.
It’s looking like we’re going to be stuck with these methods for a while, because they’re really helpful. They do amazing things. And so there’s a very pragmatic realization that these are the best methods we’ve got to do some really important problems, and we’re not right now seeing alternatives that are interpretable. We’re going to have to use them, so we better figure out how.
EG: In what situations do you think we should be using black box algorithms?
EH: I came up with three rules. The simplest rule is: when the cost of a bad decision is small and the value of a good decision is high, it’s worth it. The example I gave in the paper is targeted advertising. If you send an ad no one wants, it doesn’t cost a lot. If you’re the receiver, it doesn’t cost a lot to get rid of it.
There are cases where the cost is high, and that’s when we choose the black box if it’s the best option to do the job. Things get a little trickier here because we have to ask, “What are the costs of bad decisions, and do we really have them fully characterized?” We also have to be very careful knowing that our systems may have biases, they may have limitations in where you can apply them, they may be breakable.
But at the same time, there are certainly domains where we’re going to test these systems so extensively that we know their performance in virtually every situation. And if their performance is better than the other methods, we need to do it. Self-driving vehicles are a significant example: it’s almost certain they’re going to have to use black box methods, and that they’re going to end up being better drivers than humans.
The third rule is the more fun one for me as a scientist, and that’s the case where the black box really enlightens us as to a new way to look at something. We have trained a black box to recognize the fracture energy of breaking a piece of metal from a picture of the broken surface. It did a really good job, and humans can’t do this and we don’t know why.
What the computer seems to be seeing is noise. There’s a signal in that noise, and finding it is very difficult, but if we do we may find something significant to the fracture process, and that would be an awesome scientific discovery.
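The “signal in the noise” idea can be illustrated with a synthetic toy: images that look like pure noise but whose roughness secretly encodes a property, recoverable by a simple statistic. Everything below is invented for illustration and is not the Carnegie Mellon dataset or model:

```python
import random

# Toy illustration of a signal hidden in noise: each synthetic
# "fracture surface" is pure Gaussian noise, but its roughness
# (standard deviation) tracks the fracture energy that generated it.
# No single pixel is meaningful, yet an aggregate statistic is.

random.seed(0)  # fixed seed so the demo is repeatable

def make_image(energy, n_pixels=2000):
    # Assumed relation: higher fracture energy -> noisier pixels.
    return [random.gauss(0, energy) for _ in range(n_pixels)]

def roughness(img):
    mean = sum(img) / len(img)
    return (sum((p - mean) ** 2 for p in img) / len(img)) ** 0.5

energies = [1.0, 2.0, 3.0, 4.0]
images = [make_image(e) for e in energies]
for e, img in zip(energies, images):
    print(e, round(roughness(img), 2))  # roughness rises with energy
```

Here we planted the statistic ourselves, so we know what to measure; the interesting case in the interview is the reverse, where the black box finds such a statistic and the scientists then have to hunt for what it saw.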
EG: Do you think there’s been too much emphasis on interpretability?
EH: I think the interpretability problem is a fundamental, fascinating computer science grand challenge and there are significant issues where we need to have an interpretable model. But how I would frame it is not that there’s too much emphasis on interpretability, but rather that there’s too much dismissiveness of uninterpretable models.
I think that some of the current social and political issues surrounding some very bad black box outcomes have convinced people that all machine learning and AI should be interpretable because that will somehow solve those problems.
Asking humans to explain their rationale has not eliminated bias, or stereotyping, or bad decision-making in humans. Relying too much on interpretability perhaps puts the responsibility in the wrong place for getting better results. I can make a better black box without knowing exactly in what way the first one was bad.
EG: Looking further into the future, do you think there will be situations where humans will have to rely on black box algorithms to solve problems we can’t get our heads around?
EH: I do think so, and it’s not as much of a stretch as we think it is. For example, humans don’t design the circuit maps of computer chips anymore. We haven’t for years. It’s not a black box algorithm that designs those circuits, but we’ve long since given up trying to understand a particular computer chip’s design.
With the billions of circuits in every computer chip, the human mind can’t encompass it, either in scope or just in the pure time it would take to trace every circuit. There are going to be cases where we want a system so complex that only the patience of computers and their ability to work in very high-dimensional spaces will be able to do it.
So we can continue to argue about interpretability, but we need to acknowledge that we’re going to need to use black boxes. And this is our opportunity to do our due diligence to understand how to use them responsibly, ethically, and with benefits rather than harm. And that’s going to be a social conversation as well as a scientific one.
*Responses have been edited for length and style
Image Credit: Chingraph / Shutterstock.com