#433634 This Robotic Skin Makes Inanimate ...
In Goethe’s poem “The Sorcerer’s Apprentice,” made world-famous by its adaptation in Disney’s Fantasia, a lazy apprentice, left to fetch water, uses magic to bewitch a broom into performing his chores for him. Now, new research from Yale has opened up the possibility of being able to animate—and automate—household objects by fitting them with a robotic skin.
Yale’s Soft Robotics lab, the Faboratory, is led by Professor Rebecca Kramer-Bottiglio, and has long investigated the possibilities associated with new kinds of manufacturing. While the typical image of a robot is hard, cold steel and rigid movements, soft robotics aims to create something more flexible and versatile. After all, the human body is made up of soft, flexible surfaces, and the world is designed for us. Soft, deformable robots could change shape to adapt to different tasks.
When designing a robot, key components are the robot’s sensors, which allow it to perceive its environment, and its actuators, the electrical or pneumatic motors that allow the robot to move and interact with its environment.
Consider your hand, which has temperature and pressure sensors, but also muscles as actuators. The omni-skins, as the Science Robotics paper dubs them, combine sensors and actuators, embedding them in an elastic sheet. The robotic skins are moved by pneumatic actuators or by shape-memory alloys that spring back into a preset shape. If such a skin is wrapped around a soft, deformable object, driving the skin’s actuators can make the object crawl along a surface.
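To make the sensor-actuator pairing concrete, here is a minimal sense-act loop in Python. It is purely illustrative: the skin interface, its strain readings, and the inflate/deflate behavior are all invented stand-ins, not the Faboratory’s hardware or control code.

```python
# Hypothetical sketch of a sense-act cycle for a robotic skin.
# Every name here is invented for illustration; real skins pair strain
# sensors with pneumatic or shape-memory-alloy actuators in loops like this.

class FakeSkin:
    """Stand-in for a skin with one strain sensor and one pneumatic actuator."""
    def __init__(self):
        self.strain = 0.0

    def read_strain(self):      # sensor: how stretched is the elastic sheet?
        return self.strain

    def inflate(self):          # actuator: pressurize, contracting the skin
        self.strain = min(1.0, self.strain + 0.1)

    def deflate(self):          # actuator: vent, letting the skin relax
        self.strain = max(0.0, self.strain - 0.1)


def crawl(skin, peak_strain=0.5, strides=3):
    """Cycle the skin between relaxed and contracted; wrapped around a
    deformable object, repeated cycles like this produce crawling."""
    for _ in range(strides):
        while skin.read_strain() < peak_strain:   # sense, then act
            skin.inflate()
        while skin.read_strain() > 0.0:
            skin.deflate()


crawl(FakeSkin())
```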
The key to the design here is flexibility: rather than adding chips, sensors, and motors into every household object to turn them into individual automatons, the same skin can be used for many purposes. “We can take the skins and wrap them around one object to perform a task—locomotion, for example—and then take them off and put them on a different object to perform a different task, such as grasping and moving an object,” said Kramer-Bottiglio. “We can then take those same skins off that object and put them on a shirt to make an active wearable device.”
The task is then to dream up applications for the omni-skins. Initially, you might imagine commanding a stuffed toy to fetch the remote control for you, or animating a sponge to wipe down kitchen surfaces—but this is just the beginning. The scientists attached the skins to a soft tube and camera, creating a worm-like robot that could compress itself and crawl into small spaces for rescue missions. The same skins could then be worn by a person to sense their posture. One could easily imagine this being adapted into a soft exoskeleton for medical or industrial purposes: for example, helping with rehabilitation after an accident or injury.
The initial motivation for creating the skins came from an environment where space and weight are at a premium, and humans are forced to improvise with whatever’s at hand: outer space. Kramer-Bottiglio originally began the work after NASA issued a call for soft robotic systems for use by astronauts. Instead of wasting valuable rocket payload by sending up a heavy metal droid like Atlas to fetch items or perform repairs, soft robotic skins with modular sensors could be adapted for a range of different uses on the fly.
By reassembling the skin’s components, astronauts could turn a crumpled ball of paper into the chassis for a robot that performs repairs on the spaceship or explores the lunar surface. The dynamic compression provided by the robotic skin could also be used in g-suits to protect astronauts when they rapidly accelerate or decelerate.
“One of the main things I considered was the importance of multi-functionality, especially for deep space exploration where the environment is unpredictable. The question is: How do you prepare for the unknown unknowns? … Given the design-on-the-fly nature of this approach, it’s unlikely that a robot created using robotic skins will perform any one task optimally,” Kramer-Bottiglio said. “However, the goal is not optimization, but rather diversity of applications.”
There are still problems to resolve. Many of the videos of the skins show that they rely on an external power supply; creating smaller batteries that can power wearable devices has been a focus of cutting-edge materials science research for some time. Much of the lab’s expertise is in creating flexible, stretchable electronics that can be deformed by the actuators without breaking the circuitry. In the future, the team hopes to streamline the production process; if the components could be 3D printed, the skins could be created as needed.
In addition, building robotic hardware capable of an impressive range of precise motions is one challenge; writing the software to control those robots, and to enable them to perform a variety of tasks, is quite another. With soft robots, designing that control software becomes even more complex, because the body itself can change shape and deform as the robot moves. The same set of programmed motions can therefore produce different results depending on the environment.
“Let’s say I have a soft robot with four legs that crawls along the ground, and I make it walk up a hard slope,” Dr. David Howard, who works on robotics at CSIRO in Australia, explained to ABC.
“If I make that slope out of gravel and I give it the same control commands, the actual body is going to deform in a different way, and I’m not necessarily going to know what that is.”
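Dr. Howard’s gravel example boils down to the gap between replaying commands open-loop and correcting them with sensor feedback. Below is a toy numerical illustration in Python (not any group’s actual controller): a one-dimensional “robot” whose deforming body delivers only a fraction of each commanded step.

```python
# Toy contrast between open-loop replay and closed-loop correction.
# terrain_gain models unknown body deformation: each commanded unit of
# motion yields only, say, 0.6 units of real progress on gravel.

def open_loop(steps, terrain_gain):
    """Replay a fixed gait; deformation error accumulates unchecked."""
    position = 0.0
    for _ in range(steps):
        position += 1.0 * terrain_gain
    return position

def closed_loop(steps, terrain_gain):
    """Measure actual progress each step and rescale the next command."""
    position, command = 0.0, 1.0
    for step in range(1, steps + 1):
        position += command * terrain_gain
        error = float(step) - position     # intended vs. actual position
        command = 1.0 + error              # compensate on the next step
    return position

print(open_loop(10, 0.6))    # about 6.0: falls well short of the intended 10
print(closed_loop(10, 0.6))  # about 9.3: feedback recovers most of the gap
```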
Despite these and other challenges, research like that at the Faboratory still hopes to redefine how we think of robots and robotics. Instead of a robot that imitates a human and manipulates objects, the objects themselves will become programmable matter, capable of moving autonomously and carrying out a range of tasks. Futurists speculate about a world where most objects are automated to some degree and can assemble and repair themselves, or are even built entirely of tiny robots.
The tale of the Sorcerer’s Apprentice was first written in 1797, at the dawn of the industrial revolution, over a century before the word “robot” was even coined. Yet more and more roboticists aim to prove Arthur C. Clarke’s maxim: any sufficiently advanced technology is indistinguishable from magic.
Image Credit: Joran Booth, The Faboratory
#433506 MIT’s New Robot Taught Itself to Pick ...
Back in 2016, somewhere in a Google-owned warehouse, more than a dozen robotic arms spent hours quietly grasping objects of various shapes and sizes, teaching themselves how to pick up and hold the items appropriately—mimicking the way a baby gradually learns to use its hands.
Now, scientists from MIT have made a breakthrough in machine learning: their new system can not only teach itself to see and identify objects, but also understand how best to manipulate them.
This means that, armed with the new machine learning routine referred to as “dense object nets (DON),” the robot would be capable of picking up an object that it’s never seen before, or in an unfamiliar orientation, without resorting to trial and error—exactly as a human would.
The deceptively simple ability to dexterously manipulate objects with our hands is a huge part of why humans are the dominant species on the planet. We take it for granted. Hardware innovations like the Shadow Dexterous Hand have enabled robots to softly grip and manipulate delicate objects for many years, but the software required to control these precision-engineered machines in a range of circumstances has proved harder to develop.
This was not for want of trying. The Amazon Robotics Challenge offers millions of dollars in prizes (and potentially far more in contracts, as their $775m acquisition of Kiva Systems shows) for the best dexterous robot able to pick and package items in their warehouses. The lucrative dream of a fully-automated delivery system is missing this crucial ability.
Meanwhile, the RoboCup@Home challenge—an offshoot of the popular RoboCup tournament for soccer-playing robots—aims to make everyone’s dream of having a robot butler a reality. The competition involves teams drilling their robots through simple household tasks that require social interaction or object manipulation, like helping to carry the shopping, sorting items onto a shelf, or guiding tourists around a museum.
Yet all of these endeavors have proved difficult; the tasks often have to be simplified to enable the robot to complete them at all. New or unexpected elements, such as those encountered in real life, more often than not throw the system entirely. Programming the robot’s every move in explicit detail is not a scalable solution: this can work in the highly-controlled world of the assembly line, but not in everyday life.
Computer vision is improving all the time. Neural networks, including those you train every time you prove that you’re not a robot with CAPTCHA, are getting better at sorting objects into categories, and identifying them based on sparse or incomplete data, such as when they are occluded, or in different lighting.
But many of these systems require enormous amounts of input data, which is impractical, slow to generate, and often needs to be laboriously categorized by humans. There are entirely new jobs that require people to label, categorize, and sift large bodies of data to make them ready for supervised machine learning. This can make machine learning undemocratic. If you’re Google, you can make thousands of unwitting volunteers label your images for you with CAPTCHA. If you’re IBM, you can hire people to manually label that data. If you’re an individual or startup trying something new, however, you will struggle to access the vast troves of labeled data available to the bigger players.
This is why new systems that can potentially train themselves over time, or that allow robots to deal with situations they’ve never seen before without mountains of labeled data, are a holy grail in artificial intelligence. The work done by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is part of a new wave of “self-supervised” machine learning systems—little of the data used was labeled by humans.
The robot first inspects the new object from multiple angles, building up a 3D picture of the object with its own coordinate system. This then allows the robotic arm to identify a particular feature on the object—such as a handle, or the tongue of a shoe—from various different angles, based on its relative distance to other grid points.
This is the real innovation: the new means of representing objects to grasp as mapped-out 3D objects, with grid points and subsections of their own. Rather than using a computer vision algorithm to identify a door handle, and then activating a door handle grasping subroutine, the DON system treats all objects by making these spatial maps before classifying or manipulating them, enabling it to deal with a greater range of objects than in other approaches.
“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” said PhD student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow student Pete Florence, alongside MIT professor Russ Tedrake. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.”
Class-specific descriptors, which can be applied to the object features, can allow the robot arm to identify a mug, find the handle, and pick the mug up appropriately. Object-specific descriptors allow the robot arm to select a particular mug from a group of similar items. I’m already dreaming of a robot butler reliably picking my favorite mug when it serves me coffee in the morning.
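As a rough, unofficial sketch of the descriptor idea (in Python with NumPy, not the authors’ code), the snippet below finds the pixel in a new view whose descriptor lies nearest to a chosen reference point’s descriptor. The “network” here is a deterministic stub standing in for a trained Dense Object Nets model; the nearest-neighbor matching logic is the part being illustrated.

```python
import numpy as np

def match_point(net, ref_image, ref_pixel, query_image):
    """Find the pixel in query_image matching ref_pixel in ref_image,
    by nearest neighbor in descriptor space."""
    target = net(ref_image)[ref_pixel]               # descriptor of, e.g.,
    query = net(query_image)                         # a point on a mug handle
    dists = np.linalg.norm(query - target, axis=-1)  # H x W distance map
    return np.unravel_index(np.argmin(dists), dists.shape)

def fake_net(image):
    """Stub for a trained network mapping an H x W x 3 image to an
    H x W x 16 grid of per-pixel descriptors. The fixed seed makes the
    stub deterministic, so the same point gets the same descriptor in
    both views; a real model learns that consistency from data."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(*image.shape[:2], 16))

view_a = np.zeros((48, 64, 3))
view_b = np.zeros((48, 64, 3))
print(match_point(fake_net, view_a, (10, 20), view_b))  # recovers (10, 20)
```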
Google’s robot arm-y was an attempt to develop a general grasping algorithm: one that could identify, categorize, and appropriately grip as many items as possible. This requires a great deal of training time and data, which is why Google parallelized their project by having 14 robot arms feed data into a single neural network brain: even then, the algorithm may fail with highly specific tasks. Specialist grasping algorithms might require less training if they’re limited to specific objects, but then your software is useless for general tasks.
As the roboticists noted, their system, with its ability to identify parts of an object rather than just a single object, is better suited to specific tasks, such as “grasp the racquet by the handle,” than Amazon Robotics Challenge robots, which identify whole objects by segmenting an image.
This work is small-scale at present. It has been tested with a few classes of objects, including shoes, hats, and mugs. Yet the use of these dense object nets as a way for robots to represent and manipulate new objects may well be another step towards the ultimate goal of generalized automation: a robot capable of performing every task a person can. If that point is reached, the question that will remain is how to cope with being obsolete.
Image Credit: Tom Buehler/CSAIL
#433474 How to Feed Global Demand for ...
“You really can’t justify tuna in Chicago as a source of sustenance.” That’s according to Dr. Sylvia Earle, a National Geographic Society Explorer who was the first female chief scientist at NOAA. She came to the Good Food Institute’s Good Food Conference to deliver a call to action around global food security, agriculture, environmental protection, and the future of consumer choice.
It seems like all options should be on the table to feed an exploding population threatened by climate change. But Dr. Earle, who is faculty at Singularity University, drew a sharp distinction between seafood for sustenance versus seafood as a choice. “There is this widespread claim that we must take large numbers of wildlife from the sea in order to have food security.”
A few minutes later, Dr. Earle directly addressed those of us in the audience. “We know the value of a dead fish,” she said. That’s market price. “But what is the value of a live fish in the ocean?”
That’s when my mind blew open. What is the value—or put another way, the cost—of using the ocean as a major source of protein for humans? How do you put a number on that? Are we talking about dollars and cents, or about something far larger?
Dr. Liz Specht of the Good Food Institute drew the audience’s attention to a strange imbalance. Currently, about half of the world’s yearly seafood supply comes from aquaculture; the other half is wild-caught. It’s hard to imagine half of your meat coming directly from the forests and the plains, isn’t it? And yet half of the world’s seafood comes from direct harvesting of the oceans, by way of massive overfishing, a terrible toll from bycatch, a widespread lack of regulation and enforcement, and even human rights violations such as slavery.
The search for solutions is on, from both within the fishing industry and from external agencies such as governments and philanthropists. Could there be another way?
Makers of plant-based seafood and clean seafood think they know how to feed the global demand for seafood without harming the ocean. These companies are part of a larger movement harnessing technology to reduce our reliance on wild and domesticated animals—and all the environmental, economic, and ethical issues that come with it.
Producers of plant-based seafood (20 or so currently) are working to capture the taste, texture, and nutrition of conventional seafood without being limited by geography or the health of local marine populations. As with plant-based meat, makers of plant-based seafood are harnessing food science and advances in chemistry, biology, and engineering to make great food. The industry’s strategy? Start with what the consumer wants, and then figure out how to achieve that great taste through technology.
So how does plant-based seafood taste? Pretty good, as it turns out. (The biggest benefit of a food-oriented conference is that your mouth is always full!)
I sampled “tuna” salad made from Good Catch Foods’ fish-free tuna, which is sourced from legumes; the texture was nearly indistinguishable from that of flaked albacore tuna, and there was no lingering fishy taste to overpower my next bite. In a blind taste test, I probably wouldn’t have known that I was eating a plant-based seafood alternative. Next I reached for Ocean Hugger Foods’ Ahimi, a tomato-based alternative to raw tuna. I adore Hawaiian poke, so I was pleasantly surprised when my Ahimi-based poke captured the bite of ahi tuna. It wasn’t quite as delightfully fatty as raw tuna, but with wild tuna populations struggling to recover from a 97% decline in numbers from 40 years ago, Ahimi is a giant stride in the right direction.
These plant-based alternatives aren’t the only game in town, however.
The clean meat industry, which has also been called “cultured meat” or “cellular agriculture,” isn’t seeking to lure consumers away from animal protein. Instead, cells are sampled from live animals and grown in bioreactors—meaning that no animal is slaughtered to produce real meat.
Clean seafood is poised to piggyback off platforms developed for clean meat; growing fish cells in the lab should rely on the same processes as growing meat cells. I know of four companies currently focusing on seafood (Finless Foods, Wild Type, BlueNalu, and Seafuture Sustainable Biotech), and a few more are likely to emerge from stealth mode soon.
Importantly, there’s likely not much difference between growing clean seafood from the top or the bottom of the food chain. Tuna, for example, are top predators that must grow for at least 10 years before they’re suitable as food. Each year, a tuna consumes thousands of pounds of other fish, shellfish, and plankton. That “long tail of groceries,” said Dr. Earle, “is a pretty expensive choice.” Excitingly, clean tuna would “level the trophic playing field,” as Dr. Specht pointed out.
All this is only the beginning of what might be possible.
Combining synthetic biology with clean meat and seafood means that future products could be personalized for individual taste preferences or health needs, by reprogramming the DNA of the cells in the lab. Industries such as bioremediation and biofuels likely have a lot to teach us about sourcing new ingredients and flavors from algae and marine plants. By harnessing rapid advances in automation, robotics, sensors, machine vision, and other big-data analytics, the manufacturing and supply chains for clean seafood could be remarkably safe and robust. Clean seafood would be just that: clean, without pathogens, parasites, or the plastic threatening to fill our oceans, meaning that you could enjoy it raw.
What about price? Dr. Mark Post, a pioneer in clean meat who is also faculty at Singularity University, estimated that 80% of clean-meat production costs come from the expensive medium in which cells are grown—and some ingredients in the medium are themselves sourced from animals, which misses the point of clean meat. Plus, to grow a whole cut of food, like a fish fillet, the cells need to be coaxed into a complex 3D structure with various cell types like muscle cells and fat cells. These two technical challenges must be solved before clean meat and seafood give consumers the experience they want, at the price they want.
In this respect clean seafood has an unusual edge. Most of what we know about growing animal cells in the lab comes from the research and biomedical industries (from tissue engineering, for example)—but growing cells to replace an organ has different constraints than growing cells for food. The link between clean seafood and biomedicine is less direct, empowering innovators to throw out dogma and find novel reagents, protocols, and equipment to grow seafood that captures the tastes, textures, smells, and overall experience of dining by the ocean.
Asked to predict when we’ll be seeing clean seafood in the grocery store, Lou Cooperhouse, the CEO of BlueNalu, explained that the challenges aren’t only in the lab: marketing, sales, distribution, and communication with consumers are all critical. As Niya Gupta, the founder of Fork & Goode, said, “The question isn’t ‘can we do it’, but ‘can we sell it’?”
The good news is that the clean meat and seafood industry is highly collaborative; there are at least two dozen companies in the space, and they’re all talking to each other. “This is an ecosystem,” said Dr. Uma Valeti, the co-founder of Memphis Meats. “We’re not competing with each other.” It will likely be at least a decade before science, business, and regulation enable clean meat and seafood to routinely appear on restaurant menus, let alone market shelves.
Until then, think carefully about your food choices. Meditate on Dr. Earle’s question: “What is the real cost of that piece of halibut?” Or chew on this from Dr. Ricardo San Martin, of the Sutardja Center at the University of California, Berkeley: “Food is a system of meanings, not an object.” What are you saying when you choose your food, about your priorities and your values and how you want the future to look? Do you think about animal welfare? Most ethical regulations don’t extend to marine life, and if you don’t think that ocean creatures feel pain, consider the lobster.
Seafood is largely an acquired taste, since most of us don’t live near the water. Imagine a future in which children grow up loving the taste of delicious seafood but without hurting a living animal, the ocean, or the global environment.
Do more than imagine. As Dr. Earle urged us, “Convince the public at large that this is a really cool idea.”
Widely available: Gardein, Sophie’s Kitchen, Quorn, Vegetarian Plus, Heritage, Loma Linda, The Vegetarian Butcher
Medium availability: Ahimi (Ocean Hugger), Cedar Lake, SoFine Foods, Akua, Hungry Planet, Heritage Health Food, May Wah
Emerging: New Wave Foods, To-funa Fish, Seamore, Good Catch, Odontella, Terramino Foods, VBites
Table based on Figure 5 of the report “An Ocean of Opportunity: Plant-based and clean seafood for sustainable oceans without sacrifice,” from The Good Food Institute.
Image Credit: Tono Balaguer / Shutterstock.com