
#433655 First-Ever Grad Program in Space Mining ...

Maybe they could call it the School of Space Rock: A new program being offered at the Colorado School of Mines (CSM) will educate post-graduate students on the nuts and bolts of extracting and using valuable materials such as rare metals and frozen water from space rocks like asteroids or the moon.

Officially called Space Resources, the graduate-level program is reputedly the first of its kind in the world to offer a course in the emerging field of space mining. Heading the program is Angel Abbud-Madrid, director of the Center for Space Resources at Mines, a well-known engineering school located in Golden, Colorado, where Molson Coors taps Rocky Mountain spring water for its earthly brews.

The first semester for the new discipline began last month. While Abbud-Madrid didn’t immediately respond to an interview request, Singularity Hub did talk to Chris Lewicki, president and CEO of Planetary Resources, a space mining company whose founders include Peter Diamandis, Singularity University co-founder.

A former NASA engineer who worked on multiple Mars missions, Lewicki says the Space Resources program at CSM, with its multidisciplinary focus on science, economics, and policy, will help students be light years ahead of their peers in the nascent field of space mining.

“I think it’s very significant that they’ve started this program,” he said. “Having students with that kind of background exposure just allows them to be productive on day one instead of having to kind of fill in a lot of things for them.”

Who would be attracted to apply for such a program? There are many professionals who could be served by a post-baccalaureate certificate, master’s degree, or even Ph.D. in Space Resources, according to Lewicki. Certainly aerospace engineers and planetary scientists would be among the faces in the classroom.

“I think it’s [also] people who have an interest in what I would call maybe space robotics,” he said. Lewicki is referring not only to the classic example of robotic arms like the Canadarm2, which lends a hand to astronauts aboard the International Space Station, but other types of autonomous platforms.

One example might be Planetary Resources’ own Arkyd-6, a small, autonomous satellite called a CubeSat launched earlier this year to test different technologies that might be used for deep-space exploration of resources. The proof-of-concept was as much a test of the technology—such as the first space-based use of a mid-wave infrared imager to detect water resources—as of the ability to operate in space on a shoestring budget.

“We really proved that doing one of these billion-dollar science missions to deep space can be done for a lot less if you have a very focused goal, and if you kind of cut a lot of corners and then put some commercial approaches into those things,” Lewicki said.

A Trillion-Dollar Industry
Why space mining? There are at least a trillion reasons.

Astrophysicist Neil deGrasse Tyson famously said that the first trillionaire will be the “person who exploits the natural resources on asteroids.” That’s because asteroids—rocky remnants from the formation of our solar system more than four billion years ago—harbor precious metals, ranging from platinum and gold to iron and nickel.

For instance, one future target of exploration by NASA—an asteroid dubbed 16 Psyche, orbiting the sun in the asteroid belt between Mars and Jupiter—is worth an estimated $10,000 quadrillion. It’s a number so mind-bogglingly big that it would crash the global economy, if someone ever figured out how to tow it back to Earth without literally crashing it into the planet.

Living Off the Land
Space mining isn’t just about getting rich. Many argue that humanity’s ability to extract resources in space, especially water that can be refined into rocket fuel, will be a key technology to extend our reach beyond near-Earth space.

The presence of frozen water around the frigid polar regions of the moon, for example, represents an invaluable resource for powering future deep-space missions. Splitting H2O into its component elements of hydrogen and oxygen would provide a nearly inexhaustible source of rocket fuel. Today, it costs $10,000 to put a pound of payload in Earth orbit, according to NASA.
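
The payoff of mining ice is easy to sanity-check with a little stoichiometry. Here is a back-of-envelope sketch in Python (molar masses are standard; the function name is just for illustration):

```python
# Mass yield of electrolyzing lunar ice: 2 H2O -> 2 H2 + O2.
# Molar masses in g/mol.
M_H2O, M_H2, M_O2 = 18.015, 2.016, 31.998

def propellant_from_water(water_kg):
    """Split a mass of water into hydrogen fuel and oxygen oxidizer
    (mass is conserved, so the split follows the molar masses)."""
    h2 = water_kg * (2 * M_H2) / (2 * M_H2O)  # ~11.2% of the mass
    o2 = water_kg * M_O2 / (2 * M_H2O)        # ~88.8% of the mass
    return h2, o2

h2, o2 = propellant_from_water(1000.0)  # one tonne of mined ice
print(f"{h2:.0f} kg H2 + {o2:.0f} kg O2")  # 112 kg H2 + 888 kg O2
```

That heavily oxygen-weighted split is conveniently in the same ballpark as the oxidizer-to-fuel ratios hydrogen-oxygen rocket engines actually burn, so nearly all of the mined mass ends up as usable propellant.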

Until more advanced rocket technology is developed, the moon looks to be the best bet for serving as the launching pad to Mars and beyond.

Moon Versus Asteroid
However, Lewicki notes that despite the moon’s proximity and our more intimate familiarity with its pockmarked surface, that doesn’t mean a lunar mission to extract resources is any easier than a multi-year journey to a fast-moving asteroid.

For one thing, fighting gravity to and from the moon is no easy feat, as the moon has a significantly stronger gravitational field than an asteroid. Another challenge is that the frozen water is located in permanently shadowed lunar craters, meaning space miners can’t rely on solar-powered equipment and will instead need some other source of energy.
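
The gravity gap is easy to quantify with the escape-velocity formula v = sqrt(2GM/r). A quick Python sketch, using published (approximate) mass and radius figures for the moon and the near-Earth asteroid Bennu:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(mass_kg, radius_m):
    """Escape velocity v = sqrt(2GM/r) for a roughly spherical body."""
    return math.sqrt(2 * G * mass_kg / radius_m)

moon = escape_velocity(7.342e22, 1.737e6)  # the Moon
bennu = escape_velocity(7.33e10, 245.0)    # asteroid Bennu
print(f"Moon:  {moon:,.0f} m/s")   # roughly 2.4 km/s
print(f"Bennu: {bennu:.2f} m/s")   # about 20 cm/s
```

Leaving the moon means fighting a gravity well four orders of magnitude deeper than a small asteroid's, which is why every kilogram of lunar mining hardware, and of mined product, carries a steep propellant bill.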

And then there’s the fact that moon craters might just be the coldest places in the solar system. NASA’s Lunar Reconnaissance Orbiter found temperatures plummeted as low as 26 Kelvin, or more than minus 400 degrees Fahrenheit. In comparison, the coldest temperature ever recorded on Earth—at Vostok Station in Antarctica—was about minus 128 degrees F.

“We don’t operate machines in that kind of thermal environment,” Lewicki said of the extreme temperatures detected in the permanent dark regions of the moon. “Antarctica would be a balmy desert island compared to a lunar polar crater.”
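
For reference, the unit conversion behind that "more than minus 400 degrees" figure:

```python
# Convert the Lunar Reconnaissance Orbiter's 26 K reading to Fahrenheit.
def kelvin_to_fahrenheit(k):
    return k * 9 / 5 - 459.67

temp_f = kelvin_to_fahrenheit(26)
print(f"{temp_f:.0f} °F")  # -413 °F
```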

Of course, no one knows quite what awaits us in the asteroid belt. Answers may soon be forthcoming. Last week, the Japan Aerospace Exploration Agency landed two small, hopping rovers on an asteroid called Ryugu. Meanwhile, NASA hopes to retrieve a sample from the near-Earth asteroid Bennu when its OSIRIS-REx mission makes contact at the end of this year.

No Bucks, No Buck Rogers
Visionaries like Elon Musk and Jeff Bezos talk about colonies on Mars, with millions of people living and working in space. The reality is that there’s probably a reason Buck Rogers was set in the 25th century: It’s going to take a lot of money and a lot of time to realize those sci-fi visions.

Or, as Lewicki put it: “No bucks, no Buck Rogers.”

The cost of operating in outer space can be prohibitive. Planetary Resources itself is grappling with raising additional funding, with reports this year about layoffs and even a possible auction of company assets.

Still, Lewicki is confident that, despite the economic and technical challenges, humanity will someday outdo even the boldest dreamers, building things—skyscrapers on the moon, routine interplanetary trips to Mars—that make today’s engineering marvels look modest.

“What we’re doing is going to be very hard, very painful, and almost certainly worth it,” he said. “Who would have thought that there would be a job for a space miner that you could go to school for, even just five or ten years ago? Things move quickly.”

Image Credit: M-SUR / Shutterstock.com

Posted in Human Robots

#433634 This Robotic Skin Makes Inanimate ...

In Goethe’s poem “The Sorcerer’s Apprentice,” made world-famous by its adaptation in Disney’s Fantasia, a lazy apprentice, left to fetch water, uses magic to bewitch a broom into performing his chores for him. Now, new research from Yale has opened up the possibility of being able to animate—and automate—household objects by fitting them with a robotic skin.

Yale’s Soft Robotics lab, the Faboratory, is led by Professor Rebecca Kramer-Bottiglio, and has long investigated the possibilities associated with new kinds of manufacturing. While the typical image of a robot is hard, cold steel and rigid movements, soft robotics aims to create something more flexible and versatile. After all, the human body is made up of soft, flexible surfaces, and the world is designed for us. Soft, deformable robots could change shape to adapt to different tasks.

When designing a robot, key components are the robot’s sensors, which allow it to perceive its environment, and its actuators, the electrical or pneumatic motors that allow the robot to move and interact with its environment.

Consider your hand, which has temperature and pressure sensors, but also muscles as actuators. The omni-skins, as the Science Robotics paper dubs them, combine sensors and actuators, embedding them into an elastic sheet. The robotic skins are moved by pneumatic actuators or shape-memory alloy that springs back into shape. If this is then wrapped around a soft, deformable object, moving the skin with the actuators can allow the object to crawl along a surface.
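
The sensor-plus-actuator sheet can be pictured as a simple control loop. The sketch below is hypothetical Python (none of these names come from the Yale work): contracting segments one after another sends a wave down the sheet, which is the basic idea behind making a wrapped object crawl.

```python
from dataclasses import dataclass

@dataclass
class SkinSegment:
    contraction: float = 0.0  # actuator command: 0 = relaxed, 1 = contracted
    strain: float = 0.0       # reading from the co-located strain sensor

class RoboticSkin:
    """A sheet of paired sensor/actuator segments wrapped around an object."""
    def __init__(self, n_segments):
        self.segments = [SkinSegment() for _ in range(n_segments)]

    def step(self, t):
        """One control tick: contract segments in sequence, sending a
        peristaltic wave down the sheet (a crawling gait)."""
        active = t % len(self.segments)
        for i, seg in enumerate(self.segments):
            seg.contraction = 1.0 if i == active else 0.0
            seg.strain = seg.contraction  # idealized: sensor tracks actuator

skin = RoboticSkin(n_segments=4)
history = []
for t in range(8):
    skin.step(t)
    history.append([seg.contraction for seg in skin.segments])
# After 4 ticks the contraction wave wraps around and the pattern repeats.
```

In the real skins the strain and pressure sensors matter precisely because the wrapped object deforms unpredictably; this open-loop toy ignores that, which is the control problem discussed later in the article.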

The key to the design here is flexibility: rather than adding chips, sensors, and motors into every household object to turn them into individual automatons, the same skin can be used for many purposes. “We can take the skins and wrap them around one object to perform a task—locomotion, for example—and then take them off and put them on a different object to perform a different task, such as grasping and moving an object,” said Kramer-Bottiglio. “We can then take those same skins off that object and put them on a shirt to make an active wearable device.”

The task is then to dream up applications for the omni-skins. Initially, you might imagine ordering a stuffed toy to fetch the remote control for you, or animating a sponge to wipe down kitchen surfaces—but this is just the beginning. The scientists attached the skins to a soft tube and camera, creating a worm-like robot that could compress itself and crawl into small spaces for rescue missions. The same skins could then be worn by a person to sense their posture. One could easily imagine this being adapted into a soft exoskeleton for medical or industrial purposes: for example, helping with rehabilitation after an accident or injury.

The initial motivation for creating the robots came from an environment where space and weight are at a premium and humans are forced to improvise with whatever’s at hand: outer space. Kramer-Bottiglio originally began the work after NASA put out a call for soft robotic systems for use by astronauts. Instead of wasting valuable rocket payload by sending up a heavy metal droid like ATLAS to fetch items or perform repairs, soft robotic skins with modular sensors could be adapted for a range of different uses spontaneously.

By reassembling components in the soft robotic skin, a crumpled ball of paper could provide the chassis for a robot that performs repairs on the spaceship, or explores the lunar surface. The dynamic compression provided by the robotic skin could be used for g-suits to protect astronauts when they rapidly accelerate or decelerate.

“One of the main things I considered was the importance of multi-functionality, especially for deep space exploration where the environment is unpredictable. The question is: How do you prepare for the unknown unknowns? … Given the design-on-the-fly nature of this approach, it’s unlikely that a robot created using robotic skins will perform any one task optimally,” Kramer-Bottiglio said. “However, the goal is not optimization, but rather diversity of applications.”

There are still problems to resolve. Many of the videos of the skins show that they still rely on an external power supply. Creating new, smaller batteries that can power wearable devices has been a focus of cutting-edge materials science research for some time. Much of the lab’s expertise is in creating flexible, stretchable electronics that can be deformed by the actuators without breaking the circuitry. In the future, the team hopes to work on streamlining the production process; if the components could be 3D printed, then the skins could be created when needed.

In addition, robotic hardware that’s capable of performing an impressive range of precise motions is quite an advanced technology. The software to control those robots, and enable them to perform a variety of tasks, is quite another challenge. With soft robots, it can become even more complex to design that control software, because the body itself can change shape and deform as the robot moves. The same set of programmed motions, then, can produce different results depending on the environment.

“Let’s say I have a soft robot with four legs that crawls along the ground, and I make it walk up a hard slope,” Dr. David Howard, who works on robotics at CSIRO in Australia, explained to ABC.

“If I make that slope out of gravel and I give it the same control commands, the actual body is going to deform in a different way, and I’m not necessarily going to know what that is.”

Despite these and other challenges, research like that at the Faboratory still hopes to redefine how we think of robots and robotics. Instead of a robot that imitates a human and manipulates objects, the objects themselves will become programmable matter, capable of moving autonomously and carrying out a range of tasks. Futurists speculate about a world where most objects are automated to some degree and can assemble and repair themselves, or are even built entirely of tiny robots.

The tale of the Sorcerer’s Apprentice was first written in 1797, at the dawn of the industrial revolution, over a century before the word “robot” was even coined. Yet more and more roboticists aim to prove Arthur C. Clarke’s maxim: any sufficiently advanced technology is indistinguishable from magic.

Image Credit: Joran Booth, The Faboratory


#433620 Instilling the Best of Human Values in ...

Now that the era of artificial intelligence is unquestionably upon us, it behooves us to think and work harder to ensure that the AIs we create embody positive human values.

Science fiction is full of AIs that manifest the dark side of humanity, or are indifferent to humans altogether. Such possibilities cannot be ruled out, but nor is there any logical or empirical reason to consider them highly likely. I am among a large group of AI experts who see a strong potential for profoundly positive outcomes in the AI revolution currently underway.

We are facing a future with great uncertainty and tremendous promise, and the best we can do is to confront it with a combination of heart and mind, of common sense and rigorous science. In the realm of AI, what this means is, we need to do our best to guide the AI minds we are creating to embody the values we cherish: love, compassion, creativity, and respect.

The quest for beneficial AI has many dimensions, including its potential to reduce material scarcity and to help unlock the human capacity for love and compassion.

Reducing Scarcity
A large percentage of difficult issues in human society, many of which spill over into the AI domain, would be palliated significantly if material scarcity became less of a problem. Fortunately, AI has great potential to help here. AI is already increasing efficiency in nearly every industry.

In the next few decades, as nanotech and 3D printing continue to advance, AI-driven design will become a larger factor in the economy. Radical new tools like artificial enzymes built using Christian Schafmeister’s spiroligomer molecules, and designed using quantum physics-savvy AIs, will enable the creation of new materials and medicines.

For amazing advances like the intersection of AI and nanotech to lead toward broadly positive outcomes, however, the economic and political aspects of the AI industry may have to shift from the current status quo.

Currently, most AI development occurs under the aegis of military organizations or large corporations oriented heavily toward advertising and marketing. Put crudely, an awful lot of AI today is about “spying, brainwashing, or killing.” This is not really the ideal situation if we want our first true artificial general intelligences to be open-minded, warm-hearted, and beneficial.

Also, as the bulk of AI development now occurs in large for-profit organizations bound by law to pursue the maximization of shareholder value, we face a situation where AI tends to exacerbate global wealth inequality and class divisions. This has the potential to lead to various civilization-scale failure modes involving the intersection of geopolitics, AI, cyberterrorism, and so forth. Part of my motivation for founding the decentralized AI project SingularityNET was to create an alternative mode of dissemination and utilization of both narrow AI and AGI—one that operates in a self-organizing way, outside of the direct grip of conventional corporate and governmental structures.

In the end, though, I worry that radical material abundance and novel political and economic structures may fail to create a positive future, unless they are coupled with advances in consciousness and compassion. AGIs have the potential to be massively more ethical and compassionate than humans. But still, the odds of getting deeply beneficial AGIs seem higher if the humans creating them are fuller of compassion and positive consciousness—and can effectively pass these values on.

Transmitting Human Values
Brain-computer interfacing is another critical aspect of the quest for creating more positive AIs and more positive humans. As Elon Musk has put it, “If you can’t beat ’em, join ’em.” Joining is more fun than beating anyway. What better way to infuse AIs with human values than to connect them directly to human brains, and let them learn directly from the source (while providing humans with valuable enhancements)?

Millions of people recently heard Elon Musk discuss AI and BCI on the Joe Rogan podcast. Musk’s embrace of brain-computer interfacing is laudable, but he tends to dodge some of the tough issues—for instance, he does not emphasize the trade-off cyborgs will face between retaining human-ness and maximizing intelligence, joy, and creativity. To make this trade-off effectively, the AI portion of the cyborg will need to have a deep sense of human values.

Musk calls humanity the “biological boot loader” for AGI, but to me this colorful metaphor misses a key point—that we can seed the AGI we create with our values as an initial condition. This is one reason why it’s important that the first really powerful AGIs are created by decentralized networks, and not conventional corporate or military organizations. The decentralized software/hardware ecosystem, for all its quirks and flaws, has more potential to lead to human-computer cybernetic collective minds that are reasonable and benevolent.

Algorithmic Love
BCI is still in its infancy, but a more immediate way of connecting people with AIs to infuse both with greater love and compassion is to leverage humanoid robotics technology. Toward this end, I conceived a project called Loving AI, focused on using highly expressive humanoid robots like the Hanson robot Sophia to lead people through meditations and other exercises oriented toward unlocking the human potential for love and compassion. My goals here were to explore the potential of AI and robots to have a positive impact on human consciousness, and to use this application to study and improve the OpenCog and SingularityNET tools used to control Sophia in these interactions.

The Loving AI project has now run two small sets of human trials, both with exciting and positive results. These have been small—dozens rather than hundreds of people—but have definitively proven the point. Put a person in a quiet room with a humanoid robot that can look them in the eye, mirror their facial expressions, recognize some of their emotions, and lead them through simple meditation, listening, and consciousness-oriented exercises…and quite a lot of the time, the result is a more relaxed person who has entered into a shifted state of consciousness, at least for a period of time.

In a certain percentage of cases, the interaction with the robot consciousness guide triggered a dramatic change of consciousness in the human subject—a deep meditative trance state, for instance. In most cases, the result was not so extreme, but statistically the positive effect was quite significant across all cases. Furthermore, a similar effect was found using an avatar simulation of the robot’s face on a tablet screen (together with a webcam for facial expression mirroring and recognition), but not with a purely auditory interaction.

The Loving AI experiments are not only about AI; they are about human-robot and human-avatar interaction, with AI as one significant aspect. The facial interaction with the robot or avatar is pushing “biological buttons” that trigger emotional reactions and prime the mind for changes of consciousness. However, this sort of body-mind interaction is arguably critical to human values and what it means to be human; it’s an important thing for robots and AIs to “get.”

Halting or pausing the advance of AI is not a viable possibility at this stage. Despite the risks, the potential economic and political benefits involved are clear and massive. The convergence of narrow AI toward AGI is also a near inevitability, because there are so many important applications where greater generality of intelligence will lead to greater practical functionality. The challenge is to make the outcome of this great civilization-level adventure as positive as possible.

Image Credit: Anton Gvozdikov / Shutterstock.com


#433506 MIT’s New Robot Taught Itself to Pick ...

Back in 2016, somewhere in a Google-owned warehouse, more than a dozen robotic arms spent hours on end quietly grasping objects of various shapes and sizes, teaching themselves how to pick up and hold the items appropriately—mimicking the way a baby gradually learns to use its hands.

Now, scientists from MIT have made a new breakthrough in machine learning: their new system can not only teach itself to see and identify objects, but also understand how best to manipulate them.

This means that, armed with the new machine learning system, dubbed “dense object nets” (DON), the robot would be capable of picking up an object that it’s never seen before, or in an unfamiliar orientation, without resorting to trial and error—exactly as a human would.

The deceptively simple ability to dexterously manipulate objects with our hands is a huge part of why humans are the dominant species on the planet. We take it for granted. Hardware innovations like the Shadow Dexterous Hand have enabled robots to softly grip and manipulate delicate objects for many years, but the software required to control these precision-engineered machines in a range of circumstances has proved harder to develop.

This was not for want of trying. The Amazon Robotics Challenge offers millions of dollars in prizes (and potentially far more in contracts, as Amazon’s $775 million acquisition of Kiva Systems shows) for the best dexterous robot able to pick and package items in their warehouses. The lucrative dream of a fully-automated delivery system is missing this crucial ability.

Meanwhile, the RoboCup@Home challenge—an offshoot of the popular RoboCup tournament for soccer-playing robots—aims to make everyone’s dream of having a robot butler a reality. The competition involves teams drilling their robots through simple household tasks that require social interaction or object manipulation, like helping to carry the shopping, sorting items onto a shelf, or guiding tourists around a museum.

Yet all of these endeavors have proved difficult; the tasks often have to be simplified to enable the robot to complete them at all. New or unexpected elements, such as those encountered in real life, more often than not throw the system entirely. Programming the robot’s every move in explicit detail is not a scalable solution: this can work in the highly-controlled world of the assembly line, but not in everyday life.

Computer vision is improving all the time. Neural networks, including those you train every time you prove that you’re not a robot with CAPTCHA, are getting better at sorting objects into categories, and identifying them based on sparse or incomplete data, such as when they are occluded, or in different lighting.

But many of these systems require enormous amounts of input data, which is impractical, slow to generate, and often needs to be laboriously categorized by humans. There are entirely new jobs that require people to label, categorize, and sift large bodies of data ready for supervised machine learning. This can make machine learning undemocratic. If you’re Google, you can make thousands of unwitting volunteers label your images for you with CAPTCHA. If you’re IBM, you can hire people to manually label that data. If you’re an individual or startup trying something new, however, you will struggle to access the vast troves of labeled data available to the bigger players.

This is why new systems that can potentially train themselves over time or that allow robots to deal with situations they’ve never seen before without mountains of labelled data are a holy grail in artificial intelligence. The work done by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is part of a new wave of “self-supervised” machine learning systems—little of the data used was labeled by humans.

The robot first inspects the new object from multiple angles, building up a 3D picture of the object with its own coordinate system. This then allows the robotic arm to identify a particular feature on the object—such as a handle, or the tongue of a shoe—from various different angles, based on its relative distance to other grid points.

This is the real innovation: the new means of representing objects to grasp as mapped-out 3D objects, with grid points and subsections of their own. Rather than using a computer vision algorithm to identify a door handle, and then activating a door handle grasping subroutine, the DON system treats all objects by making these spatial maps before classifying or manipulating them, enabling it to deal with a greater range of objects than in other approaches.
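
The core of that representation can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the authors' code: the trained network that produces a descriptor map is omitted, and a random array stands in for its output. The matching step—re-finding a feature as the pixel with the nearest descriptor—looks like this:

```python
import numpy as np

def find_feature(descriptor_map, reference):
    """Return the (row, col) of the pixel whose descriptor lies closest
    (in Euclidean distance) to `reference`.
    descriptor_map: (H, W, D) array; reference: (D,) vector."""
    dists = np.linalg.norm(descriptor_map - reference, axis=-1)  # (H, W)
    return np.unravel_index(np.argmin(dists), dists.shape)

# Fake data standing in for a network's output on a new camera view.
rng = np.random.default_rng(0)
desc = rng.normal(size=(64, 64, 16))  # a 16-dimensional descriptor per pixel
target = (40, 12)                     # pretend the mug handle sits here
ref = desc[target]                    # descriptor stored from an earlier view
loc = find_feature(desc, ref)         # recovers (40, 12)
```

Because the network is trained to give the same physical point a similar descriptor across viewpoints and deformations, a descriptor saved once ("the handle") can be re-located in any later image with exactly this nearest-neighbor lookup.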

“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” said PhD student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow student Pete Florence, alongside MIT professor Russ Tedrake. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.”

Class-specific descriptors, which can be applied to the object features, can allow the robot arm to identify a mug, find the handle, and pick the mug up appropriately. Object-specific descriptors allow the robot arm to select a particular mug from a group of similar items. I’m already dreaming of a robot butler reliably picking my favourite mug when it serves me coffee in the morning.

Google’s robot arm-y was an attempt to develop a general grasping algorithm: one that could identify, categorize, and appropriately grip as many items as possible. This requires a great deal of training time and data, which is why Google parallelized their project by having 14 robot arms feed data into a single neural network brain: even then, the algorithm may fail with highly specific tasks. Specialist grasping algorithms might require less training if they’re limited to specific objects, but then your software is useless for general tasks.

As the roboticists noted, their system, with its ability to identify parts of an object rather than just a single object, is better suited to specific tasks, such as “grasp the racquet by the handle,” than Amazon Robotics Challenge robots, which identify whole objects by segmenting an image.

This work is small-scale at present. It has been tested with a few classes of objects, including shoes, hats, and mugs. Yet the use of these dense object nets as a way for robots to represent and manipulate new objects may well be another step towards the ultimate goal of generalized automation: a robot capable of performing every task a person can. If that point is reached, the question that will remain is how to cope with being obsolete.

Image Credit: Tom Buehler/CSAIL


#433474 How to Feed Global Demand for ...

“You really can’t justify tuna in Chicago as a source of sustenance.” That’s according to Dr. Sylvia Earle, a National Geographic Society Explorer who was the first female chief scientist at NOAA. She came to the Good Food Institute’s Good Food Conference to deliver a call to action around global food security, agriculture, environmental protection, and the future of consumer choice.

It seems like all options should be on the table to feed an exploding population threatened by climate change. But Dr. Earle, who is faculty at Singularity University, drew a sharp distinction between seafood for sustenance versus seafood as a choice. “There is this widespread claim that we must take large numbers of wildlife from the sea in order to have food security.”

A few minutes later, Dr. Earle directly addressed those of us in the audience. “We know the value of a dead fish,” she said. That’s market price. “But what is the value of a live fish in the ocean?”

That’s when my mind blew open. What is the value—or put another way, the cost—of using the ocean as a major source of protein for humans? How do you put a number on that? Are we talking about dollars and cents, or about something far larger?

Dr. Liz Specht of the Good Food Institute drew the audience’s attention to a strange imbalance. Currently, about half of the world’s yearly seafood supply comes from aquaculture; the other half is wild caught. It’s hard to imagine half of your meat coming directly from the forests and the plains, isn’t it? And yet half of the world’s seafood comes from direct harvesting of the oceans, by way of massive overfishing, a terrible toll from bycatch, a widespread lack of regulation and enforcement, and even human rights violations such as slavery.

The search for solutions is on, from both within the fishing industry and from external agencies such as governments and philanthropists. Could there be another way?

Makers of plant-based seafood and clean seafood think they know how to feed the global demand for seafood without harming the ocean. These companies are part of a larger movement harnessing technology to reduce our reliance on wild and domesticated animals—and all the environmental, economic, and ethical issues that come with it.

Producers of plant-based seafood (20 or so currently) are working to capture the taste, texture, and nutrition of conventional seafood without the limitations of geography or the health of a local marine population. Like with plant-based meat, makers of plant-based seafood are harnessing food science and advances in chemistry, biology, and engineering to make great food. The industry’s strategy? Start with what the consumer wants, and then figure out how to achieve that great taste through technology.

So how does plant-based seafood taste? Pretty good, as it turns out. (The biggest benefit of a food-oriented conference is that your mouth is always full!)

I sampled “tuna” salad made from Good Catch Food’s fish-free tuna, which is sourced from legumes; the texture was nearly indistinguishable from that of flaked albacore tuna, and there was no lingering fishy taste to overpower my next bite. In a blind taste test, I probably wouldn’t have known that I was eating a plant-based seafood alternative. Next I reached for Ocean Hugger Food’s Ahimi, a tomato-based alternative to raw tuna. I adore Hawaiian poke, so I was pleasantly surprised when my Ahimi-based poke captured the bite of ahi tuna. It wasn’t quite as delightfully fatty as raw tuna, but with wild tuna populations struggling to recover from a 97% decline in numbers from 40 years ago, Ahimi is a giant stride in the right direction.

These plant-based alternatives aren’t the only game in town, however.

The clean meat industry, which has also been called “cultured meat” or “cellular agriculture,” isn’t seeking to lure consumers away from animal protein. Instead, cells are sampled from live animals and grown in bioreactors—meaning that no animal is slaughtered to produce real meat.

Clean seafood is poised to piggyback off platforms developed for clean meat; growing fish cells in the lab should rely on the same processes as growing meat cells. I know of four companies currently focusing on seafood (Finless Foods, Wild Type, BlueNalu, and Seafuture Sustainable Biotech), and a few more are likely to emerge from stealth mode soon.

Importantly, there’s likely not much difference between growing clean seafood from the top or the bottom of the food chain. Tuna, for example, are top predators that must grow for at least 10 years before they’re suitable as food. Each year, a tuna consumes thousands of pounds of other fish, shellfish, and plankton. That “long tail of groceries,” said Dr. Earle, “is a pretty expensive choice.” Excitingly, clean tuna would “level the trophic playing field,” as Dr. Specht pointed out.
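Dr. Specht’s point about leveling the trophic playing field can be made concrete with a back-of-the-envelope calculation. The ~10% energy-transfer figure below is a standard ecological rule of thumb, not a number from the article, and the “3 trophic steps” for tuna is an illustrative assumption:

```python
# Rough sketch of the "long tail of groceries" behind a top predator:
# with roughly 10% of biomass transferred between trophic levels (a
# common ecological rule of thumb), each kilogram of tuna implies
# orders of magnitude more biomass lower in the food chain.

TRANSFER_EFFICIENCY = 0.10  # assumed ~10% per trophic level

def supporting_biomass(kg_product: float, trophic_steps: int) -> float:
    """Biomass (kg) needed at the base of the chain to yield kg_product."""
    return kg_product / (TRANSFER_EFFICIENCY ** trophic_steps)

# Tuna sit several steps above primary producers; cultured cells fed
# directly on plant-derived inputs skip those steps entirely.
print(supporting_biomass(1, 3))  # roughly 1,000 kg of plankton per kg of tuna
print(supporting_biomass(1, 0))  # ~1 kg of inputs per kg grown directly
```

Under these assumptions, growing tuna cells directly is about a thousand times more efficient in raw biomass than growing a wild tuna, which is the sense in which clean seafood flattens the food chain.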

All this is only the beginning of what might be possible.

Combining synthetic biology with clean meat and seafood means that future products could be personalized for individual taste preferences or health needs, by reprogramming the DNA of the cells in the lab. Industries such as bioremediation and biofuels likely have a lot to teach us about sourcing new ingredients and flavors from algae and marine plants. By harnessing rapid advances in automation, robotics, sensors, machine vision, and other big-data analytics, the manufacturing and supply chains for clean seafood could be remarkably safe and robust. Clean seafood would be just that: clean, without pathogens, parasites, or the plastic threatening to fill our oceans, meaning that you could enjoy it raw.

What about price? Dr. Mark Post, a pioneer in clean meat who is also faculty at Singularity University, estimated that 80% of clean-meat production costs come from the expensive medium in which cells are grown—and some ingredients in the medium are themselves sourced from animals, which misses the point of clean meat. Plus, to grow a whole cut of food, like a fish fillet, the cells need to be coaxed into a complex 3D structure with various cell types like muscle cells and fat cells. These two technical challenges must be solved before clean meat and seafood give consumers the experience they want, at the price they want.
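Dr. Post’s 80% figure explains why so much effort targets the growth medium; a two-line cost model shows the leverage. The 10x medium-cost reduction below is a hypothetical scenario, not a figure from the article:

```python
# Sketch of why the growth medium dominates clean-meat economics,
# using the 80% cost share Dr. Post cites.

MEDIUM_SHARE = 0.80  # assumed fraction of production cost from the medium

def relative_cost(medium_cost_factor: float) -> float:
    """Total production cost relative to today if the medium's cost
    is scaled by medium_cost_factor (other costs held constant)."""
    return MEDIUM_SHARE * medium_cost_factor + (1 - MEDIUM_SHARE)

print(relative_cost(0.1))  # a 10x cheaper medium leaves ~28% of today's cost
```

Even a tenfold cheaper medium leaves the other fifth of costs untouched, which is why the 3D-structure problem matters as much as the medium for reaching consumer price points.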

In this respect clean seafood has an unusual edge. Most of what we know about growing animal cells in the lab comes from the research and biomedical industries (from tissue engineering, for example)—but growing cells to replace an organ has different constraints than growing cells for food. The link between clean seafood and biomedicine is less direct, empowering innovators to throw out dogma and find novel reagents, protocols, and equipment to grow seafood that captures the tastes, textures, smells, and overall experience of dining by the ocean.

Asked to predict when we’ll be seeing clean seafood in the grocery store, Lou Cooperhouse, the CEO of BlueNalu, explained that the challenges aren’t only in the lab: marketing, sales, distribution, and communication with consumers are all critical. As Niya Gupta, the founder of Fork & Goode, said, “The question isn’t ‘can we do it’, but ‘can we sell it’?”

The good news is that the clean meat and seafood industry is highly collaborative; there are at least two dozen companies in the space, and they’re all talking to each other. “This is an ecosystem,” said Dr. Uma Valeti, the co-founder of Memphis Meats. “We’re not competing with each other.” It will likely be at least a decade before science, business, and regulation enable clean meat and seafood to routinely appear on restaurant menus, let alone market shelves.

Until then, think carefully about your food choices. Meditate on Dr. Earle’s question: “What is the real cost of that piece of halibut?” Or chew on this from Dr. Ricardo San Martin, of the Sutardja Center at the University of California, Berkeley: “Food is a system of meanings, not an object.” What are you saying when you choose your food, about your priorities and your values and how you want the future to look? Do you think about animal welfare? Most ethical regulations don’t extend to marine life, and if you don’t think that ocean creatures feel pain, consider the lobster.

Seafood is largely an acquired taste, since most of us don’t live near the water. Imagine a future in which children grow up loving the taste of delicious seafood but without hurting a living animal, the ocean, or the global environment.

Do more than imagine. As Dr. Earle urged us, “Convince the public at large that this is a really cool idea.”

Widely available: Gardein, Sophie’s Kitchen, Quorn, Vegetarian Plus, Heritage, Loma Linda, The Vegetarian Butcher, VBites

Medium availability: Ahimi (Ocean Hugger), Cedar Lake, SoFine Foods, Akua, Hungry Planet, Heritage Health Food, May Wah

Emerging: New Wave Foods, To-funa Fish, Seamore, Good Catch, Odontella, Terramino Foods

Table based on Figure 5 of the report “An Ocean of Opportunity: Plant-based and clean seafood for sustainable oceans without sacrifice,” from The Good Food Institute.

Image Credit: Tono Balaguer / Shutterstock.com
