Tag Archives: rapid

#433474 How to Feed Global Demand for ...

“You really can’t justify tuna in Chicago as a source of sustenance.” That’s according to Dr. Sylvia Earle, a National Geographic Society Explorer who was the first female chief scientist at NOAA. She came to the Good Food Institute’s Good Food Conference to deliver a call to action around global food security, agriculture, environmental protection, and the future of consumer choice.

It seems like all options should be on the table to feed an exploding population threatened by climate change. But Dr. Earle, who is faculty at Singularity University, drew a sharp distinction between seafood for sustenance versus seafood as a choice. “There is this widespread claim that we must take large numbers of wildlife from the sea in order to have food security.”

A few minutes later, Dr. Earle directly addressed those of us in the audience. “We know the value of a dead fish,” she said. That’s market price. “But what is the value of a live fish in the ocean?”

That’s when my mind blew open. What is the value—or put another way, the cost—of using the ocean as a major source of protein for humans? How do you put a number on that? Are we talking about dollars and cents, or about something far larger?

Dr. Liz Specht of the Good Food Institute drew the audience’s attention to a strange imbalance. Currently, about half of the world’s yearly seafood supply comes from aquaculture; the other half is wild caught. It’s hard to imagine half of your meat coming directly from the forests and the plains, isn’t it? And yet half of the world’s seafood comes from direct harvesting of the oceans, by way of massive overfishing, a terrible toll from bycatch, a widespread lack of regulation and enforcement, and even human rights violations such as slavery.

The search for solutions is on, from both within the fishing industry and from external agencies such as governments and philanthropists. Could there be another way?

Makers of plant-based seafood and clean seafood think they know how to feed the global demand for seafood without harming the ocean. These companies are part of a larger movement harnessing technology to reduce our reliance on wild and domesticated animals—and all the environmental, economic, and ethical issues that come with it.

Producers of plant-based seafood (20 or so currently) are working to capture the taste, texture, and nutrition of conventional seafood without being limited by geography or the health of local marine populations. As with plant-based meat, makers of plant-based seafood are harnessing food science and advances in chemistry, biology, and engineering to make great food. The industry’s strategy? Start with what the consumer wants, and then figure out how to achieve that great taste through technology.

So how does plant-based seafood taste? Pretty good, as it turns out. (The biggest benefit of a food-oriented conference is that your mouth is always full!)

I sampled “tuna” salad made from Good Catch Food’s fish-free tuna, which is sourced from legumes; the texture was nearly indistinguishable from that of flaked albacore tuna, and there was no lingering fishy taste to overpower my next bite. In a blind taste test, I probably wouldn’t have known that I was eating a plant-based seafood alternative. Next I reached for Ocean Hugger Food’s Ahimi, a tomato-based alternative to raw tuna. I adore Hawaiian poke, so I was pleasantly surprised when my Ahimi-based poke captured the bite of ahi tuna. It wasn’t quite as delightfully fatty as raw tuna, but with wild tuna populations struggling to recover from a 97% decline in numbers from 40 years ago, Ahimi is a giant stride in the right direction.

These plant-based alternatives aren’t the only game in town, however.

The clean meat industry, which has also been called “cultured meat” or “cellular agriculture,” isn’t seeking to lure consumers away from animal protein. Instead, cells are sampled from live animals and grown in bioreactors—meaning that no animal is slaughtered to produce real meat.

Clean seafood is poised to piggyback off platforms developed for clean meat; growing fish cells in the lab should rely on the same processes as growing meat cells. I know of four companies currently focusing on seafood (Finless Foods, Wild Type, BlueNalu, and Seafuture Sustainable Biotech), and a few more are likely to emerge from stealth mode soon.

Importantly, there’s likely not much difference between growing clean seafood from the top or the bottom of the food chain. Tuna, for example, are top predators that must grow for at least 10 years before they’re suitable as food. Each year, a tuna consumes thousands of pounds of other fish, shellfish, and plankton. That “long tail of groceries,” said Dr. Earle, “is a pretty expensive choice.” Excitingly, clean tuna would “level the trophic playing field,” as Dr. Specht pointed out.

All this is only the beginning of what might be possible.

Combining synthetic biology with clean meat and seafood means that future products could be personalized for individual taste preferences or health needs, by reprogramming the DNA of the cells in the lab. Industries such as bioremediation and biofuels likely have a lot to teach us about sourcing new ingredients and flavors from algae and marine plants. By harnessing rapid advances in automation, robotics, sensors, machine vision, and other big-data analytics, the manufacturing and supply chains for clean seafood could be remarkably safe and robust. Clean seafood would be just that: clean, without pathogens, parasites, or the plastic threatening to fill our oceans, meaning that you could enjoy it raw.

What about price? Dr. Mark Post, a pioneer in clean meat who is also faculty at Singularity University, estimated that 80% of clean-meat production costs come from the expensive medium in which cells are grown—and some ingredients in the medium are themselves sourced from animals, which misses the point of clean meat. Plus, to grow a whole cut of food, like a fish fillet, the cells need to be coaxed into a complex 3D structure with various cell types like muscle cells and fat cells. These two technical challenges must be solved before clean meat and seafood give consumers the experience they want, at the price they want.

In this respect clean seafood has an unusual edge. Most of what we know about growing animal cells in the lab comes from the research and biomedical industries (from tissue engineering, for example)—but growing cells to replace an organ has different constraints than growing cells for food. The link between clean seafood and biomedicine is less direct, empowering innovators to throw out dogma and find novel reagents, protocols, and equipment to grow seafood that captures the tastes, textures, smells, and overall experience of dining by the ocean.

Asked to predict when we’ll be seeing clean seafood in the grocery store, Lou Cooperhouse, the CEO of BlueNalu, explained that the challenges aren’t only in the lab: marketing, sales, distribution, and communication with consumers are all critical. As Niya Gupta, the founder of Fork & Goode, said, “The question isn’t ‘can we do it’, but ‘can we sell it’?”

The good news is that the clean meat and seafood industry is highly collaborative; there are at least two dozen companies in the space, and they’re all talking to each other. “This is an ecosystem,” said Dr. Uma Valeti, the co-founder of Memphis Meats. “We’re not competing with each other.” It will likely be at least a decade before science, business, and regulation enable clean meat and seafood to routinely appear on restaurant menus, let alone market shelves.

Until then, think carefully about your food choices. Meditate on Dr. Earle’s question: “What is the real cost of that piece of halibut?” Or chew on this from Dr. Ricardo San Martin, of the Sutardja Center at the University of California, Berkeley: “Food is a system of meanings, not an object.” What are you saying when you choose your food, about your priorities and your values and how you want the future to look? Do you think about animal welfare? Most ethical regulations don’t extend to marine life, and if you don’t think that ocean creatures feel pain, consider the lobster.

Seafood is largely an acquired taste, since most of us don’t live near the water. Imagine a future in which children grow up loving the taste of delicious seafood but without hurting a living animal, the ocean, or the global environment.

Do more than imagine. As Dr. Earle urged us, “Convince the public at large that this is a really cool idea.”

Widely available: Gardein, Sophie’s Kitchen, Quorn, Vegetarian Plus, Heritage, Loma Linda, The Vegetarian Butcher, VBites
Medium availability: Ahimi (Ocean Hugger), Cedar Lake, SoFine Foods, Akua, Hungry Planet, Heritage Health Food, May Wah
Emerging: New Wave Foods, To-funa Fish, Seamore, Good Catch, Odontella, Terramino Foods

Table based on Figure 5 of the report “An Ocean of Opportunity: Plant-based and clean seafood for sustainable oceans without sacrifice,” from The Good Food Institute.

Image Credit: Tono Balaguer / Shutterstock.com

Posted in Human Robots

#433278 Outdated Evolution: Updating Our ...

What happens when evolution shapes an animal for tribes of 150 primitive individuals living in a chaotic jungle, and then suddenly that animal finds itself living with millions of others in an engineered metropolis, their pockets all bulging with devices of godlike power?

The result, it seems, is a modern era of tension where archaic forms of governance struggle to keep up with the technological advances of their citizenry, where governmental policies act like constraining bottlenecks rather than spearheads of progress.

Simply put, our governments have failed to adapt to disruptive technologies. And if we are to regain our stability moving forward into a future of even greater disruption, it’s imperative that we understand the issues that got us into this situation and what kind of solutions we can engineer to overcome our governmental weaknesses.

Hierarchy vs. Technological Decentralization
Many of the greatest issues our governments face today come from humanity’s biologically hardwired desire for centralized hierarchies. This innate proclivity toward building and navigating systems of status and rank was an evolutionary gift handed down to us by our ape ancestors, among whom each member of a community had a mental map of their social hierarchy. Their nervous systems behaved differently depending on their rank in this hierarchy, influencing their interactions in a way that ensured only the most competent ape would rise to the top to gain access to the best food and mates.

As humanity emerged and discovered the power of language, we continued this practice by ensuring that those at the top of the hierarchies, those with the greatest education and access to information, were the dominant decision-makers for our communities.

However, this kind of structured chain of power is only necessary if we’re operating in conditions of scarcity. But resources, including information, are no longer scarce.

It’s estimated that more than two-thirds of adults in the world now own a smartphone, giving the average citizen the same access to the world’s information as the leaders of our governments. And with global poverty falling from 35.5 percent to 10.9 percent over the last 25 years, our younger generations are growing up seeing automation and abundance as a likely default, where innovations like solar energy, lab-grown meat, and 3D printing are expected to become commonplace.

It’s awareness of this paradigm shift that has empowered the recent rise of decentralization. As information and access to resources become ubiquitous, there is noticeably less need for our inefficient and bureaucratic hierarchies.

For example, if blockchain can prove its feasibility for large-scale systems, it can be used to update and upgrade numerous applications to a decentralized model, including currency and voting. Such innovations would lower the risk of failing banks collapsing the economy like they did in 2008, as well as prevent corrupt politicians from using gerrymandering and long queues at polling stations to deter voter participation.
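The tamper-evidence that makes public ledgers appealing for applications like voting records is easy to sketch: each entry stores a hash of the one before it, so rewriting any past record invalidates everything that follows. The example below is a minimal, single-machine illustration of that property only; a real blockchain also needs consensus among many independent nodes, and the record names here are invented.

```python
import hashlib
import json

def make_block(record, prev_hash):
    """Create a block whose hash covers its record and the previous block's hash."""
    payload = {"record": record, "prev_hash": prev_hash}
    block = dict(payload)
    block["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return block

def chain_is_valid(chain):
    """Verify every link: stored hashes must match recomputed ones."""
    for prev, curr in zip(chain, chain[1:]):
        recomputed = hashlib.sha256(
            json.dumps({"record": curr["record"], "prev_hash": curr["prev_hash"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if curr["prev_hash"] != prev["hash"] or curr["hash"] != recomputed:
            return False
    return True

chain = [make_block("genesis", "0")]
for vote in ("ballot-1", "ballot-2"):
    chain.append(make_block(vote, chain[-1]["hash"]))

print(chain_is_valid(chain))     # True
chain[1]["record"] = "tampered"  # try to rewrite history...
print(chain_is_valid(chain))     # False -- the tampering is detectable
```

Because each hash depends on the previous one, an attacker would have to recompute every subsequent block, and in a distributed system, convince every other node to accept the rewrite.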

Of course, technology isn’t a magic wand that should be implemented carelessly. Facebook’s “move fast and break things” approach might have very possibly broken American democracy in 2016, as social media played on some of the worst tendencies humanity can operate on during an election: fear and hostility.

But if decentralized technology, like blockchain’s public ledgers, can continue to spread a sense of security and transparency throughout society, perhaps we can begin to quiet that paranoia and hyper-vigilance our brains evolved to cope with living as apes in dangerous jungles. By decentralizing our power structures, we take away the channels our outdated biological behaviors might use to enact social dominance and manipulation.

The peace of mind this creates helps to reestablish trust in our communities and in our governments. And with trust in the government increased, it’s likely we’ll see our next issue corrected.

From Business and Law to Science and Technology
A study found that 59 percent of US presidents, 68 percent of vice presidents, and 78 percent of secretaries of state were lawyers by education and occupation. That means more than half of the people in the most powerful positions in the American government were trained in a field dedicated to convincing others (judges) that their perspective is true, even when evidence is lacking.

And so the scientific method became less important than semantics to our leaders.

Similarly, of the 535 members of the American Congress, only 24 hold a PhD, and only two of those are in a STEM field. So far, it’s not getting better: Trump is the first president since WWII not to name a science advisor.

But if we can use technologies like blockchain to increase transparency, efficiency, and trust in the government, then the upcoming generations who understand decentralization, abundance, and exponential technologies might feel inspired enough to run for government positions. This helps solve that common problem where the smartest and most altruistic people tend to avoid government positions because they don’t want to play the semantic and deceitful game of politics.

By changing this narrative, our governments can begin to fill with techno-progressive individuals who actually understand the technologies that are rapidly reshaping our reality. And this influence of expertise is going to be crucial as our governments are forced to restructure and create new policies to accommodate the incoming disruption.

Clearing Regulations to Begin Safe Experimentation
As exponential technologies become more ubiquitous, we’re likely going to see young kids and garage tinkerers creating powerful AIs and altering genetics thanks to tools like CRISPR and free virtual reality tutorials.

This easy accessibility to such powerful technology means unexpected and rapid progress can occur almost overnight, quickly overwhelming our government’s regulatory systems.

Uber and Airbnb are two of the best examples of our governments’ inability to keep up with such technology; both companies achieved market dominance before regulators could even consider how to handle them. And even when a government has ruled against them, they often continue to operate, because people simply keep using the apps.

Luckily, this kind of disruption hasn’t yet posed a major existential threat. But this will change when we see companies begin developing cyborg body parts, brain-computer interfaces, nanobot health injectors, and at-home genetic engineering kits.

For this reason, it’s crucial that we have experts who understand how to update our regulations to be as flexible as is necessary to ensure we don’t create black market conditions like we’ve done with drugs. It’s better to have safe and monitored experimentation, rather than forcing individuals into seedy communities using unsafe products.

Survival of the Most Adaptable
If we hope to be an animal that survives our changing environment, we have to adapt. We cannot cling to the behaviors and systems formed thousands of years ago. We must instead acknowledge that we now exist in an ecosystem of disruptive technology, and we must evolve and update our governments if they’re going to be capable of navigating these transformative impacts.

Image Credit: mmatee / Shutterstock.com


#432549 Your Next Pilot Could Be Drone Software

Would you get on a plane that didn’t have a human pilot in the cockpit? Half of air travelers surveyed in 2017 said they would not, even if the ticket was cheaper. Modern pilots do such a good job that almost any air accident is big news, such as the Southwest engine disintegration on April 17.

But stories of pilot drunkenness, rants, fights and distraction, however rare, are reminders that pilots are only human. Not every plane can be flown by a disaster-averting pilot, like Southwest Capt. Tammie Jo Shults or Capt. Chesley “Sully” Sullenberger. But software could change that, equipping every plane with an extremely experienced guidance system that is always learning more.

In fact, on many flights, autopilot systems already control the plane for basically all of the flight. And software handles the most harrowing landings—when there is no visibility and the pilot can’t see anything to even know where he or she is. But human pilots are still on hand as backups.

A new generation of software pilots, developed for self-flying vehicles, or drones, will soon have logged more flying hours than all humans have—ever. By combining their enormous amounts of flight data and experience, drone-control software applications are poised to quickly become the world’s most experienced pilots.

Drones That Fly Themselves
Drones come in many forms, from tiny quad-rotor copter toys to missile-firing winged planes, or even 7-ton aircraft that can stay aloft for 34 hours at a stretch.

When drones were first introduced, they were flown remotely by human operators. However, this merely substitutes a pilot on the ground for one aloft. And it requires significant communications bandwidth between the drone and control center, to carry real-time video from the drone and to transmit the operator’s commands.

Many newer drones no longer need pilots; some drones for hobbyists and photographers can now fly themselves along human-defined routes, leaving the human free to sightsee—or control the camera to get the best view.

University researchers, businesses, and military agencies are now testing larger and more capable drones that will operate autonomously. Swarms of drones can fly without needing tens or hundreds of humans to control them. And they can perform coordinated maneuvers that human controllers could never handle.

Could humans control these 1,218 drones all together?

Whether flying in swarms or alone, the software that controls these drones is rapidly gaining flight experience.

Importance of Pilot Experience
Experience is the main qualification for pilots. Even a person who wants to fly a small plane for personal and noncommercial use needs 40 hours of flying instruction before getting a private pilot’s license. Commercial airline pilots must have at least 1,000 hours before even serving as a co-pilot.

On-the-ground training and in-flight experience prepare pilots for unusual and emergency scenarios, ideally to help save lives in situations like the “Miracle on the Hudson.” But many pilots are less experienced than “Sully” Sullenberger, who saved his planeload of people with quick and creative thinking. With software, though, every plane can have on board a pilot with as much experience—if not more. A popular software pilot system, in use in many aircraft at once, could gain more flight time each day than a single human might accumulate in a year.

As someone who studies technology policy as well as the use of artificial intelligence for drones, cars, robots, and other uses, I don’t lightly suggest handing over the controls for those additional tasks. But giving software pilots more control would maximize computers’ advantages over humans in training, testing, and reliability.

Training and Testing Software Pilots
Unlike people, computers will follow sets of instructions in software the same way every time. That lets developers create instructions, test reactions, and refine aircraft responses. Testing could make it far less likely, for example, that a computer would mistake the planet Venus for an oncoming jet and throw the plane into a steep dive to avoid it.

The most significant advantage is scale: Rather than teaching thousands of individual pilots new skills, updating thousands of aircraft would require only downloading updated software.

These systems would also need to be thoroughly tested—in both real-life situations and in simulations—to handle a wide range of aviation situations and to withstand cyberattacks. But once they’re working well, software pilots are not susceptible to distraction, disorientation, fatigue, or other human impairments that can create problems or cause errors even in common situations.

Rapid Response and Adaptation
Already, aircraft regulators are concerned that human pilots are forgetting how to fly on their own and may have trouble taking over from an autopilot in an emergency.

In the “Miracle on the Hudson” event, for example, a key factor in what happened was how long it took for the human pilots to figure out what had happened—that the plane had flown through a flock of birds, which had damaged both engines—and how to respond. Rather than the approximately one minute it took the humans, a computer could have assessed the situation in seconds, potentially saving enough time that the plane could have landed on a runway instead of a river.

Aircraft damage can pose another particularly difficult challenge for human pilots: it can change how the controls affect the plane’s flight. In cases where damage renders a plane uncontrollable, the result is often tragedy. A sufficiently advanced automated system could make minute changes to the aircraft’s steering and use its sensors to quickly evaluate the effects of those movements—essentially learning how to fly all over again with a damaged plane.

Boosting Public Confidence
The biggest barrier to fully automated flight is psychological, not technical. Many people may not want to trust their lives to computer systems. But they might come around when reassured that the software pilot has tens, hundreds, or thousands more hours of flight experience than any human pilot.

Other autonomous technologies, too, are progressing despite public concerns. Regulators and lawmakers are allowing self-driving cars on the roads in many states. But more than half of Americans don’t want to ride in one, largely because they don’t trust the technology. And only 17 percent of travelers around the world are willing to board a plane without a pilot. However, as more people experience self-driving cars on the road and have drones deliver them packages, it is likely that software pilots will gain in acceptance.

The airline industry will certainly be pushing people to trust the new systems: Automating pilots could save tens of billions of dollars a year. And the current pilot shortage means software pilots may be the key to having any airline service to smaller destinations.

Both Boeing and Airbus have made significant investments in automated flight technology, which would remove or reduce the need for human pilots. Boeing has actually bought a drone manufacturer and is looking to add software pilot capabilities to the next generation of its passenger aircraft. (Other tests have tried to retrofit existing aircraft with robotic pilots.)

One way to help regular passengers become comfortable with software pilots—while also helping to both train and test the systems—could be to introduce them as co-pilots working alongside human pilots. Planes would be operated by software from gate to gate, with the pilots instructed to touch the controls only if the system fails. Eventually pilots could be removed from the aircraft altogether, just like they eventually were from the driverless trains that we routinely ride in airports around the world.

This article was originally published on The Conversation. Read the original article.

Image Credit: Skycolors / Shutterstock.com


#432190 In the Future, There Will Be No Limit to ...

New planets found in distant corners of the galaxy. Climate models that may improve our understanding of sea level rise. The emergence of new antimalarial drugs. These scientific advances and discoveries have been in the news in recent months.

While representing wildly divergent disciplines, from astronomy to biotechnology, they all have one thing in common: Artificial intelligence played a key role in their scientific discovery.

One of the more recent and famous examples came out of NASA at the end of 2017. The US space agency had announced an eighth planet discovered in the Kepler-90 system. Scientists had trained a neural network—a computer with a “brain” modeled on the human mind—to re-examine data from Kepler, a space-borne telescope with a four-year mission to seek out new life and new civilizations. Or, more precisely, to find habitable planets where life might just exist.

The researchers trained the artificial neural network on a set of 15,000 previously vetted signals until it could identify true planets and false positives 96 percent of the time. It then went to work on weaker signals from nearly 700 star systems with known planets.

The machine detected Kepler 90i—a hot, rocky planet that orbits its sun about every two Earth weeks—through a nearly imperceptible change in brightness captured when a planet passes a star. It also found a sixth Earth-sized planet in the Kepler-80 system.
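The transit signal described above, a tiny periodic dip in a star’s brightness as a planet passes in front of it, can be illustrated with a toy detector. This sketch is not NASA’s neural-network pipeline; it simply flags dips in a synthetic light curve and reads the orbital period off their spacing, with all numbers invented.

```python
def find_transit_dips(flux, threshold=0.005):
    """Return indices where brightness drops below the baseline by `threshold`."""
    baseline = sum(flux) / len(flux)
    return [i for i, f in enumerate(flux) if baseline - f > threshold]

def estimate_period(dip_indices):
    """Estimate the orbital period as the smallest spacing between dips."""
    gaps = [b - a for a, b in zip(dip_indices, dip_indices[1:]) if b - a > 1]
    return min(gaps) if gaps else None

# Synthetic light curve: flux near 1.0, with a 1% dip every 50 samples.
flux = [1.0] * 200
for start in (50, 100, 150):
    flux[start] = 0.99

dips = find_transit_dips(flux)
print(dips)                   # [50, 100, 150]
print(estimate_period(dips))  # 50
```

Real Kepler data is far noisier than this, which is exactly why a trained neural network, rather than a fixed threshold, was needed to separate true planets from false positives.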

AI Handles Big Data
The application of AI to science is being driven by three great advances in technology, according to Ross King from the Manchester Institute of Biotechnology at the University of Manchester, leader of a team that developed an artificially intelligent “scientist” called Eve.

Those three advances include much faster computers, big datasets, and improved AI methods, King said. “These advances increasingly give AI superhuman reasoning abilities,” he told Singularity Hub by email.

AI systems can flawlessly remember vast numbers of facts and extract information effortlessly from millions of scientific papers, not to mention exhibit rigorous logical reasoning and near-optimal probabilistic reasoning, King says.

AI systems also beat humans when it comes to dealing with huge, diverse amounts of data.

That’s partly what attracted a team of glaciologists to turn to machine learning to untangle the factors involved in how heat from Earth’s interior might influence the ice sheet that blankets Greenland.

Algorithms juggled 22 geologic variables—such as bedrock topography, crustal thickness, magnetic anomalies, rock types, and proximity to features like trenches, ridges, young rifts, and volcanoes—to predict geothermal heat flux under the ice sheet throughout Greenland.

The machine learning model, for example, predicts elevated heat flux upstream of Jakobshavn Glacier, the fastest-moving glacier in the world.

“The major advantage is that we can incorporate so many different types of data,” explains Leigh Stearns, associate professor of geology at the University of Kansas, whose research takes her to the polar regions to understand how and why Earth’s great ice sheets are changing, questions directly related to future sea level rise.

“All of the other models just rely on one parameter to determine heat flux, but the [machine learning] approach incorporates all of them,” Stearns told Singularity Hub in an email. “Interestingly, we found that there is not just one parameter…that determines the heat flux, but a combination of many factors.”
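Stearns’ point, that a model combining many variables can explain what no single variable can, is easy to demonstrate with a toy regression. The sketch below is purely illustrative: the variable names and data are invented, and the actual study used 22 variables and a more capable machine learning model.

```python
def fit_linear(xs, ys, lr=0.05, steps=10000):
    """Fit y = w . x + b by stochastic gradient descent (tiny pure-Python demo)."""
    w, b = [0.0] * len(xs[0]), 0.0
    for _ in range(steps):
        for x, y in zip(xs, ys):
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def mse(xs, ys, w, b):
    """Mean squared error of the fitted model on (xs, ys)."""
    return sum((sum(wi * xi for wi, xi in zip(w, x)) + b - y) ** 2
               for x, y in zip(xs, ys)) / len(ys)

# Invented samples: [crustal_thickness, magnetic_anomaly] -> heat_flux,
# where the target truly depends on BOTH variables.
xs = [[1.0, 0.2], [0.8, 0.9], [0.5, 0.4], [0.3, 0.8], [0.9, 0.5]]
ys = [0.2 * t + 0.7 * m for t, m in xs]

w2, b2 = fit_linear(xs, ys)                    # both variables
w1, b1 = fit_linear([[t] for t, _ in xs], ys)  # thickness only

print(mse(xs, ys, w2, b2) < mse([[t] for t, _ in xs], ys, w1, b1))  # True
```

The single-variable model is left with an irreducible error from the variable it ignores, which mirrors the finding that no one parameter determines heat flux on its own.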

The research was published last month in Geophysical Research Letters.

Stearns says her team hopes to apply high-powered machine learning to characterize glacier behavior over both short and long-term timescales, thanks to the large amounts of data that she and others have collected over the last 20 years.

Emergence of Robot Scientists
While Stearns sees machine learning as another tool to augment her research, King believes artificial intelligence can play a much bigger role in scientific discoveries in the future.

“I am interested in developing AI systems that autonomously do science—robot scientists,” he said. Such systems, King explained, would automatically originate hypotheses to explain observations, devise experiments to test those hypotheses, physically run the experiments using laboratory robotics, and even interpret the results. The conclusions would then influence the next cycle of hypotheses and experiments.

His AI scientist Eve recently helped researchers discover that triclosan, an ingredient commonly found in toothpaste, could be used as an antimalarial drug against certain strains that have developed a resistance to other common drug therapies. The research was published in the journal Scientific Reports.

Automation using artificial intelligence for drug discovery has become a growing area of research, as the machines can work orders of magnitude faster than any human. AI is also being applied in related areas, such as synthetic biology for the rapid design and manufacture of microorganisms for industrial uses.

King argues that machines are better suited to unravel the complexities of biological systems, since even the most “simple” organisms are host to thousands of genes, proteins, and small molecules that interact in complicated ways.

“Robot scientists and semi-automated AI tools are essential for the future of biology, as there are simply not enough human biologists to do the necessary work,” he said.

Creating Shockwaves in Science
The use of machine learning, neural networks, and other AI methods can often get better results in a fraction of the time it would normally take to crunch data.

For instance, scientists at the National Center for Supercomputing Applications, located at the University of Illinois at Urbana-Champaign, have a deep learning system for the rapid detection and characterization of gravitational waves. Gravitational waves are disturbances in spacetime, emanating from big, high-energy cosmic events, such as the massive explosion of a star known as a supernova. The “Holy Grail” of this type of research is to detect gravitational waves from the Big Bang.

Dubbed Deep Filtering, the method allows real-time processing of data from LIGO, a gravitational wave observatory composed of two enormous laser interferometers located thousands of miles apart in Washington State and Louisiana. The research was published in Physics Letters B.

In a more down-to-earth example, scientists published a paper last month in Science Advances on the development of a neural network called ConvNetQuake to detect and locate minor earthquakes from ground motion measurements called seismograms.

ConvNetQuake uncovered 17 times more earthquakes than traditional methods. Scientists say the new method is particularly useful in monitoring small-scale seismic activity, which has become more frequent, possibly due to fracking activities that involve injecting wastewater deep underground.
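ConvNetQuake works by classifying fixed-length windows of a seismogram as “event” or “noise.” As a toy stand-in for that idea (no neural network here), the sketch below slides a window over a synthetic seismogram and flags windows whose signal energy stands out from the background; all numbers are invented.

```python
def detect_event_windows(signal, window=10, factor=5.0):
    """Return start indices of non-overlapping windows whose mean energy
    exceeds `factor` times the median window energy of the trace."""
    energies = []
    for start in range(0, len(signal) - window + 1, window):
        seg = signal[start:start + window]
        energies.append((start, sum(s * s for s in seg) / window))
    sorted_e = sorted(e for _, e in energies)
    median = sorted_e[len(sorted_e) // 2]
    return [start for start, e in energies if e > factor * median]

# Synthetic trace: low-amplitude noise with one high-amplitude burst.
trace = [0.01 if i % 2 else -0.01 for i in range(100)]
for i in range(40, 50):  # the "earthquake" arrives at sample 40
    trace[i] = 0.5 if i % 2 else -0.5

print(detect_event_windows(trace))  # [40]
```

A fixed energy threshold like this fails on real seismograms, where noise levels vary and small events hide in it, which is why a learned classifier finds many more earthquakes than traditional methods.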

King says he believes that in the long term there will be no limit to what AI can accomplish in science. He and his team, including Eve, are currently working on developing cancer therapies under a grant from DARPA.

“Robot scientists are getting smarter and smarter; human scientists are not,” he says. “Indeed, there is arguably a case that human scientists are less good. I don’t see any scientist alive today of the stature of a Newton or Einstein—despite the vast number of living scientists. The Physics Nobel [laureate] Frank Wilczek is on record as saying (10 years ago) that in 100 years’ time the best physicist will be a machine. I agree.”

Image Credit: Romaset / Shutterstock.com

Posted in Human Robots

#432036 The Power to Upgrade Our Own Biology Is ...

Upgrading our biology may sound like science fiction, but attempts to improve humanity actually date back thousands of years. Every day, we enhance ourselves through seemingly mundane activities such as exercising or meditating, or through performance-enhancing substances like caffeine or Adderall. However, the tools with which we upgrade our biology are improving at an accelerating rate and becoming increasingly invasive.

In recent decades, we have developed a wide array of powerful methods, such as genetic engineering and brain-machine interfaces, that are redefining our humanity. In the short run, such enhancement technologies have medical applications and may be used to treat many diseases and disabilities. Additionally, in the coming decades, they could allow us to boost our physical abilities or even digitize human consciousness.

What’s New?
Many futurists argue that our devices, such as our smartphones, are already an extension of our cortex and, in many ways, an abstract form of enhancement. According to philosophers Andy Clark and David Chalmers' extended mind theory, we use technology to expand the boundaries of the human mind beyond our skulls.

One can argue that access to a smartphone enhances one's cognitive capacities and is an indirect form of enhancement in its own right, a kind of abstract brain-machine interface. Beyond that, wearable devices and computers are already on the market, and athletes, among others, use them to boost their progress.

However, these interfaces are becoming less abstract.

Not long ago, Elon Musk announced a new company, Neuralink, with the goal of merging the human mind with AI. The past few years have seen remarkable developments in both the hardware and software of brain-machine interfaces. Experts are designing more intricate electrodes while programming better algorithms to interpret neural signals. Scientists have already succeeded in enabling paralyzed patients to type with their minds, and are even allowing brains to communicate with one another purely through brainwaves.

Ethical Challenges of Enhancement
There are many social and ethical implications of such advancements.

One of the most fundamental issues with cognitive and physical enhancement techniques is that they contradict the definitions of merit and success that society has relied on for millennia. Many performance-enhancing drugs have long been considered "cheating."

But perhaps we ought to revisit some of our fundamental assumptions as a society.

For example, we like to credit hard work and talent fairly, where "fair" generally implies that individuals have acted in ways that merit their rewards. If you are talented and successful, it is considered to be because you chose to work hard and take advantage of the opportunities available to you. But by these standards, how much of our accomplishments can we truly take credit for?

For instance, the genetic lottery can have an enormous impact on an individual’s predisposition and personality, which can in turn affect factors such as motivation, reasoning skills, and other mental abilities. Many people are born with a natural ability or a physique that gives them an advantage in a particular area or predisposes them to learn faster. But is it justified to reward someone for excellence if their genes had a pivotal role in their path to success?

Beyond that, there are already many ways in which we take "shortcuts" to better mental performance. Seemingly mundane activities like drinking coffee, meditating, exercising, or sleeping well can boost one's performance in any given area and are tolerated by society. Even the use of language can have positive physical and psychological effects on the human brain, which can be liberating to the individual and immensely beneficial to society at large. And let's not forget that some of us are born with far more access to literacy and education than others.

Given all these reasons, one could argue that cognitive abilities and talents currently derive more from uncontrollable factors and luck than we like to admit. If anything, technologies like brain-machine interfaces can enhance individual autonomy and give people a choice in how capable they become.

As Karim Jebari points out (pdf), if a certain characteristic or trait is required to perform a particular role and an individual lacks this trait, would it be wrong to implement the trait through brain-machine interfaces or genetic engineering? How is this different from any conventional form of learning or acquiring a skill? If anything, this would be removing limitations on individuals that result from factors outside their control, such as biological predisposition (or even traits induced from traumatic experiences) to act or perform in a certain way.

Another major ethical concern is equality. As with any other emerging technology, there are valid concerns that cognitive enhancement tech will benefit only the wealthy, thus exacerbating current inequalities. This is where public policy and regulations can play a pivotal role in the impact of technology on society.

Enhancement technologies can either deepen inequality or help us solve it. Educating and empowering the underprivileged could happen far more rapidly, accelerating the overall pace of human progress. The "normal range" for human capacity and intelligence, however it is defined, could shift dramatically in a positive direction.

Many have also raised concerns over the negative applications of government-led biological enhancement, including eugenics-like movements and super-soldiers. Naturally, there are also issues of safety, security, and well-being, especially within the early stages of experimentation with enhancement techniques.

Brain-machine interfaces, for instance, could have implications for autonomy. The interface uses information extracted from the brain to stimulate or modify systems in order to accomplish a goal. This process can be enhanced by integrating an artificial intelligence system into the interface, which opens the possibility of a third party manipulating individuals' personalities, emotions, and desires through the interface.

A Tool For Transcendence
It’s important to discuss these risks, not so that we begin to fear and avoid such technologies, but so that we continue to advance in a way that minimizes harm and allows us to optimize the benefits.

Stephen Hawking notes that “with genetic engineering, we will be able to increase the complexity of our DNA, and improve the human race.” Indeed, the potential advantages of modifying biology are revolutionary. Doctors would gain access to a powerful tool to tackle disease, allowing us to live longer and healthier lives. We might be able to extend our lifespan and tackle aging, perhaps a critical step to becoming a space-faring species. We may begin to modify the brain’s building blocks to become more intelligent and capable of solving grand challenges.

In their book Evolving Ourselves, Juan Enriquez and Steve Gullans describe a world where evolution is no longer driven by natural processes. Instead, it is driven by human choices, through what they call unnatural selection and non-random mutation. Human enhancement is bringing us closer to such a world—it could allow us to take control of our evolution and truly shape the future of our species.

Image Credit: GrAl / Shutterstock.com

Posted in Human Robots