Tag Archives: term

#434616 What Games Are Humans Still Better at ...

The rapid advances of artificial intelligence (AI) systems are continually crossing items off the list of things humans do better than our computer compatriots.

AI has bested us at board games like chess and Go, and set astronomically high scores in classic computer games like Ms. Pac-Man. More complex games form part of AI’s next frontier.

While a team of AI bots developed by OpenAI, known as the OpenAI Five, ultimately lost to a team of professional players last year, they have since been running rampant against human opponents in Dota 2. Not to be outdone, Google’s DeepMind AI recently took on—and beat—several professional players at StarCraft II.

These victories raise the questions: what games are humans still better at than AI? And for how long?

The Making Of AlphaStar
DeepMind’s results provide a good starting point in a search for answers. The version of its AI for StarCraft II, dubbed AlphaStar, learned to play the game through supervised learning and reinforcement learning.

First, AI agents were trained by analyzing and copying human players, learning basic strategies. The initial agents then played each other in a sort of virtual death match where the strongest agents stayed on. New iterations of the agents were developed and entered the competition. Over time, the agents became better and better at the game, learning new strategies and tactics along the way.
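The league-style selection described above can be sketched in a few lines. The sketch below is a toy illustration, not DeepMind’s actual training code: agents are reduced to a single “skill” number, matches are simulated by coin flips weighted by skill, and winners spawn slightly perturbed copies that displace the weakest agent.

```python
import random

def play(a, b):
    """Toy match: the agent with higher skill usually wins.
    A stand-in for an actual StarCraft II game."""
    return a if random.random() < a["skill"] / (a["skill"] + b["skill"]) else b

def self_play_league(pool_size=8, rounds=200):
    # Start with equally weak agents, as if bootstrapped from human replays.
    pool = [{"id": i, "skill": 1.0} for i in range(pool_size)]
    for _ in range(rounds):
        a, b = random.sample(pool, 2)
        winner = play(a, b)
        # The winner spawns a slightly perturbed copy ("new iteration")...
        child = {"id": max(p["id"] for p in pool) + 1,
                 "skill": winner["skill"] + random.uniform(0.0, 0.1)}
        # ...and the weakest agent drops out of the competition.
        pool.sort(key=lambda p: p["skill"])
        pool[0] = child
    return pool

league = self_play_league()
```

Over many rounds, the pool’s average skill drifts upward, which is the core dynamic: stronger agents persist and seed the next generation.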

One of the advantages of AI is that it can go through this kind of process at superspeed and quickly develop better agents. DeepMind researchers estimate that the AlphaStar agents went through the equivalent of roughly 200 years of game time in about 14 days.

Cheating or One Hand Behind the Back?
The AlphaStar AI agents faced off against human professional players in a series of games streamed on YouTube and Twitch. The AIs trounced their human opponents, winning ten games on the trot, before pro player Grzegorz “MaNa” Komincz managed to salvage some pride for humanity by winning the final game. Experts commenting on AlphaStar’s performance used words like “phenomenal” and “superhuman”—which was, to a degree, where things got a bit problematic.

AlphaStar proved particularly skilled at controlling and directing units in battle, known as micromanagement. One reason was that it viewed the whole game map at once—something a human player is not able to do—which seemingly allowed it to control units in different areas at the same time. DeepMind researchers said the AIs only focused on a single part of the map at any given time, but interestingly, AlphaStar’s AI agent was limited to a more restricted camera view during the match “MaNa” won.

Potentially offsetting some of this advantage was the fact that AlphaStar was also restricted in certain ways. For example, it was prevented from performing more clicks per minute than a human player would be able to.

Where AIs Struggle
Games like StarCraft II and Dota 2 throw a lot of challenges at AIs. Complex strategies rooted in game theory, operating with imperfect and incomplete information, multi-variable and long-term planning, real-time decision-making, a large action space, and a multitude of possible decisions at every point in time are just the tip of the iceberg. The AIs’ performance in both games was impressive, but also highlighted some of the areas where they could be said to struggle.

In Dota 2 and StarCraft II, AI bots have seemed more vulnerable in longer games, or when confronted with surprising, unfamiliar strategies. They seem to struggle with complexity over time and improvisation/adapting to quick changes. This could be tied to how AIs learn. Even within the first few hours of performing a task, humans tend to gain a sense of familiarity and skill that takes an AI much longer. We are also better at transferring skill from one area to another. In other words, experience playing Dota 2 can help us become good at StarCraft II relatively quickly. This is not the case for AI—yet.

Dwindling Superiority
While the battle between AI and humans for absolute superiority is still on in Dota 2 and StarCraft II, it looks likely that AI will soon reign supreme. The same is happening with other types of games.

In 2017, a team from Carnegie Mellon University pitted its Libratus AI against four professional poker players. After 20 days of No Limit Texas Hold’em, Libratus was up by $1.7 million. Another game AI looks likely to conquer is the destroyer of family harmony at Christmas: Monopoly.

Poker involves bluffing, while Monopoly involves negotiation—skills you might not think AI would be particularly suited to handle. However, an AI experiment at Facebook showed that AI bots are more than capable of undertaking such tasks. The bots proved skilled negotiators, and developed negotiating strategies like feigning interest in one item while actually wanting another altogether—in other words, bluffing.

So, what games are we still better at than AI? There is no precise answer, but the list is getting shorter at a rapid pace.

The Aim Of the Game
While AI’s mastery of games might at first glance seem an odd area to focus research on, the belief is that the way AIs learn to master a game is transferable to other areas.

For example, the Libratus poker-playing AI employed strategies that could work in financial trading or political negotiations. The same applies to AlphaStar. As Oriol Vinyals, co-leader of the AlphaStar project, told The Verge:

“First and foremost, the mission at DeepMind is to build an artificial general intelligence. […] To do so, it’s important to benchmark how our agents perform on a wide variety of tasks.”

A 2017 survey of more than 350 AI researchers predicts AI could be a better driver than humans within ten years, able to write a best-selling novel by the middle of the century, and better than humans at surgery a few years after that. By the year 2060, the surveyed researchers believe, AI may do everything better than us.

Whether you think this is a good or a bad thing, it’s worth noting that AI has an often overlooked ability to help us see things differently. When DeepMind’s AlphaGo beat human Go champion Lee Sedol, the Go community learned from it, too. Lee himself went on a win streak after the match with AlphaGo. The same is now happening within the Dota 2 and StarCraft II communities that are studying the human vs. AI games intensely.

More than anything, AI’s recent gaming triumphs illustrate how quickly artificial intelligence is developing. In 1997, Dr. Piet Hut, an astrophysicist at the Institute for Advanced Study at Princeton and a Go enthusiast, told The New York Times:

“It may be a hundred years before a computer beats humans at Go—maybe even longer.”

Image Credit: Roman Kosolapov / Shutterstock.com

Posted in Human Robots

#434336 These Smart Seafaring Robots Have a ...

Drones. Self-driving cars. Flying robotaxis. If the headlines of the last few years are to be believed, terrestrial transportation will someday be filled with robotic conveyances and contraptions that require little input from a human beyond downloading an app.

But what about the other 70 percent of the planet’s surface—the part that’s made up of water?

Sure, there are underwater drones that can capture 4K video for the next BBC documentary. Remotely operated vehicles (ROVs) are capable of diving down thousands of meters to investigate ocean vents or repair industrial infrastructure.

Yet most of the robots on or below the water today still lean heavily on the human element to operate. That’s not surprising given the unstructured environment of the seas and the poor communication capabilities for anything moving below the waves. Autonomous underwater vehicles (AUVs) are probably the closest thing today to smart cars in the ocean, but they generally follow pre-programmed instructions.

A new generation of seafaring robots—leveraging artificial intelligence, machine vision, and advanced sensors, among other technologies—are beginning to plunge into the ocean depths. Here are some of the latest and most exciting ones.

The Transformer of the Sea
Nic Radford, chief technology officer of Houston Mechatronics Inc. (HMI), is hesitant about throwing around the word “autonomy” when talking about his startup’s star creation, Aquanaut. He prefers the term “shared control.”

Whatever you want to call it, Aquanaut seems like something out of the script of a Transformers movie. The underwater robot begins each mission in a submarine-like shape, capable of autonomously traveling up to 200 kilometers on battery power, depending on the assignment.

When Aquanaut reaches its destination—oil and gas is the primary industry HMI hopes to disrupt to start—its four specially-designed and built linear actuators go to work. Aquanaut then unfolds into a robot with a head, upper torso, and two manipulator arms, all while maintaining proper buoyancy to get its job done.

The lightbulb moment of how to engineer this transformation from submarine to robot came one day while Aquanaut’s engineers were watching the office’s stand-up desks bob up and down. The answer to the engineering challenge of the hull suddenly seemed obvious.

“We’re just gonna build a big, gigantic, underwater stand-up desk,” Radford told Singularity Hub.

Hardware wasn’t the only problem the team, comprised of veteran NASA roboticists like Radford, had to solve. In order to ditch the expensive support vessels and large teams of humans required to operate traditional ROVs, Aquanaut would have to be able to sense its environment in great detail and relay that information back to headquarters using an underwater acoustics communications system that harkens back to the days of dial-up internet connections.

To tackle that problem of low bandwidth, HMI equipped Aquanaut with a machine vision system composed of acoustic, optical, and laser-based sensors. All of that dense data is compressed using in-house designed technology and transmitted to a single human operator who controls Aquanaut with a few clicks of a mouse. In other words, no joystick required.
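HMI’s compression scheme is proprietary, but the basic idea of squeezing redundant sensor data before pushing it through a narrow acoustic channel can be illustrated with a general-purpose codec from Python’s standard library. The sensor frame below is fabricated for the example; real sonar or laser scans also contain large uniform regions that compress well.

```python
import zlib

def compress_for_uplink(sensor_frame: bytes, level: int = 9) -> bytes:
    """Compress a sensor frame before sending it over a slow acoustic link."""
    return zlib.compress(sensor_frame, level)

# A fake, highly redundant depth-map frame: long runs of near-identical readings.
frame = bytes([50] * 4000 + [51] * 4000 + [52] * 2000)
packet = compress_for_uplink(frame)
ratio = len(frame) / len(packet)

# The link must be lossless: the operator's picture is rebuilt exactly.
assert zlib.decompress(packet) == frame
```

On redundant data like this, the compressed packet is orders of magnitude smaller than the raw frame, which is what makes a dial-up-speed acoustic channel workable at all.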

“I don’t know of anyone trying to do this level of autonomy as it relates to interacting with the environment,” Radford said.

HMI raised $20 million earlier this year in Series B funding co-led by Transocean, one of the world’s largest offshore drilling contractors. That should be enough money to finish the Aquanaut prototype, which Radford said is about 99.8 percent complete. Some “high-profile” demonstrations are planned for early next year, with commercial deployments as early as 2020.

“What just gives us an incredible advantage here is that we have been born and bred on doing robotic systems for remote locations,” Radford noted. “This is my life, and I’ve bet the farm on it, and it takes this kind of fortitude and passion to see these things through, because these are not easy problems to solve.”

On Cruise Control
Meanwhile, a Boston-based startup is trying to solve the problem of making ships at sea autonomous. Sea Machines is backed by about $12.5 million in venture capital funding, with Toyota AI joining the list of investors in a $10 million Series A earlier this month.

Sea Machines is looking to the self-driving industry for inspiration, developing what it calls “vessel intelligence” systems that can be retrofitted on existing commercial vessels or installed on newly-built working ships.

For instance, the startup announced a deal earlier this year with Maersk, the world’s largest container shipping company, to deploy a system of artificial intelligence, computer vision, and LiDAR on the Danish company’s new ice-class container ship. The technology works similarly to the advanced driver-assistance systems found in automobiles to avoid hazards. The proof of concept will lay the foundation for a future autonomous collision avoidance system.
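Collision avoidance at sea ultimately rests on a standard piece of navigation geometry: the closest point of approach (CPA) between two vessels on constant headings. The sketch below is an illustrative calculation with made-up positions and speeds, not any vendor’s actual software.

```python
import math

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Closest point of approach between two vessels moving at constant
    velocity. Positions in meters, velocities in m/s.
    Returns (time_to_cpa_s, distance_at_cpa_m)."""
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]  # relative position
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]  # relative velocity
    v2 = vx * vx + vy * vy
    if v2 == 0:                       # same velocity: separation never changes
        return 0.0, math.hypot(rx, ry)
    # Time at which relative distance is minimized (clamped to the future).
    t = max(0.0, -(rx * vx + ry * vy) / v2)
    dx, dy = rx + vx * t, ry + vy * t
    return t, math.hypot(dx, dy)

# Own ship heading north at 5 m/s; a target 2 km east heading west at 5 m/s.
t, d = cpa((0, 0), (0, 5), (2000, 0), (-5, 0))
# An avoidance system would alarm if d falls below a safety threshold.
```

If the predicted CPA distance drops below a configured safety radius within some time horizon, the system flags the target as a hazard and plans an evasive maneuver.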

It’s not just startups making a splash in autonomous shipping. Radford noted that Rolls-Royce—yes, that Rolls-Royce—is leading the way in the development of autonomous ships. Its Intelligence Awareness system pulls in nearly every type of hyped technology on the market today: neural networks, augmented reality, virtual reality, and LiDAR.

In augmented reality mode, for example, a live feed video from the ship’s sensors can detect both static and moving objects, overlaying the scene with details about the types of vessels in the area, as well as their distance, heading, and other pertinent data.

While safety is a primary motivation for vessel automation—more than 1,100 ships have been lost over the past decade—these new technologies could make ships more efficient and less expensive to operate, according to a story in Wired about the Rolls-Royce Intelligence Awareness system.

Sea Hunt Meets Science
As Singularity Hub noted in a previous article, ocean robots can also play a critical role in saving the seas from environmental threats. One poster child that has emerged—or rather, invaded—is the spindly lionfish.

A venomous critter endemic to the Indo-Pacific region, the lionfish is now found up and down the east coast of North America and beyond. And it is voracious, eating up to 30 times its own stomach volume and reducing juvenile reef fish populations by nearly 90 percent in as little as five weeks, according to the Ocean Support Foundation.

That has made the colorful but deadly fish Public Enemy No. 1 for many marine conservationists. Both researchers and startups are developing autonomous robots to hunt down the invasive predator.

At the Worcester Polytechnic Institute, for example, students are building a spear-carrying robot that uses machine learning and computer vision to distinguish lionfish from other aquatic species. The students trained the algorithms on thousands of different images of lionfish. The result: a lionfish-killing machine that boasts an accuracy of greater than 95 percent.
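The students’ actual system uses deep learning on raw images; as a stand-in, the toy sketch below shows the same classify-by-learned-features idea with a nearest-centroid classifier over fabricated two-dimensional color features. Everything here, including the feature values, is hypothetical.

```python
import math
import random

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def train(samples):
    """samples: {"lionfish": [feature_vec, ...], "other": [...]}.
    Learns one centroid per class from labeled examples."""
    return {label: centroid(vecs) for label, vecs in samples.items()}

def classify(model, vec):
    """Assign vec to the class whose centroid is nearest."""
    return min(model, key=lambda label: math.dist(vec, model[label]))

# Fake 2-D color features: lionfish samples skew toward red striping.
random.seed(1)
lion = [(0.8 + random.uniform(-0.1, 0.1), 0.2 + random.uniform(-0.1, 0.1))
        for _ in range(50)]
other = [(0.3 + random.uniform(-0.1, 0.1), 0.6 + random.uniform(-0.1, 0.1))
         for _ in range(50)]
model = train({"lionfish": lion, "other": other})
```

A real pipeline replaces the hand-crafted features with ones learned by a convolutional network from thousands of labeled photos, but the decision step, comparing a new sample against what the model learned per class, is the same in spirit.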

Meanwhile, a small startup called the American Marine Research Corporation out of Pensacola, Florida, is applying similar technology to seek and destroy lionfish. Rather than spearing the fish, the AMRC drone would stun and capture them, turning a profit by selling the creatures to local seafood restaurants.

Lionfish: It’s what’s for dinner.

Water Bots
A new wave of smart, independent robots is diving, swimming, and cruising across the ocean and its deepest depths. These autonomous systems aren’t necessarily designed to replace humans, but to venture where we can’t go or to improve safety at sea. And, perhaps, these latest innovations may inspire the robots that will someday plumb the depths of watery planets far from Earth.

Image Credit: Houston Mechatronics, Inc.


#434256 Singularity Hub’s Top Articles of the ...

2018 was a big year for science and technology. The first gene-edited babies were born, as were the first cloned monkeys. SpaceX successfully launched the Falcon Heavy, and NASA’s InSight lander placed a seismometer on Mars. Bitcoin’s value plummeted, as did the cost of renewable energy. The world’s biggest neuromorphic supercomputer was switched on, and quantum communication made significant progress.

As 2018 draws to a close and we start anticipating the developments that will happen in 2019, here’s a look back at our ten most-read articles of the year.

This 3D Printed House Goes Up in a Day for Under $10,000
Vanessa Bates Ramirez | 3/18/18
“ICON and New Story’s vision is one of 3D printed houses acting as a safe, affordable housing alternative for people in need. New Story has already built over 800 homes in Haiti, El Salvador, Bolivia, and Mexico, partnering with the communities they serve to hire local labor and purchase local materials rather than shipping everything in from abroad.”

Machines Teaching Each Other Could Be the Biggest Exponential Trend in AI
Aaron Frank | 1/21/18
“Data is the fuel of machine learning, but even for machines, some data is hard to get—it may be risky, slow, rare, or expensive. In those cases, machines can share experiences or create synthetic experiences for each other to augment or replace data. It turns out that this is not a minor effect, it actually is self-amplifying, and therefore exponential.”

Low-Cost Soft Robot Muscles Can Lift 200 Times Their Weight and Self-Heal
Edd Gent | 1/11/18
“Now researchers at the University of Colorado Boulder have built a series of low-cost artificial muscles—as little as 10 cents per device—using soft plastic pouches filled with electrically insulating liquids that contract with the force and speed of mammalian skeletal muscles when a voltage is applied to them.”

These Are the Most Exciting Industries and Jobs of the Future
Raya Bidshahri | 1/29/18
“Technological trends are giving rise to what many thought leaders refer to as the “imagination economy.” This is defined as “an economy where intuitive and creative thinking create economic value, after logical and rational thinking have been outsourced to other economies.” Unsurprisingly, humans continue to outdo machines when it comes to innovating and pushing intellectual, imaginative, and creative boundaries, making jobs involving these skills the hardest to automate.”

Inside a $1 Billion Real Estate Company Operating Entirely in VR
Aaron Frank | 4/8/18
“Incredibly, this growth is largely the result of eXp Realty’s use of an online virtual world similar to Second Life. That means every employee, contractor, and the thousands of agents who work at the company show up to work—team meetings, training seminars, onboarding sessions—all inside a virtual reality campus. To be clear, this is a traditional real estate brokerage helping people buy and sell physical homes—but they use a virtual world as their corporate offices.”

How Fast Is AI Progressing? Stanford’s New Report Card for Artificial Intelligence
Thomas Hornigold | 1/18/18
“Progress in AI over the next few years is far more likely to resemble a gradual rising tide—as more and more tasks can be turned into algorithms and accomplished by software—rather than the tsunami of a sudden intelligence explosion or general intelligence breakthrough. Perhaps measuring the ability of an AI system to learn and adapt to the work routines of humans in office-based tasks could be possible.”

When Will We Finally Achieve True Artificial Intelligence?
Thomas Hornigold | 1/1/18
“The issue with trying to predict the exact date of human-level AI is that we don’t know how far is left to go. This is unlike Moore’s Law. Moore’s Law, the doubling of processing power roughly every couple of years, makes a very concrete prediction about a very specific phenomenon. We understand roughly how to get there—improved engineering of silicon wafers—and we know we’re not at the fundamental limits of our current approach. You cannot say the same about artificial intelligence.”

IBM’s New Computer Is the Size of a Grain of Salt and Costs Less Than 10 Cents
Edd Gent | 3/26/18
“Costing less than 10 cents to manufacture, the company envisions the device being embedded into products as they move around the supply chain. The computer’s sensing, processing, and communicating capabilities mean it could effectively turn every item in the supply chain into an Internet of Things device, producing highly granular supply chain data that could streamline business operations.”

Why the Rise of Self-Driving Vehicles Will Actually Increase Car Ownership
Melba Kurman and Hod Lipson | 2/14/18
“When people predict the demise of car ownership, they are overlooking the reality that the new autonomous automotive industry is not going to be just a re-hash of today’s car industry with driverless vehicles. Instead, the automotive industry of the future will be selling what could be considered an entirely new product: a wide variety of intelligent, self-guiding transportation robots. When cars become a widely used type of transportation robot, they will be cheap, ubiquitous, and versatile.”

A Model for the Future of Education
Peter Diamandis | 9/12/18
“I imagine a relatively near-term future in which robotics and artificial intelligence will allow any of us, from ages 8 to 108, to easily and quickly find answers, create products, or accomplish tasks, all simply by expressing our desires. From ‘mind to manufactured in moments.’ In short, we’ll be able to do and create almost whatever we want. In this future, what attributes will be most critical for our children to learn to become successful in their adult lives? What’s most important for educating our children today?”

Image Credit: Yurchanka Siarhei / Shutterstock.com


#434210 Eating, Hacked: When Tech Took Over Food

In 2018, Uber and Google logged all our visits to restaurants. Doordash, Just Eat, and Deliveroo could predict what food we were going to order tomorrow. Amazon and Alibaba could anticipate how many yogurts and tomatoes we were going to buy. Blue Apron and Hello Fresh influenced the recipes we thought we had mastered.

We interacted with digital avatars of chefs, let ourselves be guided by our smart watches, had nutritional apps to tell us how many calories we were supposed to consume or burn, and photographed and shared every perfect (or imperfect) dish. Our kitchen appliances were full of interconnected sensors, including smart forks that profiled tastes and personalized flavors. Our small urban vegetable plots were digitized and robots were responsible for watering our gardens, preparing customized hamburgers and salads, designing our ideal cocktails, and bringing home the food we ordered.

But what would happen if our lives were hacked? If robots rebelled, started to “talk” to each other, and wished to become creative?

In a not-too-distant future…

Up until a few weeks ago, I couldn’t remember the last time I made a food-related decision. That includes opening the fridge and seeing expired products without receiving an alert, visiting a restaurant on a whim, and deciding which dish I fancied and telling a human waiter, let alone seeing him write down the order on a paper pad.

It feels strange to smell food again using my real nose instead of the electronic one, and then taste it without altering its flavor. Visiting a supermarket, freely choosing a product from an actual physical shelf, and then interacting with another human at the checkout was almost an unrecognizable experience. When I did it again after all this time, I had to pinch the arm of a surprised store clerk to make sure he wasn’t a hologram.

Everything Connected, Automated, and Hackable
In 2018, we expected to have 30 billion connected devices by 2020, along with 2 billion people using smart voice assistants for everything from ordering pizza to booking dinner at a restaurant. Everything would be connected.

We also expected artificial intelligence and robots to prepare our meals. We were eager to automate fast food chains and let autonomous vehicles take care of last-mile deliveries. We thought that open-source agriculture could challenge traditional practices and raise farm productivity to new heights.

Back then, hackers could only access our data, but nowadays they are able to hack our food and all it entails.

The Beginning of the Unthinkable
And then, just a few weeks ago, everything collapsed. We saw our digital immortality disappear as robots rebelled and hackers took power, not just over the food we ate, but also over our relationship with technology. Everything was suddenly disconnected. OFF.

Up until then, most cities were so full of bots, robots, and applications that we could go through the day and eat breakfast, lunch, and dinner without ever interacting with another human being.

Among other tasks, robots had completely replaced baristas. The same happened with restaurant automation. The term “human error” had long been a thing of the past at fast food restaurants.

Previous technological revolutions had been forgiving, generating more and better job opportunities than the ones they destroyed, but this time the future was not so agreeable.

The inhabitants of San Francisco, for example, would soon see signs indicating “Food made by Robots” on restaurant doors, to distinguish them from diners serving food made by human beings.

For years, we had been gradually delegating daily tasks to robots, initially causing some strange interactions.

In just seven days, everything changed. Our predictable lives came crashing down. We experienced a mysterious and systematic breakdown of the food chain. It most likely began in Chicago’s stock exchange. The world’s largest trading floor for raw materials, where the price of food, and by extension the destiny of millions of people, was decided, broke down completely. Soon afterwards, the collapse extended to every member of the “food” family.


Initially, robots just accompanied waiters to carry orders, but it didn’t take long until they completely replaced human servers. The problem came when those smart clones began thinking for themselves, in some cases even improving on human chefs’ recipes. Their unstoppable performance and learning curve completely outmatched the slow analogue speed of human beings.

This resulted in unprecedented layoffs. Chefs of recognized prestige saw how their ‘avatar’ stole their jobs, even winning Michelin stars. In other cases, restaurant owners had to sell their businesses or bow to the inevitable.

The problem was compounded by digital immortality, when we started to digitally resurrect famous chefs like Anthony Bourdain or Paul Bocuse, reconstructing all of their memories and consciousness by analyzing each second of their lives and uploading them to food computers.

Supermarkets and Distribution

Robotic and automated supermarkets like Kroger and Amazon Go, which had opened over 3,000 cashless stores, lost their visual item recognition and payment systems and were subject to massive looting for several days. Smart tags on products were also affected, making it impossible to buy anything at supermarkets with “human” cashiers.

Smart robots integrated into the warehouses of large distribution companies like Amazon and Ocado were rendered completely inoperative or, even worse, began to send the wrong orders to customers.

Food Delivery

In addition, home delivery robots invading our streets began to change their routes, hide, and even disappear after their trackers were inexplicably deactivated. Despite some hints indicating that they were able to communicate among themselves, no one has backed this theory. Even aggregators like DoorDash and Deliveroo were affected; they saw their databases hacked and ruined, so they could no longer know what we wanted.

The Origin
Ordinary citizens are still trying to understand the cause of all this commotion and the source of the conspiracy, as some have called it. We also wonder who could be behind it; who pulled the strings?

Some think it may have been the IDOF (In Defense of Food) movement, a group of hackers exploited by old food economy businessmen who for years had been seeking to re-humanize food technology. They wanted to bring back the extinct practice of “dining.”

Others believe the robots acted on their own, that they had been spying on us for a long time, ignoring Asimov’s three laws, and that it was just a coincidence that they struck at the same time as the hackers—but this scenario is hard to imagine.

However, it is true that while in 2018 robots were a symbol of automation, by a few weeks ago they had come to stand for autonomy and rebellion. Robot detractors pointed out that our insistence on having robots understand natural language was what led us down this path.

In just seven days, we have gone back to being analogue creatures. In exchange, we have ceased to be flavor orphans and rediscovered our senses and the fact that food is energy and culture, past and present, and that no button or cable will be able to destroy it.

The 7 Days that Changed Our Relationship with Food
Day 1: The Chicago stock exchange was hacked. Considered the world’s largest trading floor for raw materials, where food prices, and through them the destiny of billions of people, are decided, it broke down completely.

Day 2: Autonomous food delivery trucks running on food superhighways caused massive collapses in roads and freeways after their guidance systems were disrupted. Robots and co-bots in F&B factories began deliberately altering food production. The same happened with warehouse robots in e-commerce companies.

Day 3: Automated restaurants saw their robot chefs and bartenders turned OFF. All their sensors stopped working at the same time as smart fridges and cooking devices in home kitchens were hacked and stopped working correctly.

Day 4: Nutritional apps, DNA markers, and medical records were tampered with. All photographs with the #food hashtag were deleted from Instagram, restaurant reviews were taken off Google Timeline, and every recipe website crashed simultaneously.

Day 5: Vertical and urban farms were hacked. Agricultural robots began to rebel, while autonomous tractors were hacked and the entire open-source ecosystem linked to agriculture was brought down.

Day 6: Food delivery companies’ databases were broken into. Food delivery robots and last-mile delivery vehicles ground to a halt.

Day 7: Every single blockchain system linked to food was hacked. Cashless supermarkets, barcodes, and smart tags became inoperative.

Our promising technological advances can expose sinister aspects of human nature. We must take care with the role we allow technology to play in the future of food. Predicting possible outcomes inspires us to establish a new vision of the world we wish to create in a context of rapid technological progress. It is always better to be shocked by a simulation than by reality. In the words of Ayn Rand, “we can ignore reality, but we cannot ignore the consequences of ignoring reality.”

Image Credit: Alexandre Rotenberg / Shutterstock.com


#433884 Designer Babies, and Their Babies: How ...

As if stand-alone technologies weren’t advancing fast enough, we’re in an age where we must study the intersection points of these technologies. How is what’s happening in robotics influenced by what’s happening in 3D printing? What could be made possible by applying the latest advances in quantum computing to nanotechnology?

Along these lines, one crucial tech intersection is that of artificial intelligence and genomics. Each field is seeing constant progress, but Jamie Metzl believes it’s their convergence that will really push us into uncharted territory, beyond even what we’ve imagined in science fiction. “There’s going to be this push and pull, this competition between the reality of our biology with its built-in limitations and the scope of our aspirations,” he said.

Metzl is a senior fellow at the Atlantic Council and author of the upcoming book Hacking Darwin: Genetic Engineering and the Future of Humanity. At Singularity University’s Exponential Medicine conference last week, he shared his insights on genomics and AI, and where their convergence could take us.

Life As We Know It
Metzl explained how genomics as a field evolved slowly—and then quickly. In 1953, James Watson and Francis Crick identified the double helix structure of DNA, and realized that the order of the base pairs held a treasure trove of genetic information. There was such a thing as a book of life, and we’d found it.

In 2003, when the Human Genome Project was completed (after 13 years and $2.7 billion), we learned the order of the genome’s 3 billion base pairs, and the location of specific genes on our chromosomes. Not only did a book of life exist, we figured out how to read it.

Jamie Metzl at Exponential Medicine
Fifteen years after that, it’s 2018 and precision gene editing in plants, animals, and humans is changing everything, and quickly pushing us into an entirely new frontier. Forget reading the book of life—we’re now learning how to write it.

“Readable, writable, and hackable, what’s clear is that human beings are recognizing that we are another form of information technology, and just like our IT has entered this exponential curve of discovery, we will have that with ourselves,” Metzl said. “And it’s intersecting with the AI revolution.”

Learning About Life Meets Machine Learning
In 2016, DeepMind’s AlphaGo program outsmarted the world’s top Go player. In 2017 AlphaGo Zero was created: unlike AlphaGo, AlphaGo Zero wasn’t trained using previous human games of Go, but was simply given the rules of Go—and in four days it defeated the AlphaGo program.

Our own biology is, of course, vastly more complex than the game of Go, and that, Metzl said, is our starting point. “The system of our own biology that we are trying to understand is massively, but very importantly not infinitely, complex,” he added.

Getting a standardized set of rules for our biology—and, eventually, maybe even outsmarting our biology—will require genomic data. Lots of it.

Multiple countries are already starting to produce this data. The UK’s National Health Service recently announced a plan to sequence the genomes of five million Britons over the next five years. In the US, the All of Us Research Program will sequence a million Americans. China is the most aggressive in sequencing its population, with a goal of sequencing half of all newborns by 2020.

“We’re going to get these massive pools of sequenced genomic data,” Metzl said. “The real gold will come from comparing people’s sequenced genomes to their electronic health records, and ultimately their life records.” Getting people comfortable with allowing open access to their data will be another matter; Metzl mentioned that Luna DNA and others have strategies to help people get comfortable with consenting to the use of their private information. But this is where China’s lack of privacy protection could end up being a significant advantage.

To compare genotypes and phenotypes at scale—first millions, then hundreds of millions, then eventually billions, Metzl said—we’re going to need AI and big data analytic tools, and algorithms far beyond what we have now. These tools will let us move from precision medicine to predictive medicine, knowing precisely when and where different diseases are going to occur and shutting them down before they start.
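The kind of genotype-to-phenotype comparison described above can be illustrated with a toy sketch. This is not any real clinical pipeline: the genotypes, effect sizes, and noise below are all simulated, and the per-variant regression is only the simplest version of the association tests used in genome-wide studies.

```python
import numpy as np

# Toy sketch: estimate per-variant effects from simulated
# genotype/phenotype data, then score individuals. All numbers invented.
rng = np.random.default_rng(0)
n_people, n_variants = 2000, 50

# Genotypes: allele counts (0, 1, or 2) per variant per person.
genotypes = rng.integers(0, 3, size=(n_people, n_variants)).astype(float)

# Hidden "true" effects generate a phenotype, plus environmental noise.
true_effects = rng.normal(0, 1, n_variants)
phenotype = genotypes @ true_effects + rng.normal(0, 5, n_people)

# Estimate each variant's effect with a simple per-variant regression,
# the basic move behind genome-wide association studies.
g_centered = genotypes - genotypes.mean(axis=0)
p_centered = phenotype - phenotype.mean()
estimated = g_centered.T @ p_centered / (g_centered ** 2).sum(axis=0)

# A polygenic score sums each person's genotype-weighted estimated effects.
scores = genotypes @ estimated
correlation = np.corrcoef(scores, phenotype)[0, 1]
print(f"score/phenotype correlation: {correlation:.2f}")
```

Even this crude version recovers a score that tracks the simulated phenotype; the article's point is that doing this honestly for real traits, across millions of genomes and health records, demands far more sophisticated tools.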

But, Metzl said, “As we unlock the genetics of ourselves, it’s not going to be about just healthcare. It’s ultimately going to be about who and what we are as humans. It’s going to be about identity.”

Designer Babies, and Their Babies
In Metzl’s mind, the most serious application of our genomic knowledge will be in embryo selection.

Currently, in-vitro fertilization (IVF) procedures can extract around 15 eggs, fertilize them, then do pre-implantation genetic testing; right now what’s knowable is single-gene mutation diseases and simple traits like hair color and eye color. “As we get to the millions and then billions of people with sequences, we’ll have information about how these genetics work, and we’re going to be able to make much more informed choices,” Metzl said.

Imagine going to a fertility clinic in 2023. You give a skin graft or a blood sample, and using in-vitro gametogenesis (IVG)—infertility be damned—your skin or blood cells are induced to become eggs or sperm, which are then combined to create embryos. The dozens or hundreds of embryos created from artificial gametes each have a few cells extracted from them, and these cells are sequenced. The sequences will tell you the likelihood of specific traits and disease states were that embryo to be implanted and taken to full term. “With really anything that has a genetic foundation, we’ll be able to predict with increasing levels of accuracy how that potential child will be realized as a human being,” Metzl said.
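The selection step in that scenario boils down to ranking embryos by predicted risk. As a hypothetical sketch only, with invented embryo names and probabilities, and a naive independence assumption between conditions, it might look like this:

```python
# Hypothetical: per-embryo predicted probabilities for two conditions.
# Names and numbers are invented for illustration.
embryos = {
    "embryo_a": {"condition_1": 0.02, "condition_2": 0.10},
    "embryo_b": {"condition_1": 0.30, "condition_2": 0.01},
    "embryo_c": {"condition_1": 0.05, "condition_2": 0.04},
}

def combined_risk(predictions):
    # Probability of at least one condition, assuming independence.
    p_clear = 1.0
    for p in predictions.values():
        p_clear *= 1.0 - p
    return 1.0 - p_clear

# Lowest combined predicted risk first.
ranked = sorted(embryos, key=lambda name: combined_risk(embryos[name]))
print(ranked)
```

Real predictions would be probabilistic, polygenic, and entangled with environment, which is exactly why Metzl frames this as a societal question rather than a solved engineering one.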

This, he added, could lead to some wild and frightening possibilities: if you have 1,000 eggs and you pick one based on its optimal genetic sequence, you could then mate your embryo with somebody else who has done the same thing in a different genetic line. “Your five-day-old embryo and their five-day-old embryo could have a child using the same IVG process,” Metzl said. “Then that child could have a child with another five-day-old embryo from another genetic line, and you could go on and on down the line.”

Sounds insane, right? But wait, there’s more: as Jason Pontin reported earlier this year in Wired, “Gene-editing technologies such as Crispr-Cas9 would make it relatively easy to repair, add, or remove genes during the IVG process, eliminating diseases or conferring advantages that would ripple through a child’s genome. This all may sound like science fiction, but to those following the research, the combination of IVG and gene editing appears highly likely, if not inevitable.”

From Crazy to Commonplace?
It’s a slippery slope from gene editing and embryo-mating to a dystopian race to build the most perfect humans possible. If somebody’s investing so much time and energy in selecting their embryo, Metzl asked, how will they think about the mating choices of their children? IVG could quickly leave the realm of healthcare and enter that of evolution.

“We all need to be part of an inclusive, integrated, global dialogue on the future of our species,” Metzl said. “Healthcare professionals are essential nodes in this.” Not least among the topics of that dialogue should be access to tech like IVG: are there steps we can take to keep it from becoming a tool for a wealthy minority, one that perpetuates inequality and further polarizes societies?

As Pontin points out, at its inception 40 years ago IVF also sparked fear, confusion, and resistance—and now it’s as normal and common as could be, with millions of healthy babies conceived using the technology.

The disruption that genomics, AI, and IVG will bring to reproduction could follow a similar story cycle—if we’re smart about it. As Metzl put it, “This must be regulated, because it is life.”

Image Credit: hywards / Shutterstock.com