
#437202 Scientists Used Dopamine to Seamlessly ...

In just half a decade, neuromorphic devices—brain-inspired computing hardware—already seem quaint. The current darling? Artificial-biological hybrid computing, which seamlessly unites man-made computer chips and biological neurons into semi-living circuits.

It sounds crazy, but a new study in Nature Materials shows that it’s possible to get an artificial neuron to communicate directly with a biological one using not just electricity, but dopamine—a chemical the brain naturally uses to change how neural circuits behave, best known for signaling reward.

Because these chemicals, known as “neurotransmitters,” are how biological neurons functionally link up in the brain, the study is a dramatic demonstration that it’s possible to connect artificial components with biological brain cells into a functional circuit.

The team isn’t the first to pursue hybrid neural circuits. Previously, a different team hooked up two silicon-based artificial neurons with a biological one into a circuit using electrical protocols alone. Although a powerful demonstration of hybrid computing, the study relied on only one-half of the brain’s computational ability: electrical computing.

The new study now tackles the other half: chemical computing. It adds a layer of compatibility that lays the groundwork not just for brain-inspired computers, but also for brain-machine interfaces and—perhaps—a sort of “cyborg” future. After all, if your brain can’t tell the difference between an artificial neuron and your own, could you? And even if you did, would you care?

Of course, that scenario is far in the future—if ever. For now, the team, led by Dr. Alberto Salleo, professor of materials science and engineering at Stanford University, collectively breathed a sigh of relief that the hybrid circuit worked.

“It’s a demonstration that this communication melding chemistry and electricity is possible,” said Salleo. “You could say it’s a first step toward a brain-machine interface, but it’s a tiny, tiny very first step.”

Neuromorphic Computing
The study grew from years of work into neuromorphic computing, or data processing inspired by the brain.

The blue-sky idea was inspired by the brain’s massive parallel computing capabilities, along with vast energy savings. By mimicking these properties, scientists reasoned, we could potentially turbo-charge computing. Neuromorphic devices basically embody artificial neural networks in physical form—wouldn’t hardware that mimics how the brain processes information be even more efficient and powerful?

These explorations led to novel neuromorphic chips, or artificial neurons that “fire” like biological ones. Additional work found that it’s possible to link these chips up into powerful circuits that run deep learning with ease, with bioengineered communication nodes called artificial synapses.

As a potential computing hardware replacement, these systems have proven to be incredibly promising. Yet scientists soon wondered: given their similarity to biological brains, can we use them as “replacement parts” for brains that suffer from traumatic injuries, aging, or degeneration? Can we hook up neuromorphic components to the brain to restore its capabilities?

Buzz & Chemistry
Theoretically, the answer’s yes.

But there’s a huge problem: current brain-machine interfaces only use electrical signals to mimic neural computation. The brain, in contrast, has two tricks up its sleeve: electricity and chemicals—in other words, it computes electrochemically.

Within a neuron, electricity travels up its incoming branches, through the bulbous body, then down the output branches. When electrical signals reach the neuron’s outgoing “piers,” dotted along the output branch, however, they hit a snag. A small gap exists between neurons, so to get to the other side, the electrical signals generally need to be converted into little bubble ships, packed with chemicals, and set sail to the other neuronal shore.

In other words, without chemical signals, the brain can’t function normally. These neurotransmitters don’t just passively carry information. Dopamine, for example, can dramatically change how a neural circuit functions. For an artificial-biological hybrid neural system, the absence of chemistry is like nixing international cargo vessels and only sticking with land-based trains and highways.

“To emulate biological synaptic behavior, the connectivity of the neuromorphic device must be dynamically regulated by the local neurotransmitter activity,” the team said.

Let’s Get Electro-Chemical
The new study started with two neurons: the upstream, an immortalized biological cell that releases dopamine; and the downstream, an artificial neuron that the team previously introduced in 2017, made of a mix of biocompatible and electrically conductive materials.

Rather than the classic neuron shape, picture more of a sandwich with a chunk bitten out in the middle (yup, I’m totally serious). Each of the remaining parts of the sandwich is a soft electrode, made of biological polymers. The “bitten out” part has a conductive solution that can pass on electrical signals.

The biological cell sits close to the first electrode. When activated, it dumps out boats of dopamine, which drift to the electrode and chemically react with it—mimicking the process of dopamine docking onto a biological neuron. This, in turn, generates a current that’s passed on to the second electrode through the conductive solution channel. When this current reaches the second electrode, it changes the electrode’s conductance—that is, how well it can pass on electrical information. This second step is analogous to docked dopamine “ships” changing how likely it is that a biological neuron will fire in the future.

In other words, dopamine release from the biological neuron interacts with the artificial one, so that the chemicals change how the downstream neuron behaves in a somewhat lasting way—a loose mimic of what happens inside the brain during learning.
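The study itself doesn’t publish code, but the qualitative behavior described above can be captured in a toy model: each pulse of dopamine that reacts at the first electrode nudges the device’s conductance, and the change persists between pulses. Below is a minimal sketch of that idea in Python; the parameter values and the saturating update rule are illustrative assumptions, not numbers from the paper.

```python
# Toy model of the dopamine-gated artificial synapse described above.
# All parameters are illustrative placeholders, not values from the study.

def apply_dopamine_pulses(conductance, pulses, sensitivity=0.05, g_max=1.0):
    """Each dopamine pulse (arbitrary units) reacts at the first electrode and
    produces a lasting shift in channel conductance, saturating at g_max."""
    history = [conductance]
    for dose in pulses:
        # Bigger doses push conductance further, but gains shrink near saturation.
        conductance += sensitivity * dose * (g_max - conductance)
        history.append(conductance)
    return history

# Ten equal releases from the "upstream" biological cell.
trace = apply_dopamine_pulses(conductance=0.2, pulses=[1.0] * 10)
print([round(g, 3) for g in trace])  # conductance ratchets up, then plateaus
```

The point of the sketch is the ratcheting behavior: the state set by past dopamine exposure persists, which is what lets the device stand in for a synaptic "weight" during learning.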

But that’s not all. Chemical signaling is especially powerful in the brain because it’s flexible. Dopamine, for example, only grabs onto the downstream neurons for a bit before it’s taken back up by the upstream neuron or broken down—that is, recycled or destroyed. This means that its effect is temporary, giving the neural circuit breathing room to readjust its activity.

The Stanford team also tried reconstructing this quirk in their hybrid circuit. They crafted a microfluidic channel that shuttles both dopamine and its byproduct away from the artificial neurons after they’ve done their job for recycling.

Putting It All Together
After confirming that biological cells can survive happily on top of the artificial one, the team performed a few tests to see if the hybrid circuit could “learn.”

They used electrical methods to first activate the biological dopamine neuron, and watched the artificial one. Before the experiment, the team wasn’t quite sure what to expect. Theoretically, it made sense that dopamine would change the artificial neuron’s conductance, similar to learning. But “it was hard to know whether we’d achieve the outcome we predicted on paper until we saw it happen in the lab,” said study author Scott Keene.

On the first try, however, the team found that the burst of chemical signaling was able to change the artificial neuron’s conductance long-term, similar to the neuroscience dogma “neurons that fire together, wire together.” Activating the upstream biological neuron with chemicals also changed the artificial neuron’s conductance in a way that mimicked learning.

“That’s when we realized the potential this has for emulating the long-term learning process of a synapse,” said Keene.

Imaging the device under an electron microscope, the team found that, similar to its biological counterpart, the hybrid synapse was able to efficiently recycle dopamine on timescales similar to the brain’s after some calibration. By playing with how much dopamine accumulates at the artificial neuron, the team found that they could loosely mimic a spike-based learning rule—a darling of machine learning inspired by the brain’s computation.

A Hybrid Future?
Unfortunately for cyborg enthusiasts, the work is still in its infancy.

For one, the artificial neurons are still rather bulky compared to biological ones. This means that they can’t capture and translate information from a single “boat” of dopamine. It’s also unclear if, and how, a hybrid synapse could work inside a living brain. Given the billions of synapses firing away in our heads, it’ll be a challenge to find and replace only the ones that need replacing, and to control our memories and behaviors the way natural synapses do.

That said, we’re inching ever closer to full-capability artificial-biological hybrid circuits.

“The neurotransmitter-mediated neuromorphic device presented in this work constitutes a fundamental building block for artificial neural networks that can be directly modulated based on biological feedback from live neurons,” the authors concluded. “[It] is a crucial first step in realizing next-generation adaptive biohybrid interfaces.”

Image Credit: Gerd Altmann from Pixabay


#437157 A Human-Centric World of Work: Why It ...

Long before coronavirus appeared and shattered our pre-existing “normal,” the future of work was a widely discussed and debated topic. We’ve watched automation slowly but surely expand its capabilities and take over more jobs, and we’ve wondered what artificial intelligence will eventually be capable of.

The pandemic swiftly turned the working world on its head, putting millions of people out of a job and forcing millions more to work remotely. But essential questions remain largely unchanged: we still want to make sure we’re not replaced, we want to add value, and we want an equitable society where different types of work are valued fairly.

To address these issues—as well as how the pandemic has impacted them—this week Singularity University held a digital summit on the future of work. Forty-three speakers from multiple backgrounds, countries, and sectors of the economy shared their expertise on everything from work in developing markets to why we shouldn’t want to go back to the old normal.

Gary Bolles, SU’s chair for the Future of Work, kicked off the discussion with his thoughts on a future of work that’s human-centric, including why it matters and how to build it.

What Is Work?
“Work” seems like a straightforward concept to define, but since it’s constantly shifting shape over time, let’s make sure we’re on the same page. Bolles defined work, very basically, as human skills applied to problems.

“It doesn’t matter if it’s a dirty floor or a complex market entry strategy or a major challenge in the world,” he said. “We as humans create value by applying our skills to solve problems in the world.” You can think of the problems that need solving as the demand and human skills as the supply, and the two are in constant oscillation, including, every few decades or centuries, a massive shift.

We’re in the midst of one of those shifts right now (and we already were, long before the pandemic). Skills that have long been in demand are declining. The World Economic Forum’s 2018 Future of Jobs report listed things like manual dexterity, management of financial and material resources, and quality control and safety awareness as declining skills. Meanwhile, skills the next generation will need include analytical thinking and innovation, emotional intelligence, creativity, and systems analysis.

Along Came a Pandemic
With the outbreak of coronavirus and its spread around the world, the demand side of work shrank; all the problems that needed solving gave way to the much bigger, more immediate problem of keeping people alive. As a result, tens of millions of people around the world are out of work—and those are just the ones being counted, a fraction of the true total. Millions more in seasonal or gig jobs, or who work in informal economies, are now without work, too.

“This is our opportunity to focus,” Bolles said. “How do we help people re-engage with work? And make it better work, a better economy, and a better set of design heuristics for a world that we all want?”

Bolles posed five key questions—some spurred by the impact of the pandemic—on which future of work conversations should focus to make sure it’s a human-centric future.

1. What does an inclusive world of work look like? Rather than seeing our current systems of work as immutable, we need to actually understand those systems and how we want to change them.

2. How can we increase the value of human work? We know that robots and software are going to be fine in the future—but for humans to be fine, we need to design for that very intentionally.

3. How can entrepreneurship help create a better world of work? In many economies the new value that’s created often comes from younger companies; how do we nurture entrepreneurship?

4. What will the intersection of workplace and geography look like? A large percentage of the global workforce is now working from home; what could some of the outcomes of that be? How does gig work fit in?

5. How can we ensure a healthy evolution of work and life? The health and the protection of those at risk is why we shut down our economies, but we need to find a balance that allows people to work while keeping them safe.

Problem-Solving Doesn’t End
The end result these questions are driving towards, and our overarching goal, is maximizing human potential. “If we come up with ways we can continue to do that, we’ll have a much more beneficial future of work,” Bolles said. “We should all be talking about where we can have an impact.”

One small silver lining? We had plenty of problems to solve in the world before ever hearing about coronavirus, and now we have even more. Is the pace of automation accelerating due to the virus? Yes. Are companies finding more ways to automate their processes in order to keep people from getting sick? They are.

But we have a slew of new problems on our hands, and we’re not going to stop needing human skills to solve them (not to mention the new problems that will surely emerge as second- and third-order effects of the shutdowns). If Bolles’ definition of work holds up, we’ve got our work cut out for us.

In an article from April titled The Great Reset, Bolles outlined three phases of the unemployment slump (we’re currently still in the first phase) and what we should be doing to minimize the damage. “The evolution of work is not about what will happen 10 to 20 years from now,” he said. “It’s about what we could be doing differently today.”

Watch Bolles’ talk and those of dozens of other experts for more insights into building a human-centric future of work here.

Image Credit: www_slon_pics from Pixabay


#437150 AI Is Getting More Creative. But Who ...

Creativity is a trait that sets humans apart from other species. We alone have the ability to make music and art that speak to our experiences or illuminate truths about our world. But suddenly, humans’ artistic abilities have some competition—and from a decidedly non-human source.

Over the last couple of years there have been some remarkable examples of art produced by deep learning algorithms. They have challenged the notion that creativity defies definition and put into perspective how professionals can use artificial intelligence to enhance their abilities and produce work beyond known boundaries.

But when creativity is the result of code written by a programmer, using a format given by a software engineer and featuring private and public datasets, how do we assign ownership of AI-generated content, and particularly that of artwork? McKinsey estimates that AI will generate $3.5 to $5.8 trillion in value annually across various sectors.

In 2018, a portrait christened Edmond de Belamy was made by a French art collective called Obvious. It used a database of 15,000 portraits from the 1300s to the 1900s to train a deep learning algorithm to produce a unique portrait. The painting sold for $432,500 at a New York auction. Similarly, a program called Aiva, trained on thousands of classical compositions, has released albums whose pieces are being used by ad agencies and in movies.

The datasets used by these algorithms were different, but behind both there was a programmer who changed the brush strokes or musical notes into lines of code and a data scientist or engineer who fitted and “curated” the datasets to use for the model. There could also have been user-based input, and the output may be biased towards certain styles or unintentionally infringe on similar pieces of art. This shows that there are many collaborators with distinct roles in producing AI-generated content, and it’s important to discuss how they can protect their proprietary interests.

A perspective article published in Nature Machine Intelligence by Jason K. Eshraghian in March looks into how AI artists and the collaborators involved should assess their ownership, laying out some guiding principles that are “only applicable for as long as AI does not have legal personhood, the way humans and corporations are accorded.”

Before looking at how collaborators can protect their interests, it’s useful to understand the basic requirements of copyright law. The artwork in question must be an “original work of authorship fixed in a tangible medium.” Given this principle, the author asked whether it’s possible for AI to exercise creativity, skill, or any other indicator of originality. The answer is still straightforward—no—or at least not yet. Currently, AI’s range of creativity doesn’t exceed the standard used by the US Copyright Office, which states that copyright law protects the “fruits of intellectual labor founded in the creative powers of the mind.”

Due to the current limitations of narrow AI, it must have some form of initial input that helps develop its ability to create. At the moment AI is a tool that can be used to produce creative work in the same way that a video camera is a tool used to film creative content. Video producers don’t need to comprehend the inner workings of their cameras; as long as their content shows creativity and originality, they have a proprietary claim over their creations.

The same concept applies to programmers developing a neural network. As long as the dataset they use as input yields an original and creative result, it will be protected by copyright law; they don’t need to understand the high-level mathematics, which in this case often involves black-box algorithms whose inner workings are all but impossible to analyze.

Will robots and algorithms eventually be treated as creative sources able to own copyrights? The author pointed to the recent patent case of Warner-Lambert Co Ltd versus Generics where Lord Briggs, Justice of the Supreme Court of the UK, determined that “the court is well versed in identifying the governing mind of a corporation and, when the need arises, will no doubt be able to do the same for robots.”

In the meantime, Dr. Eshraghian suggests four guiding principles to allow artists who collaborate with AI to protect themselves.

First, programmers need to document their process through online code repositories like GitHub or Bitbucket.

Second, data engineers should also document and catalog their datasets and the process they used to curate their models, indicating selectivity in their criteria as much as possible to demonstrate their involvement and creativity.

Third, in cases where user data is utilized, the engineer should “catalog all runs of the program” to trace the data selection process (a minimal sketch of this kind of run logging follows after this list). This could be interpreted as a way of determining whether user-based input has a right to claim the copyright too.

Finally, the output should avoid infringing on others’ content through methods like reverse image searches and version control, as mentioned above.
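None of these principles requires exotic tooling. As a loose illustration of the third one, here is a minimal sketch of run cataloging in Python; the file layout, field names, and hashing scheme are my own assumptions rather than anything prescribed by Eshraghian.

```python
# Minimal provenance log for a generative-model run: when it ran, which code
# version, which parameters, and a fingerprint of every input file used.
# Field names and layout are illustrative, not a standard.

import hashlib
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def log_run(dataset_dir: str, params: dict, log_file: str = "runs.jsonl") -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "git_commit": subprocess.run(
            ["git", "rev-parse", "HEAD"], capture_output=True, text=True
        ).stdout.strip() or "unknown",
        "params": params,
        "inputs": {p.name: sha256_of(p)
                   for p in sorted(Path(dataset_dir).glob("*")) if p.is_file()},
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record one training run over a hypothetical curated portrait set.
log_run("data/portraits", {"model": "dcgan", "epochs": 300, "seed": 42})
```

Appending one JSON record per run gives the collaborators an audit trail that ties each output back to a specific code version, parameter set, and dataset snapshot.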

AI-generated artwork is still a very new concept, and the ambiguous copyright laws around it give a lot of flexibility to AI artists and programmers worldwide. The guiding principles Eshraghian lays out will hopefully shed some light on the legislation we’ll eventually need for this kind of art, and start an important conversation between all the stakeholders involved.

Image Credit: Wikimedia Commons


#437120 The New Indiana Jones? AI. Here’s How ...

Archaeologists have uncovered scores of long-abandoned settlements along coastal Madagascar that reveal environmental connections to modern-day communities. They have detected the nearly indiscernible bumps of earthen mounds left behind by prehistoric North American cultures. Still other researchers have mapped Bronze Age river systems in the Indus Valley, one of the cradles of civilization.

All of these recent discoveries are examples of landscape archaeology. They’re also examples of how artificial intelligence is helping scientists hunt for new archaeological digs on a scale and at a pace unimaginable even a decade ago.

“AI in archaeology has been increasing substantially over the past few years,” said Dylan Davis, a PhD candidate in the Department of Anthropology at Penn State University. “One of the major uses of AI in archaeology is for the detection of new archaeological sites.”

The near-ubiquitous availability of satellite data and other types of aerial imagery for many parts of the world has been both a boon and a bane to archaeologists. They can cover far more ground, but the job of manually mowing their way across digitized landscapes is still time-consuming and laborious. Machine learning algorithms offer a way to parse through complex data far more quickly.

AI Gives Archaeologists a Bird’s Eye View
Davis developed an automated algorithm for identifying large earthen and shell mounds built by native populations long before Europeans arrived with far-off visions of skyscrapers and superhighways in their eyes. The sites still hidden in places like the South Carolina wilderness contain a wealth of information about how people lived, even what they ate, and the ways they interacted with the local environment and other cultures.

In this particular case, the imagery comes from LiDAR, which uses light pulses that can penetrate tree canopies to map forest floors. The team taught the computer the shape, size, and texture characteristics of the mounds so it could identify potential sites from the digital 3D datasets that it analyzed.

“The process resulted in several thousand possible features that my colleagues and I checked by hand,” Davis told Singularity Hub. “While not entirely automated, this saved the equivalent of years of manual labor that would have been required for analyzing the whole LiDAR image by hand.”
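Davis’s published workflow is more involved than this (it operates on LiDAR-derived elevation rasters), but the general pattern he describes, training a model on the shape, size, and texture of known mounds and then hand-checking its candidate detections, can be sketched roughly as follows. The feature set, the synthetic numbers, and the random-forest choice are illustrative assumptions on my part, not the study’s actual pipeline.

```python
# Rough sketch of site detection from LiDAR-derived features: train on known
# mounds vs. background, then flag candidate locations for manual checking.
# Features, numbers, and model choice are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Each row: [relative height (m), footprint area (m^2), roundness, roughness]
# In practice these would be measured from segmented LiDAR elevation rasters.
known_mounds = rng.normal([2.0, 800.0, 0.8, 0.3], [0.5, 200.0, 0.1, 0.1], (50, 4))
background   = rng.normal([0.3, 300.0, 0.4, 0.6], [0.3, 250.0, 0.2, 0.2], (500, 4))

X = np.vstack([known_mounds, background])
y = np.array([1] * len(known_mounds) + [0] * len(background))

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Score every candidate segment across the survey area; anything above a loose
# threshold goes to an archaeologist for manual verification.
candidates = rng.normal([1.0, 500.0, 0.6, 0.45], [0.8, 300.0, 0.2, 0.2], (1000, 4))
scores = clf.predict_proba(candidates)[:, 1]
print(f"{(scores > 0.5).sum()} of {len(candidates)} segments flagged for review")
```

The design point is the division of labor: the model cheaply narrows thousands of square kilometers down to a shortlist, and humans do the slow, careful verification only on that shortlist.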

In Madagascar—where Davis is studying human settlement history across the world’s fourth largest island over a timescale of millennia—he developed a predictive algorithm to help locate archaeological sites using freely available satellite imagery. His team was able to survey and identify more than 70 new archaeological sites—and potentially hundreds more—across an area of more than 1,000 square kilometers during the course of about a year.

Machines Learning From the Past Prepare Us for the Future
One impetus behind the rapid identification of archaeological sites is that many are under threat from climate change, such as coastal erosion from sea level rise, or other human impacts. Meanwhile, traditional archaeological approaches are expensive and laborious—serious handicaps in a race against time.

“It is imperative to record as many archaeological sites as we can in a short period of time. That is why AI and machine learning are useful for my research,” Davis said.

Studying the rise and fall of past civilizations can also teach modern humans a thing or two about how to grapple with these current challenges.

Researchers at the Institut Català d’Arqueologia Clàssica (ICAC) turned to machine-learning algorithms to reconstruct more than 20,000 kilometers of paleo-rivers across the former Indus Valley civilization, in what is now part of modern Pakistan and India. Such AI-powered mapping techniques wouldn’t be possible using satellite images alone.

That effort helped locate many previously unknown archaeological sites and unlocked new insights into those Bronze Age cultures. However, the analytics can also assist governments with important water resource management today, according to Hèctor A. Orengo Romeu, co-director of the Landscape Archaeology Research Group at ICAC.

“Our analyses can contribute to the forecasts of the evolution of aquifers in the area and provide valuable information on aspects such as the variability of agricultural productivity or the influence of climate change on the expansion of the Thar desert, in addition to providing cultural management tools to the government,” he said.

Leveraging AI for Language and Lots More
While landscape archaeology is one major application of AI in archaeology, it’s far from the only one. In 2000, only about a half-dozen scientific papers referred to the use of AI in archaeology, according to the Web of Science, reputedly the world’s largest global citation database. Last year, more than 65 papers were published concerning the use of machine intelligence technologies in archaeology, with a significant uptick beginning in 2015.

AI methods, for instance, are being used to understand the chemical makeup of artifacts like pottery and ceramics, according to Davis. “This can help identify where these materials were made and how far they were transported. It can also help us to understand the extent of past trading networks.”

Linguistic anthropologists have also used machine intelligence methods to trace the evolution of different languages, Davis said. “Using AI, we can learn when and where languages emerged around the world.”

In other cases, AI has helped reconstruct or decipher ancient texts. Last year, researchers at Google’s DeepMind used a deep neural network called PYTHIA to recreate missing inscriptions in ancient Greek from damaged surfaces of objects made of stone or ceramics.

Named after the Oracle at Delphi, PYTHIA “takes a sequence of damaged text as input, and is trained to predict character sequences comprising hypothesised restorations of ancient Greek inscriptions,” the researchers reported.

In a similar fashion, Chinese scientists applied a convolutional neural network (CNN) to untangle another ancient tongue once found on turtle shells and ox bones. The CNN managed to classify oracle bone morphology in order to piece together fragments of these divination objects, some with inscriptions that represent the earliest evidence of China’s recorded history.

“Differentiating the materials of oracle bones is one of the most basic steps for oracle bone morphology—we need to first make sure we don’t assemble pieces of ox bones with tortoise shells,” lead author of the study, associate professor Shanxiong Chen at China’s Southwest University, told Synced, an online tech publication in China.

AI Helps Archaeologists Get the Scoop…
And then there are applications of AI in archaeology that are simply … interesting. Just last month, researchers published a paper about a machine learning method trained to differentiate between human and canine paleofeces.

The algorithm, dubbed CoproID, compares the gut microbiome DNA found in the ancient material with DNA found in modern feces, enabling it to get the scoop on the origin of the poop.

Also known as coprolites, paleo-feces from humans and dogs are often found in the same archaeological sites. Scientists need to know which is which if they’re trying to understand something like past diets or disease.

“CoproID is the first line of identification in coprolite analysis to confirm that what we’re looking for is actually human, or a dog if we’re interested in dogs,” Maxime Borry, a bioinformatics PhD student at the Max Planck Institute for the Science of Human History, told Vice.

…But Machine Intelligence Is Just Another Tool
There is obviously quite a bit of work that can be automated through AI. But there’s no reason for archaeologists to hit the unemployment line any time soon. There are also plenty of instances where machines can’t yet match humans in identifying objects or patterns. At other times, it’s just faster doing the analysis yourself, Davis noted.

“For ‘big data’ tasks like detecting archaeological materials over a continental scale, AI is useful,” he said. “But for some tasks, it is sometimes more time-consuming to train an entire computer algorithm to complete a task that you can do on your own in an hour.”

Still, there’s no telling what the future will hold for studying the past using artificial intelligence.

“We have already started to see real improvements in the accuracy and reliability of these approaches, but there is a lot more to do,” Davis said. “Hopefully, we start to see these methods being directly applied to a variety of interesting questions around the world, as these methods can produce datasets that would have been impossible a few decades ago.”

Image Credit: James Wheeler from Pixabay


#437103 How to Make Sense of Uncertainty in a ...

As the internet churns with information about Covid-19, about the virus that causes the disease, and about what we’re supposed to do to fight it, it can be difficult to see the forest for the trees. What can we realistically expect for the rest of 2020? And how do we even know what’s realistic?

Today, humanity’s primary, ideal goal is to eliminate the virus, SARS-CoV-2, and Covid-19. Our second-choice goal is to control virus transmission. Either way, we have three big aims: to save lives, to return to public life, and to keep the economy functioning.

To hit our second-choice goal—and maybe even our primary goal—countries are pursuing five major public health strategies. Note that many of these advances cross-fertilize: for example, advances in virus testing and antibody testing will drive data-based prevention efforts.

Five major public health strategies are underway to bring Covid-19 under control and to contain the spread of SARS-CoV-2.

These strategies arise from things we can control based on the things that we know at any given moment. But what about the things we can’t control and don’t yet know?

The biology of the virus and how it interacts with our bodies is what it is, so we should seek to understand it as thoroughly as possible. How long any immunity gained from prior infection lasts—and indeed whether people develop meaningful immunity at all after infection—are open questions urgently in need of greater clarity. Similarly, right now it’s important to focus on understanding rather than making assumptions about environmental factors like seasonality.

But the biggest question on everyone’s lips is, “When?” When will we see therapeutic progress against Covid-19? And when will life get “back to normal”? There are lots of models out there on the internet; which of those models are right? The simple answer is “none of them.” That’s right—it’s almost certain that every model you’ve seen is wrong in at least one detail, if not all of them. But modeling is meant to be a tool for deeper thinking, a way to run mental (and computational) experiments before—and while—taking action. As George E. P. Box famously wrote in 1976, “All models are wrong, but some are useful.”
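To make Box’s point concrete, here is a deliberately crude compartmental (SIR) model in Python. It is not one of the models circulating online, and every parameter is made up; the only lesson it offers is how strongly the outcome swings when one assumption, the effective contact rate, changes.

```python
# Toy SIR epidemic model: wrong in many details, useful for building intuition
# about how sensitive outcomes are to assumptions. Parameters are made up.

def sir_peak_infected(beta, gamma=0.1, population=1_000_000, days=365):
    """Integrate a discrete-time SIR model and return the peak number infected."""
    s, i, r = population - 1.0, 1.0, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population  # beta: effective contact rate
        new_recoveries = gamma * i                  # gamma: recovery rate
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# Small changes in the contact rate (e.g., from distancing) shift the peak a lot.
for beta in (0.12, 0.2, 0.3):
    print(f"beta={beta:.2f}  peak infected ~= {sir_peak_infected(beta):,.0f}")
```

That sensitivity is exactly why published projections disagree so widely, and why they are better treated as thinking tools than as predictions.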

Here, we’re seeking useful insights, as opposed to exact predictions, which is why we’re pulling back from quantitative details to get at the mindsets that will support agency and hope. To that end, I’ve been putting together timelines that I believe will yield useful expectations for the next year or two—and asking how optimistic I need to be in order to believe a particular timeline.

For a moderately optimistic scenario to be relevant, breakthroughs in science and technology come at the paces expected based on previous efforts, and assumptions turn out to be basically correct; accessibility of those breakthroughs increases at a reasonable pace; regulation achieves its desired effects, without major surprises; and compliance with regulations is reasonably high.

In contrast, if I’m being highly optimistic, breakthroughs in science and technology and their accessibility come more quickly than they ever have before; regulation is evidence-based and successful in the first try or two; and compliance with those regulations is high and uniform. If I’m feeling not-so-optimistic, then I anticipate serious setbacks to breakthroughs and accessibility (with the overturning of many important assumptions), repeated failure of regulations to achieve their desired outcomes, and low compliance with those regulations.

The following scenarios outline the things that need to happen in the fight against Covid-19, when I expect to see them, and how confident I feel in those expectations. They focus on North America and Europe because there are data missing about China’s 2019 outbreak and other regions are still early in their outbreaks. Perhaps the most important thing to keep in mind throughout: We know more today than we did yesterday, but we still have much to learn. New knowledge derived from greater study and debate will almost certainly inspire ongoing course corrections.

As you dive into the scenarios below, practice these three mindset shifts. First, defeating Covid-19 will be a marathon, not a sprint. We shouldn’t expect life to look like 2019 for the next year or two—if ever. As Ed Yong wrote recently in The Atlantic, “There won’t be an obvious moment when everything is under control and regular life can safely resume.” Second, remember that you have important things to do for at least a year. And third, we are all in this together. There is no “us” and “them.” We must all be alert, responsive, generous, and strong throughout 2020 and 2021—and willing to throw away our assumptions when scientific evidence invalidates them.

The Middle Way: Moderate Optimism
Let’s start with the case in which I have the most confidence: moderate optimism.

This timeline considers milestones through late 2021, the earliest that I believe vaccines will become available. The “normal” timeline for developing a vaccine for diseases like seasonal flu is 18 months, which leads to my projection that we could potentially have vaccines as soon as 18 months from the first quarter of 2020. While Melinda Gates agrees with that projection, others (including AI) believe that 3 to 5 years is far more realistic, based on past vaccine development and the need to test safety and efficacy in humans. However, repurposing existing vaccines against other diseases—or piggybacking off clever synthetic platforms—could lead to vaccines being available sooner. I tried to balance these considerations for this moderately optimistic scenario. Either way, deploying vaccines at the end of 2021 is probably much later than you may have been led to believe by the hype engine. Again, if you take away only one message from this article, remember that the fight against Covid-19 is a marathon, not a sprint.

Here, I’ve visualized a moderately optimistic scenario as a baseline. Think of these timelines as living guides, as opposed to exact predictions. There are still many unknowns. More or less optimistic views (see below) and new information could shift these timelines forward or back and change the details of the strategies.

Based on current data, I expect that the first wave of Covid-19 cases (where we are now) will continue to subside in many areas, leading governments to ease restrictions in an effort to get people back to work. We’re already seeing movement in that direction, with a variety of benchmarks and changes at state and country levels around the world. But depending on the details of the changes, easing restrictions will probably cause a second wave of sickness (see Germany and Singapore), which should lead governments to reimpose at least some restrictions.

In tandem, therapeutic efforts will be transitioning from emergency treatments to treatments that have been approved based on safety and efficacy data in clinical trials. In a moderately optimistic scenario, assuming clinical trials currently underway yield at least a few positive results, this shift to mostly approved therapies could happen as early as the third or fourth quarter of this year and continue from there. One approval that should come rather quickly is for plasma therapies, in which the blood from people who have recovered from Covid-19 is used as a source of antibodies for people who are currently sick.

Companies around the world are working on both viral and antibody testing, focusing on speed, accuracy, reliability, and wide accessibility. While these tests are currently being run in hospitals and research laboratories, at-home testing is a critical component of the mass testing we’ll need to keep viral spread in check. These are needed to minimize the impact of asymptomatic cases, test the assumption that infection yields resistance to subsequent infection (and whether it lasts), and construct potential immunity passports if this assumption holds. Testing is also needed for contact tracing efforts to prevent further spread and get people back to public life. Finally, it’s crucial to our fundamental understanding of the biology of SARS-CoV-2 and Covid-19.

We need tests that are very reliable, both in the clinic and at home. So, don’t go buying any at-home test kits just yet, even if you find them online. Wait for reliable test kits and deeper understanding of how a test result translates to everyday realities. If we’re moderately optimistic, in-clinic testing will rapidly expand this quarter and/or next, with the possibility of broadly available, high-quality at-home sampling (and perhaps even analysis) thereafter.

Note that testing is not likely to be a “one-and-done” endeavor, as a person’s infection and immunity status change over time. Expect to be testing yourself—and your family—often as we move later into 2020.

Testing data are also going to inform distancing requirements at the country and local levels. In this scenario, restrictions—at some level of stringency—could persist at least through the end of 2020, as most countries are way behind the curve on testing (Iceland is an informative exception). Governments will likely continue to ask citizens to work from home if at all possible; to wear masks or face coverings in public; to employ heightened hygiene and social distancing in workplaces; and to restrict travel and social gatherings. So while it’s likely we’ll be eating in local restaurants again in 2020 in this scenario, at least for a little while, it’s not likely we’ll be heading to big concerts any time soon.

The Extremes: High and Low Optimism
How would high and low levels of optimism change our moderately optimistic timeline? The milestones are the same, but the time required to achieve them is shorter or longer, respectively. Quantifying these shifts is less important than acknowledging and incorporating a range of possibilities into our view. It pays to pay attention to our own biases. Here are a few examples of reasonable possibilities that could shift the moderately optimistic timeline.

When vaccines become available
Vaccine repurposing could shorten the time for vaccines to become available; today, many vaccine candidates are in various stages of testing. On the other hand, difficulties in manufacture and distribution, or faster-than-expected mutation of SARS-CoV-2, could slow vaccine development. Given what we know now, I am not strongly concerned about either of these possibilities—drug companies are rapidly expanding their capabilities, and viral mutation isn’t an urgent concern at this time based on sequencing data—but they could happen.

At first, governments will likely supply vaccines to essential workers such as healthcare workers, but it is essential that vaccines become widely available around the world as quickly and as safely as possible. Overall, I suggest a dose of skepticism when reading highly optimistic claims about a vaccine (or multiple vaccines) being available in 2020. Remember, a vaccine is a knockout punch, not a first line of defense for an outbreak.

When testing hits its stride
While I am confident that testing is a critical component of our response to Covid-19, reliability is incredibly important to testing for SARS-CoV-2 and for immunity to the disease, particularly at home. For an individual, a false negative (being told you don’t have antibodies when you really do) could be just as bad as a false positive (being told you do have antibodies when you really don’t). Those errors are compounded when governments are trying to make evidence-based policies for social and physical distancing.
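To see why reliability matters so much, it helps to run the numbers. The short Python sketch below applies Bayes’ rule to a hypothetical antibody test; the sensitivity, specificity, and prevalence figures are invented for illustration, not drawn from any real assay.

```python
# How often is a positive antibody test actually right? A quick Bayes' rule
# calculation with hypothetical numbers (not real test characteristics).

def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(truly have antibodies | test says positive)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A seemingly good test (95% sensitive, 95% specific) in a population where
# only 3% of people have actually been infected:
ppv = positive_predictive_value(sensitivity=0.95, specificity=0.95, prevalence=0.03)
print(f"Chance a positive result is real: {ppv:.0%}")  # roughly 37%
```

At low prevalence, even a seemingly accurate test yields mostly false positives, which is precisely the kind of error that compounds when governments build distancing policy on top of the results.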

If you’re highly optimistic, high-quality testing will ramp up quickly as companies and scientists innovate rapidly by cleverly combining multiple test modalities, digital signals, and cutting-edge tech like CRISPR. Pop-up testing labs could also take some pressure off hospitals and clinics.

If things don’t go well, reliability issues could hinder testing, manufacturing bottlenecks could limit availability, and both could hamstring efforts to control spread and ease restrictions. And if it turns out that immunity to Covid-19 isn’t working the way we assumed, then we must revisit our assumptions about our path(s) back to public life, as well as our vaccine-development strategies.

How quickly safe and effective treatments appear
Drug development is known to be long, costly, and fraught with failure. It’s not uncommon to see hope in a drug spike early only to be dashed later on down the road. With that in mind, the number of treatments currently under investigation is astonishing, as is the speed through which they’re proceeding through testing. Breakthroughs in a therapeutic area—for example in treating the seriously ill or in reducing viral spread after an infection takes hold—could motivate changes in the focus of distancing regulations.

While speed will save lives, we cannot overlook the importance of knowing a treatment’s efficacy (does it work against Covid-19?) and safety (does it make you sick in a different, or worse, way?). Repurposing drugs that have already been tested for other diseases is speeding innovation here, as is artificial intelligence.

Remarkable collaborations among governments and companies, large and small, are driving innovation in therapeutics and devices such as ventilators for treating the sick.

Whether government policies are effective and responsive
Those of us who have experienced lockdown are eager for it to be over. Businesses, economists, and governments are also eager to relieve the terrible pressure that is being exerted on the global economy. However, lifting restrictions will almost certainly lead to a resurgence in sickness.

Here, the future is hard to model because there are many, many factors at play, and at play differently in different places—including the extent to which individuals actually comply with regulations.

Reliable testing—both in the clinic and at home—is crucial to designing and implementing restrictions, monitoring their effectiveness, and updating them; delays in reliable testing could seriously hamper this design cycle. Lack of trust in governments and/or companies could also suppress uptake. That said, systems are already in place for contact tracing in East Asia. Other governments could learn important lessons, but must also earn—and keep—their citizens’ trust.

Expect to see restrictions descend and then lift in response to changes in the number of Covid-19 cases and in the effectiveness of our prevention strategies. Also expect country-specific and perhaps even area-specific responses that differ from each other. The benefit of this approach? Governments around the world are running perhaps hundreds of real-time experiments and design cycles in balancing health and the economy, and we can learn from the results.

A Way Out
As Jeremy Farrar, head of the Wellcome Trust, told Science magazine, “Science is the exit strategy.” Some of our greatest technological assistance is coming from artificial intelligence, digital tools for collaboration, and advances in biotechnology.

Our exit strategy also needs to include empathy and future visioning—because in the midst of this crisis, we are breaking ground for a new, post-Covid future.

What do we want that future to look like? How will the hard choices we make now about data ethics impact the future of surveillance? Will we continue to embrace inclusiveness and mass collaboration? Perhaps most importantly, will we lay the foundation for successfully confronting future challenges? Whether we’re thinking about the next pandemic (and there will be others) or the cascade of catastrophes that climate change is bringing ever closer—it’s important to remember that we all have the power to become agents of that change.

Special thanks to Ola Kowalewski and Jason Dorrier for significant conversations.

Image Credit: Drew Beamer / Unsplash
